Diversity and coevolutionary dynamics in high-dimensional phenotype spaces
Michael Doebeli^∗ & Iaroslav Ispolatov^∗∗
^∗ Departments of Zoology and Mathematics, University
of British Columbia,
6270 University Boulevard, Vancouver B.C. Canada, V6T 1Z4; doebeli@zoology.ubc.ca
^∗∗ Departamento de Fisica, Universidad de Santiago de Chile
Casilla 302, Correo 2, Santiago, Chile; jaros007@gmail.com
Supporting Material: Appendix A
RH: Diversity and coevolutionary dynamics
Corresponding author: Michael Doebeli, Department of Zoology and Department of
Mathematics, University of British Columbia, 6270 University Boulevard, Vancouver B.C. Canada, V6T 1Z4, Email: doebeli@zoology.ubc.ca.
Abstract
We study macroevolutionary dynamics by extending microevolutionary competition models to long time scales. It has been shown that for a general class of competition models, gradual evolutionary change in continuous phenotypes (evolutionary dynamics) can be non-stationary and even chaotic when the dimension of the phenotype space in which the evolutionary dynamics unfold is high. It has also been shown that evolutionary diversification can occur along non-equilibrium trajectories in phenotype space.
We combine these lines of thinking by studying long-term coevolutionary dynamics of emerging lineages in multi-dimensional phenotype spaces. We use a statistical approach to investigate the evolutionary dynamics of many different systems. We find: 1) for a given dimension of phenotype space, the coevolutionary dynamics tends to be fast and non-stationary for an intermediate number of coexisting lineages, but tends to stabilize as the evolving communities reach a saturation level of diversity; and 2) the amount of diversity at the saturation level increases rapidly (exponentially) with the dimension of phenotype space.
These results have implications for theoretical perspectives on major macroevolutionary patterns such as adaptive radiation, long-term temporal patterns of phenotypic changes, and the evolution of diversity.
Keywords: Long-term evolution | Diversity and stability | Adaptive radiation
§ INTRODUCTION
One of the fundamental problems in evolutionary biology is to understand how microevolutionary processes generate macroevolutionary patterns. In particular, the emergence of macroevolutionary changes in the speed of evolution <cit.>, and of macroevolutionary changes in patterns of species diversity <cit.> have long been of great interest. For example, <cit.> have recently proposed that over macroevolutionary time scales, relatively short intermittent bursts of high rates of evolutionary change should alternate with long periods of bounded phenotypic fluctuations.
Also, there is much discussion about whether species diversity saturates over evolutionary time in a given environment <cit.>. Phylogenetic analysis has been used to shed light on these questions <cit.>, but mechanistic models in which short-term ecological interactions are extrapolated to yield long-term patterns of diversity and evolutionary change have only recently been developed. Most of these models have been used to study the long-term evolution of diversity by analyzing processes of community assembly emerging from short-term ecological dynamics <cit.>.
In particular, these papers have mainly focussed on how diversity changes over time, but not on how the nature of the coevolutionary dynamics of a given set of coexisting species changes as the diversity changes. In fact, in all these models, the evolutionary dynamics for a fixed amount of diversity, i.e., for a given set of species, converge to an equilibrium. However, if one wants to understand macroevolutionary changes in the “tempo and mode” <cit.> of evolution, one not only needs to consider how diversity changes over evolutionary time, but also how such changes in diversity affect the nature of evolutionary dynamics <cit.>. Indeed, there is evidence from evolution experiments with microbes that evolutionary dynamics in more diverse communities are qualitatively different from the evolutionary dynamics in less diverse communities <cit.>. Here we present a theoretical investigation of the question of how diversity affects the complexity of coevolutionary dynamics.
In general, the number of different phenotypes that affect ecological and evolutionary processes is an important quantity. For example, determining the dimensionality of niche space in ecological food webs is a classical problem <cit.>, and it has recently been shown that including more phenotypic dimensions in models for community assembly has a strong effect on the structure of the emerging food webs <cit.>. Implicitly, the importance of the dimension of phenotype space is also acknowledged in phylogenetic research through the notion of “adaptive zones” <cit.>. In particular, it is thought that much of the extant diversity has evolved as a consequence of lineages entering new adaptive zones, which can be interpreted from the phenotypic perspective as an increase in the dimension of phenotype space. In general, given the large number of phenotypic properties that determine an individual's life history and ecology in almost any species, one would expect that ecological interactions are generally determined by many phenotypic properties, and that selection pressures emerging from ecological interactions in turn affect many phenotypes simultaneously.
For example, comprehensive modelling of the metabolic network in E. coli cells comprises more than 2000 reactions <cit.>. These reactions are in turn controlled by thousands of genes in a complicated interaction network whose exact workings are largely unknown. Nevertheless, many of the genes contributing to this network of metabolic reactions will be under selection in any given environmental setting, and as a consequence, a large number of phenotypic properties have the potential to undergo evolutionary change. It is generally not known how exactly these phenotypic properties impinge on birth and death rates of individual organisms, and hence what exactly the ecological selection pressures are on these properties. Nevertheless, it seems clear that in general, many phenotypes will evolve at the same time, i.e., that evolution generally takes place in high-dimensional phenotype spaces.
We have recently argued that if evolution takes place in high-dimensional phenotype spaces, then the evolutionary dynamics, that is, the phenotypic change over evolutionary time, can be very complicated, i.e., non-stationary and often chaotic <cit.>. In low-dimensional phenotype spaces, non-equilibrium evolutionary dynamics are less likely. However, if a species evolving on a simple attractor gives rise to diversification, the effective dimensionality of the evolving system increases, as the species that emerge from diversification coevolve, driven by both intra- and interspecific ecological interactions.
Thus the total dimensionality of the resulting dynamical system describing multispecies coevolution is the number of species times the dimensionality of the phenotype space in which each species evolves. Based on our earlier results <cit.>, one could then expect that due to the increase in dimensionality, diversification leads to more complicated evolutionary dynamics in each of the coevolving species. On the other hand, as a multispecies community becomes more diverse and evolves towards saturation, the available niches tend to get filled, and hence evolutionary change has to become highly coordinated between interacting species and thus constrained, potentially leading to simplified evolutionary dynamics. It is thus unclear how the nature of the evolutionary dynamics changes as the pattern of diversity changes during community assembly.
We investigate these issues by applying the framework of adaptive dynamics <cit.> to a general class of competition models. The main question we address is, how does the complexity of long-term coevolutionary dynamics depend on the diversity of the coevolving community? We show that in low-dimensional phenotype spaces, there is a hump-shaped relationship between diversity and the complexity of evolutionary dynamics: in communities with low diversity, coevolutionary dynamics are often simple, i.e., stationary in the long-time limit; for intermediate degrees of diversity, non-stationary (complex) coevolutionary dynamics are common, and each of the species in the community evolves on a complicated trajectory in phenotype space; and for high amounts of diversity, coevolutionary dynamics become simple again, i.e., stationary. In particular, as communities reach diversity saturation, e.g. through adaptive diversification <cit.>, coevolutionary dynamics change from complex to simple.
Our results are relevant for a number of issues concerning patterns of macroevolution. For example, the results suggest that during processes of adaptive radiation <cit.>, evolutionary dynamics are more complicated early in the radiation than late in the radiation, a pattern that corresponds to the “early-burst” perspective of macroevolution that has attracted much attention in recent years <cit.>. Our results also show that the level at which diversity saturates depends on the dimensionality of phenotype space, with higher dimensions allowing for more diversity. This observation is in accordance with data from radiations in fishes <cit.> and points to the possibility of a microevolutionary mechanism for the “blunderbuss theory” of temporal patterns of macroevolutionary changes and diversification <cit.>: if evolution operates on the dimension of phenotype space on a very slow time scale, then on shorter time scales diversity may saturate and thereby generate relatively stationary evolutionary dynamics, whereas on longer time scales the dimension of phenotype space may increase, e.g. due to gene duplications, thus generating a new burst of non-equilibrium (co-)evolutionary dynamics until the diversity reaches a new saturation level. Such patterns of intermittent bursts have recently been found in the phylogenies of birds and echinoids <cit.>, and the bursts have been attributed to the evolution of flight capabilities and of novel feeding techniques, respectively, both of which can be interpreted as an increase in the dimensionality of the relevant phenotype space. This perspective may also shed light on the question of whether diversity saturates or not <cit.>: diversity may saturate for a given dimension of phenotype space, but evolutionary innovation in the form of new phenotypic dimensions may intermittently generate room for additional bouts of evolutionary diversification.
§ METHODS
§.§ Single-cluster adaptive dynamics
As in <cit.>, we study a general class of models for frequency-dependent competition in which ecological interactions are determined by d-dimensional phenotypes, where d≥1. For simplicity, we consider homogeneous systems, so no spatial coordinates are included. The ecological interactions are described by a competition kernel α(𝐱, 𝐲) and by a carrying capacity K(𝐱), where 𝐱,𝐲∈ℝ^d are the d-dimensional continuous phenotypes of competing individuals. The competition kernel α measures the competitive impact that an individual of phenotype 𝐱 has on an individual of phenotype 𝐲, and we assume that α(𝐱, 𝐱)=1 for all 𝐱. Assuming logistic ecological dynamics, K(𝐱) is then the equilibrium density of a population that is monomorphic for phenotype 𝐱. The adaptive dynamics of the phenotype 𝐱 is a system of differential equations for d𝐱/dt. To derive the adaptive dynamics, one defines the invasion fitness f(𝐱, 𝐲) as the per capita growth rate of a rare mutant phenotype 𝐲 in the monomorphic resident 𝐱 population that is at its ecological equilibrium K(𝐱):
f(𝐱, 𝐲) = 1 - α(𝐱, 𝐲) K(𝐱)/K(𝐲).
The expression for the invasion fitness reflects the fact that the growth rate of the mutant 𝐲 is negatively affected by the effective density experienced by the mutant, α(𝐱, 𝐲) K(𝐱), discounted by the carrying capacity K(𝐲) of the mutant (see <cit.> for more details). Note that f(𝐱, 𝐱)=0 for all 𝐱. The invasion fitness f(𝐱, 𝐲) gives rise to the selection gradients in the i=1,...,d phenotypic components:
s_i(𝐱) ≡ ∂ f(𝐱, 𝐲)/∂ y_i |_𝐲=𝐱 = - ∂α(𝐱, 𝐲)/∂ y_i |_𝐲=𝐱 + 1/K(𝐱)∂ K(𝐱)/∂ x_i, i=1,...,d.
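This expression follows directly by differentiating the invasion fitness (<ref>) with the quotient rule (a short intermediate step, added here for clarity):
∂ f(𝐱, 𝐲)/∂ y_i = - ∂α(𝐱, 𝐲)/∂ y_i · K(𝐱)/K(𝐲) + α(𝐱, 𝐲) K(𝐱)/K(𝐲)^2 · ∂ K(𝐲)/∂ y_i.
Evaluating at 𝐲=𝐱 and using α(𝐱,𝐱)=1 and K(𝐲)=K(𝐱) reduces this to the selection gradient above.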
The selection gradients in turn define the adaptive dynamics as a system of differential equations on phenotype space ℝ^d, which is given by
d𝐱/dt = 𝐌(𝐱)·𝐬(𝐱).
Here 𝐬(𝐱) is the column vector (s_1(𝐱),...,s_d(𝐱)), and 𝐌(𝐱) is the mutational variance-covariance matrix. In this matrix, the diagonal elements contain information about the size and rate of mutations in each of the phenotypic dimensions, whereas the off-diagonal elements contain information about the covariance between mutations in two different phenotypic dimensions. This matrix essentially captures “evolvability” of a population and generally depends on the current resident phenotype 𝐱, and influences the speed and direction of evolution. For simplicity, we assume here that this matrix is the identity matrix. For more details on the derivation of the adaptive dynamics (<ref>) we refer to a large body of primary literature (e.g. <cit.>). We note that the adaptive dynamics (<ref>) can be derived analytically as a large-population limit of an underlying stochastic, individual-based model that is again defined based on the competition kernel α(𝐱, 𝐲) and the carrying capacity K(𝐱) <cit.>.
Specifically, here we consider a class of systems that are defined by competition kernels of the form
α(𝐱,𝐲)=exp [∑_i,j=1^d b_ij(x_i-y_i)x_j - ∑_i=1^d(x_i-y_i)^2/2σ_i^2].
Here the coefficients b_ij in the first sum on the right hand side are arbitrary and correspond to the simplest form of a generic, non-symmetric competition kernel that can generate non-stationary evolutionary dynamics. It can be interpreted as the lowest-order (non-trivial) term from a Taylor expansion of an unknown non-symmetric competition function. Adaptive dynamics of asymmetric competition has been studied quite extensively (e.g. <cit.>), and is necessary to generate single-species non-equilibrium dynamics in high-dimensional phenotype spaces <cit.>.
The second sum on the right hand side represents “Gaussian competition”, according to which the competitive impact between individuals increases with phenotypic similarity between the competing individuals. The parameters σ_i measure how fast the effect of competition declines as phenotypic distance in the i-component increases. For the carrying capacity we assume
K(𝐱)=exp(-∑_i=1^d x_i^4/4).
This implies that the carrying capacity imposes an element of stabilizing selection for the phenotype 𝐱=0, at which the carrying capacity is maximal. Thus, the frequency-dependent component of selection is generated by the competition kernel, whereas the frequency-independent component of selection is due to the carrying capacity. With these assumptions, the adaptive dynamics (<ref>) become
d x_i/dt=∑_j=1^d b_ij x_j - x_i^3, i=1,...,d.
We note that the terms -x_i^3 in (<ref>) are due to the carrying capacity and serve to contain the trajectories of (<ref>) in a bounded domain of phenotype space. Also, the Gaussian part of the competition kernel does not affect the adaptive dynamics of monomorphic populations, i.e., the σ_i do not appear in (<ref>), because the Gaussian part always has a maximum at the current resident, and hence the corresponding first derivative in the selection gradient (<ref>) is 0.
The system of ODEs (<ref>) describes the trajectory of an evolving monomorphic population in phenotype space ℝ^d. In <cit.> we have shown that for general competition kernels such trajectories can be very complicated, particularly when the dimension d is large. With complex evolutionary dynamics, trajectories can be quasi-periodic or chaotic, and typically visit many different regions of phenotype space over evolutionary time. When d is low the dynamics tend to be simpler, and often converge to an equilibrium attractor. We can assess the likelihood of equilibrium dynamics for a given dimension d by choosing the d^2 coefficients b_ij in (<ref>) randomly and independently, e.g. from a normal distribution with mean 0 and variance 1, solving the resulting adaptive dynamics (<ref>) and checking whether it converges to an equilibrium. If this is done repeatedly, we can approximate the probability of equilibrium dynamics as the fraction of runs that converged to an equilibrium. For d=1 the probability of equilibrium dynamics is of course 1, and for d=2,3,4, the resulting probabilities of equilibrium dynamics are approximately 85%, 81% and 74%, respectively. These are the dimensions that we will primarily use in the analysis presented below, but we note that the probability of equilibrium dynamics goes to 0 for large d <cit.>.
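This procedure is easy to reproduce. The following minimal Python sketch (our illustration, not the authors' code; the integration time, convergence tolerance, and trial count are assumptions) draws random coefficients b_ij, integrates the single-cluster dynamics (<ref>), and classifies a run as an equilibrium if the phenotype has essentially stopped moving:

```python
# Hedged sketch: estimate the probability of equilibrium dynamics in dimension d
# by integrating dx_i/dt = sum_j b_ij x_j - x_i^3 for many random matrices b.
# t_end, tol, and trials are illustrative choices, not the authors' exact values.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x, b):
    """Single-cluster adaptive dynamics: dx_i/dt = sum_j b_ij x_j - x_i^3."""
    return b @ x - x**3

def fraction_equilibrium(d, trials=100, t_end=500.0, tol=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    n_eq = 0
    for _ in range(trials):
        b = rng.normal(0.0, 1.0, size=(d, d))   # b_ij i.i.d. N(0, 1)
        x0 = rng.normal(0.0, 0.1, size=d)       # start near the carrying-capacity maximum
        sol = solve_ivp(rhs, (0.0, t_end), x0, args=(b,), rtol=1e-8)
        speed = np.linalg.norm(rhs(0.0, sol.y[:, -1], b))
        if speed < tol:                         # selection gradient has vanished: equilibrium
            n_eq += 1
    return n_eq / trials

if __name__ == "__main__":
    for d in (2, 3, 4):
        # the text reports ~85%, 81%, 74% for d = 2, 3, 4; exact values
        # will depend on the convergence criterion used here
        print(d, fraction_equilibrium(d))
```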
§.§ Multi-cluster adaptive dynamics
Here we are interested in the question of how diversification and subsequent coexistence of species (also called phenotypic clusters or simply clusters throughout the text) affects the evolutionary dynamics. While the Gaussian term in the competition kernel (<ref>) does not affect the adaptive dynamics of single monomorphic populations, this term is crucial for determining whether evolutionary diversification occurs. For one-dimensional phenotype spaces (d=1) this is very well known and is encapsulated in the concept of evolutionary branching <cit.>. An evolutionary branching point is an equilibrium point of (<ref>) that is both an attractor for the adaptive dynamics and a fitness minimum. The reason that such points exist in the competition models considered here is precisely that the Gaussian term does not affect the adaptive dynamics, but does affect the curvature of the fitness landscape, i.e., the second derivative of the invasion fitness (<ref>). In particular, small enough σ_i's in the Gaussian term will make any equilibrium point a fitness minimum, and hence will give rise to evolutionary diversification. Evolutionary branching in scalar traits has been described in a plethora of different models (for an overview we refer to Eva Kisdi's website at the Department of Mathematics and Statistics at the University of Helsinki, http://www.mv.helsinki.fi/home/kisdi/addyn.htm). In high-dimensional phenotype spaces, equilibrium points of (<ref>) can also be fitness minima along some directions in phenotype space. For this to happen the Hessian matrix of second derivatives of the invasion fitness (<ref>), evaluated at the equilibrium, must have positive eigenvalues. Indeed, in higher dimensional phenotype spaces the conditions for the existence of positive eigenvalues of this Hessian matrix, and hence for diversification, generally become less stringent <cit.>.
Importantly, evolutionary diversification can also occur from non-equilibrium adaptive dynamics trajectories <cit.>. If the adaptive dynamics (<ref>) exhibit non-equilibrium dynamics, the crucial quantity determining whether diversification occurs is again the Hessian matrix of second derivatives of the invasion fitness (<ref>), but now restricted to the subspace of phenotype space that is orthogonal to the selection gradient <cit.>. Essentially, diversification can occur in orthogonal directions in which this Hessian has positive curvature, and hence in which the invasion fitness has a minimum. Because the population is still evolving along the selection gradient, elucidating the exact conditions for diversification requires a careful analysis <cit.>. In the present context, the implication of these results is that, just as with equilibrium adaptive dynamics, diversification can occur along non-equilibrium trajectories of (<ref>) if the σ_i in the Gaussian term of the competition kernel are small enough, i.e., if the frequency dependence generated by Gaussian competition is strong enough <cit.>.
To investigate the process of diversification and the subsequent coevolutionary dynamics, we extend the adaptive dynamics (<ref>) to several coexisting phenotypic clusters as follows. We assume that an evolving community consists of m monomorphic populations, each given by a phenotype 𝐱_r, r=1,...,m, with phenotypic components x_ri, i=1,...d (where d is the dimension of phenotype space). Let N_r be the population density of cluster 𝐱_r. Then the ecological dynamics of the m clusters are given by the system of logistic differential equations
d N_r(t)/d t = N_r(t)( 1 - ∑_s=1^m α(𝐱_s, 𝐱_r) N_s(t)/K(𝐱_r)), r=1,...,m.
Let N_r^*, r=1,...,m denote the equilibrium of system (<ref>) (more generally, for the purposes of deriving the adaptive dynamics, the quantities N_r^* are suitable time averages of population densities over the ecological attractor of (<ref>); however, our extensive numerical simulations indicated that (<ref>) always converges to an equilibrium). Making the traditional adaptive dynamics assumption that ecological dynamics occur on a faster time scale than evolutionary dynamics, we calculate the invasion fitness function in cluster r based on the densities N_r^* of the various clusters:
f(𝐱_1,...,𝐱_m,𝐱_r')=1 - ∑_s=1^m α(𝐱_s,𝐱_r')N_s^*/K(𝐱_r').
Here 𝐱_1,...,𝐱_m describe the phenotypic state of the resident population, and 𝐱_r' denotes the mutant trait in cluster r, r=1,...,m.
Taking the derivative of (<ref>) with respect to 𝐱_r' and
evaluating it at the resident, 𝐱_r'=𝐱_r, yields the components of the selection gradient 𝐬_r for the cluster r as:
s_ri = ∑_s N_s^* ( - 1/K(𝐱_r)∂α(𝐱_s,𝐱_r')/∂ x_ri' |_𝐱_r'=𝐱_r + α(𝐱_s,𝐱_r)/K^2(𝐱_r)∂ K(𝐱_r)/∂ x_ri ), i=1,...,d.
For coevolutionary adaptive dynamics, one has to take into account that the rate of mutations in each evolving phenotypic cluster is proportional to the current population size of that cluster <cit.>, and hence that the speed of evolution is influenced by the population size. In the single-cluster system such consideration only rescales time without affecting the geometry of the trajectory and thus is usually ignored. However, in the multi-cluster system, instead of assuming that the mutational process is described by the identity matrix as in (<ref>), we now assume that in each cluster r, the mutational variance-covariance matrix M_r is a diagonal matrix with entries N_r^*. This generates the following d· m differential equations describing the adaptive dynamics in the coevolving community:
d x_ri/dt= N_r^* s_ri, i=1,…,d, r=1,…, m.
For the multi-cluster adaptive dynamics, equations (<ref>), (<ref>), and (<ref>) replace their single-cluster analogs (<ref>), (<ref>), and (<ref>).
It is important to note that the Gaussian part of the competition kernel not only affects whether diversification occurs, but in contrast to the adaptive dynamics of single monomorphic populations, the Gaussian term will indeed affect the coevolutionary adaptive dynamics (<ref>) of the phenotypic clusters that coexist after diversification has occurred, because it affects both the ecological dynamics (<ref>) and the selection gradient (<ref>).
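To make the multi-cluster scheme concrete, here is a minimal Python sketch of one coevolutionary step (our own illustration, not the authors' implementation): it builds the competition matrix from the kernel (<ref>), relaxes the ecological dynamics (<ref>) to obtain the N_r^*, and then advances each cluster along a central-difference estimate of the selection gradient (<ref>), scaled by N_r^* as in (<ref>). A common σ_i = σ, the relaxation time, and the step sizes are illustrative assumptions.

```python
# Hedged sketch of one multi-cluster adaptive-dynamics step.
import numpy as np
from scipy.integrate import solve_ivp

def alpha(x, y, b, sigma):
    """Competition kernel: impact of phenotype x on phenotype y (common sigma_i = sigma)."""
    dxy = x - y
    return np.exp(dxy @ b @ x - np.sum(dxy**2) / (2.0 * sigma**2))

def K(x):
    """Carrying capacity, maximal at the origin."""
    return np.exp(-np.sum(x**4) / 4.0)

def eco_equilibrium(X, b, sigma, t_end=100.0):
    """Relax dN_r/dt = N_r (1 - sum_s alpha(x_s, x_r) N_s / K(x_r)) to equilibrium."""
    m = len(X)
    A = np.array([[alpha(X[s], X[r], b, sigma) for r in range(m)] for s in range(m)])
    Kv = np.array([K(x) for x in X])
    sol = solve_ivp(lambda t, N: N * (1.0 - (A.T @ N) / Kv), (0.0, t_end), np.full(m, 0.1))
    return np.clip(sol.y[:, -1], 0.0, None)

def evo_step(X, b, sigma, dt=1e-2, eps=1e-6):
    """One step of dx_ri/dt = N_r^* s_ri, with s_r estimated by central differences."""
    m, d = X.shape
    Nstar = eco_equilibrium(X, b, sigma)
    def fitness(y):  # invasion fitness of a mutant y in the resident community
        return 1.0 - sum(Nstar[s] * alpha(X[s], y, b, sigma) for s in range(m)) / K(y)
    Xnew = X.copy()
    for r in range(m):
        grad = np.zeros(d)
        for i in range(d):
            e = np.zeros(d); e[i] = eps
            grad[i] = (fitness(X[r] + e) - fitness(X[r] - e)) / (2.0 * eps)
        Xnew[r] += dt * Nstar[r] * grad   # mutational covariance ~ N_r^* times identity
    return Xnew, Nstar
```

Iterating evo_step, together with the cluster merging and splitting rules described in the next subsection, yields the community-assembly loop used in our simulations.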
§.§ Numerical procedure
To study diversification and subsequent multi-cluster adaptive dynamics, we
implemented the following iterative numerical scenario:
Step 1: Each simulation run is initiated with a randomly generated
d × d matrix of the coefficients b_ij for
the competition kernel (<ref>). The coefficients are drawn from a
Gaussian distribution with zero mean and d^-1/2 variance. As
explained in <cit.>, this is done to keep the sum of the d terms
∑_j=1^d b_ij x_j in (<ref>) of order x_i, i.e. independent of d.
Then a certain number of clusters, given by a parameter m_0, each with population size of order 1, are randomly placed
near the phenotype 0, i.e., near the maximum of the carrying capacity.
Step 2: For a given set of phenotypic clusters, the population dynamics of all clusters is solved using the ecological dynamics (<ref>). The system of differential equations is integrated using a
4th-order Runge-Kutta algorithm for ∼ 10^3 time steps of duration dt ∼ 10^-2 to ensure convergence
to the equilibrium (or, in case there is no such convergence, to ensure a correct calculation of the time average of the various population
densities). If the population density of a given
cluster falls below the threshold N_min∼10^-8, the
cluster is eliminated from the system. During the ecological
dynamics the evolutionary dynamics is frozen and evolutionary
time does not advance.
Step 3: After calculating the N_r^*, r=1,...,m (where m is the current number of clusters), the adaptive dynamics of the phenotypes of the clusters is advanced via
(<ref>,<ref>) using a 4th-order Runge-Kutta algorithm with a typical
time step dt ∼ 10^-2, by which the evolutionary time is
advanced as well. After this evolutionary time step, the ecological dynamics are recalculated, potentially preceded by the following step 4, which is only performed if the corresponding time condition is satisfied.
Step 4: The level of diversity, i.e., the number of clusters in the system, is controlled as follows. Every τ_c time units the distances between clusters are assessed.
If the distance between two or more clusters is below a threshold Δx ∼ 10^-3, these
clusters are merged, preserving the total population size of the merged clusters and the position of their centre of mass. Immediately after this comparison step, the total number of
clusters is compared to the target number of clusters, which is given by a system parameter m_max. If the current number of clusters is below m_max, a new
cluster is created by randomly picking an existing cluster,
splitting it in half and separating the two new clusters in a random direction in phenotype space by
the distance of the merging threshold, Δx.
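The merging and splitting bookkeeping of this step can be summarized in a short Python sketch (an illustration under assumed data structures: X is an m × d array of cluster phenotypes and N the corresponding vector of population sizes; the thresholds mirror those above):

```python
# Hedged sketch of the diversity control in step 4: merge clusters closer than
# dx_merge (conserving total population size and centre of mass), then, if the
# community is below the target m_max, split a randomly chosen cluster in half.
import numpy as np

def merge_and_split(X, N, m_max, dx_merge=1e-3, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    X, N = X.copy(), N.copy()
    i = 0
    while i < len(X):                    # pairwise merging pass
        j = i + 1
        while j < len(X):
            if np.linalg.norm(X[i] - X[j]) < dx_merge:
                w = N[i] + N[j]
                X[i] = (N[i] * X[i] + N[j] * X[j]) / w   # centre of mass preserved
                N[i] = w                                  # total population conserved
                X = np.delete(X, j, axis=0)
                N = np.delete(N, j)
            else:
                j += 1
        i += 1
    if len(X) < m_max:                   # seed one new cluster by splitting
        r = rng.integers(len(X))
        u = rng.normal(size=X.shape[1])
        u *= dx_merge / (2.0 * np.linalg.norm(u))         # offspring separated by dx_merge
        child = X[r] + u
        X[r] = X[r] - u
        N[r] = N[r] / 2.0                                 # population split in half
        X = np.vstack([X, child])
        N = np.append(N, N[r])
    return X, N
```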
Step 5: In our simulations, we take measurements at regular time intervals (ranging from τ_m ∼ 1 to 10 time units). One of the main quantities of interest is the average per capita evolutionary speed v in the evolving community, which is the average of the norms of the vectors of trait variation (evolution) rates in each cluster, weighted by the cluster
population size, computed as
v=∑_r=1^mN_r√(∑_i=1^d (d x_ri/dt)^2)/∑_r=1^m N_r
This quantity is a strong indicator of the nature of the evolutionary dynamics of the coevolving system. In particular, our very extensive numerical simulations indicate that when the average speed falls below 10^-2, then the system eventually exhibits equilibrium evolutionary dynamics. In contrast, when the average evolutionary speed remains high, the coevolving system tends to exhibit complicated, non-equilibrium dynamics, with the majority of the clusters exhibiting large fluctuations in phenotype space over evolutionary time. An example of such non-equilibrium coevolution is given in the next section. Other measurements include
the position and population size of all clusters in the system,
and the number of “distinct” clusters
separated by a “visible” distance ΔX=0.1. These measurements can also be averaged over time.
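For concreteness, the speed measurement transcribes directly into code (a sketch assuming dXdt is the m × d array of current phenotypic velocities and N the vector of population sizes):

```python
# Hedged sketch: population-weighted average per capita evolutionary speed v.
import numpy as np

def average_speed(dXdt, N):
    """v = sum_r N_r * ||dx_r/dt|| / sum_r N_r."""
    return float(np.sum(N * np.linalg.norm(dXdt, axis=1)) / np.sum(N))

# In the text, v < 1e-2 serves as an indicator of eventual equilibrium dynamics.
```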
For any given simulation run initiated by step 1 above, steps 2-5 were repeated iteratively until a specified final simulation time is reached, or until evolution comes to a halt, which by our definition occurs when the average evolutionary speed falls below a threshold, v<10^-4.
Our general approach consisted of simulating many different systems according to the above scheme, and then computing statistical characteristics such as the fraction of runs that result in non-equilibrium dynamics, or the average evolutionary speed as a function of the level of diversity (see Results section).
One crucial feature of our algorithm is the periodic generation of new clusters in step 4, which mimics diversification events, i.e., evolutionary branching. Diversification is thus modeled by simply adding new phenotypic clusters at certain points in time and close to existing clusters. This mimics the sympatric split of an ancestral lineage. Sympatric diversification is a theoretically robust phenomenon <cit.> and our procedure represents a shortcut for this phenomenon necessitated by computational feasibility. If such splitting is not feasible given the current ecological circumstance, the new cluster will not diverge phenotypically from the ancestor, and hence will be merged again with the ancestor (see below). Alternatively, newly generated clusters may go extinct ecologically. In either case, speciation was not successful. Thus, in our models it is the ecological circumstances that determine whether speciation can occur or not, but the process of speciation itself (i.e., the splitting) is performed in a simplified manner. If speciation is successful and the newly generated clusters diverge and persist ecologically, then diversity has increased (unless other clusters go extinct). We note that by construction, the maximal level of diversity in a given simulation run, i.e., the number of different clusters, cannot exceed the parameter m_max. Therefore, this parameter allows us to control the level of diversity in a given simulation.
There are in principle other, less artificial ways to model diversification. In particular, stochastic, individual-based models and partial differential equation models <cit.> have been used to describe the evolutionary dynamics of phenotype distributions. In such models, diversification is an emergent property that is reflected in the formation of new modes in the evolving phenotype distributions. While these techniques are very useful in general, they are currently not computationally feasible for the statistical approach that we employed here, which requires systematic simulation of many different systems. Also, they would not allow for control of the level of diversity, as the number of phenotypic modes would simply be an emergent property of the evolving system. Nevertheless, we have used these alternative techniques to illustrate the robustness of salient results using particular examples. A more detailed description of these techniques is given in the Appendix. Another alternative would be to assume that new clusters (species) are assigned phenotypes that are chosen randomly in phenotype space, rather than close to an existing cluster. This could correspond to immigration of new species into an existing community. However, we would not expect this to affect our main results, because with complicated evolutionary dynamics, the initial phenotypic position of a given cluster becomes irrelevant after some time.
Finally, we note that the merging of clusters (species) is done solely for computational reasons and has no biological meaning (apart from designating organisms that are closely related and phenotypically very close as belonging to the same species). Merging of clusters only occurs shortly after a new cluster is seeded close to an existing one, and only if the new cluster does not diverge from the existing one (i.e., only if the ecological conditions for diversification are not satisfied). If divergence is successful, the clusters will never again get close enough to other clusters to be merged because of the repelling force of frequency-dependent competition. Thus, the only function of merging is to prevent the number of clusters from artificially becoming very large.
§ RESULTS
The parameter that controls the level of diversity in our simulations is m_max, which is the maximal number of different phenotypic clusters allowed to be present at any point in time in an evolving community (see step 4 in the Methods section). Our first result is obtained by allowing this parameter to be very large, so that we can estimate the number of clusters that eventually coexist by simply running the simulations for a long time and recording the number of clusters at which the diversity equilibrates. We denote by M_σ,d the equilibrium number of clusters for a given phenotypic dimension d and strength σ of the Gaussian component in the competition kernel (<ref>). We found that this equilibrium level of diversity increases exponentially with the dimension d of phenotype space, and decreases with the strength σ (Figure 1).
Here and below we assume for simplicity that the σ_i are the same in all phenotypic directions, σ_i = σ for i=1,...,d. In the Appendix we indicate scaling relationships that hold for M_σ,d as functions of the parameters σ and d. In general, diversity is only maintained if σ ≲ 1, which is roughly the scale of the phenotypic range set by the carrying capacity (<ref>). Only if σ ≲ 1 does the equilibrium level of diversity increase exponentially with increasing dimension of phenotype space (Figure 1).
Our main results are now obtained based on the observation that by fixing the parameter m_max at a value ≤ M_σ,d for a given d and σ, the community will typically evolve to a diversity level m_cluster of approximately m_max. That is, if the diversity is constrained to be below the maximal level of diversity possible for a given set of parameters, then the diversity will typically evolve to the value set by the constraint. Note that this is an “average” statement about many simulation runs, i.e., many different choices of the coefficients b_ij and stochastic realizations of cluster splitting. While some simulation runs will result in a diversity that is lower than m_max (which may reflect an intrinsic state of the system for the given set of coefficients, or a long-lived metastable state which has not yet reached its full diversity), most runs will evolve to the level of diversity that is prescribed by this parameter.
This allows us to then assess, for a given level of diversity, the nature of the coevolutionary dynamics that unfolds in communities with that level of diversity. Two paradigmatic examples are shown in Figure 2. We first set the level of diversity m_max=12, which is far below the saturation level M_σ,d for the given system. Starting from very few clusters the diversity quickly evolves to the level set by m_max, and the coexisting clusters then exhibit complicated, non-stationary evolutionary dynamics, with all clusters undergoing sustained and irregular fluctuations in phenotype space (Fig. 2a). This type of complicated dynamics is characterized by average evolutionary speeds v>10^-2. In the same system, but now with a value of m_max that lies above the saturation level M_σ,d, the diversity evolves to the saturation level, at which the community consists of ca. 30 coexisting phenotypic clusters (Fig. 2b). In this saturated state, the average evolutionary speed is much lower than 10^-2, and the community exhibits much more stationary coevolutionary dynamics (that would eventually converge to a coevolutionary equilibrium). Moreover, the saturated community exhibits a characteristic pattern of over-dispersion in phenotype space due to competitive repulsion caused by the Gaussian component of the competition kernel (see also Fig. A1 in the Appendix).
To obtain a more systematic characterization of the coevolutionary dynamics as a function of the diversity of the evolving community, we ran, for a given dimension of phenotype space d and strength of competition σ, 100 simulations with randomly chosen coefficients b_ij for each m_max=1,...,M, where M is some number that is larger than the saturation level of diversity M_σ,d. For each run, we recorded the average per capita evolutionary speed v and the number of phenotypic clusters, i.e., the level of diversity, present at the end of 1000 evolutionary time units (averaged over the last 4 time units). We classified the dynamics into equilibrium dynamics if the average speed v was <10^-2, and non-equilibrium dynamics otherwise. As mentioned earlier, this was based on individual inspection of many simulations that ran longer than 1000 time units, which showed that the threshold 10^-2 is a very good indicator of whether the coevolutionary system eventually equilibrates.
Our main results are shown in Figures 3 and 4. The general pattern is that the probability of non-equilibrium dynamics increases as diversity increases from single-cluster communities to communities with a few clusters (Figure 3).
For intermediate diversity, the fraction of non-equilibrium dynamics remains high. For communities with high diversity, the fraction of non-equilibrium dynamics starts to decrease, and almost all communities with a diversity close to the saturation level M_σ,d exhibit equilibrium coevolutionary dynamics.
To illustrate these trends, we give a more detailed account of the average velocities v defined in (<ref>) in the coevolving communities (Fig. 4). It shows that there is an exponential decrease in the average speed as the diversity increases, and that there is a substantial fraction of low-diversity communities that exhibit equilibrium dynamics.
The exact shape of these patterns depends on d and σ (Figures 3 and 4), but whenever diversification is possible, the overall trend is that non-equilibrium dynamics are most likely at intermediate levels of diversity, and that high levels of diversity tend to generate equilibrium coevolutionary dynamics.
The patterns shown in Figs. 3 and 4 are based on many different simulated communities with different levels of diversity. However, similar patterns can be observed in simulations of single communities as they evolve from low to high diversity, i.e., as they undergo an adaptive radiation. Such a radiation, starting from a single phenotypic cluster, is shown in Fig. 5A.
Over time the evolving community becomes more diverse due to adaptive diversification, and as a consequence the nature of the coevolutionary dynamics of the community changes. In the example shown in Figure 5A, the coevolutionary dynamics are fast for low to intermediate levels of diversity, and then slow down as the community acquires more and more species, until eventually the community reaches a coevolutionary equilibrium at the diversity saturation level. Again, the slowdown of the evolutionary speed during an adaptive radiation appears to occur exponentially with an increase in diversity. This can also be seen by running a given community defined by a given set of coefficients b_ij for different values of the parameter m_max, determining the level of diversity possible in the evolving community. The evolutionary speed exponentially decreases with the diversity given by m_max (Fig. 5B). We currently do not have a mechanistic explanation for the exponential decay in evolutionary rates with increasing diversity. It is informative to watch the process of diversification and subsequent evolutionary slowdown unfold dynamically. To verify that the observed dynamical pattern is not an artifact of the adaptive dynamics approximation, we performed individual-based and partial differential equation simulations of the same system. The videos in the Appendix, corresponding to the scenario used for Figures 2B and 5A, confirm that all three methods produce qualitatively similar evolutionary pictures. Detailed descriptions of the individual-based and partial differential equation methods are given in the Appendix.
Another interesting, although perhaps not so surprising observation for single adaptive radiations concerns the rate of accumulation of new species in the evolving ecosystem. Figure 5C shows the number of species as a function of time during the adaptive radiation scenario used for Figure 5A, illustrating that the rate of diversification is highest at the beginning of the radiation, and then slows down as the community evolves towards the diversity saturation level. The details of these dynamics depend on system parameters, and in particular on the rate at which new species are introduced into the system, but the qualitative behaviour of diversification rates, which are initially high and then slow down, is common to all adaptive radiations generated by our models.
§ DISCUSSION
We investigated the expected long-term evolutionary dynamics resulting from competition for resources in models for gradual evolution in high-dimensional phenotype spaces. In reality, most organisms have many different phenotypic properties that impinge on their ecological interactions in generally complicated ways, and here we assumed that multi-dimensional phenotypes determine logistic ecological dynamics through the competition kernel and the carrying capacity. We then used a coevolutionary adaptive dynamics algorithm to extend the ecological dynamics to macroevolutionary time scales, and we used a statistical approach to capture general properties of the ensuing evolutionary dynamics.
If the negative frequency-dependence generated by the competition kernel is strong enough, competition results in repeated adaptive diversification, and hence in communities of coevolving phenotypic species. By randomly choosing many different competition kernels, we showed that the complexity of the coevolutionary dynamics in such communities is expected to be highest for intermediate levels of phenotypic diversity. In particular, as the evolving communities increase in diversity towards the saturation level, i.e., the maximal number of different species that can coexist, the evolutionary dynamics becomes simpler, and communities at the saturation level are expected to exhibit a coevolutionary equilibrium. We also showed that the diversity saturation level increases exponentially with the dimension of phenotype space.
Our interpretation of these findings is that in low-dimensional phenotype spaces such as the ones considered here, evolutionary dynamics of single species are expected to converge to an equilibrium <cit.>. However, as diversity increases, the different phenotypic clusters will “push” each other around evolutionarily due to frequency-dependent competition. This occurs mostly due to the repulsive nature of the pairwise interaction induced by the Gaussian term in the competition kernel (<ref>): clusters that move further apart decrease the competition felt from each other. For example, a splitting of a cluster stuck in an attractive fixed point of the adaptive dynamics creates two offspring clusters, which may start moving again if the repulsion between clusters is stronger than the attraction of the fixed point. As long as diversity is not very high, i.e., as long as there is enough available niche or unoccupied phenotype space, this typically results in non-equilibrium coevolutionary dynamics, thus leading to an increase in evolutionary complexity with phenotypic diversity. As the diversity keeps increasing towards saturation levels, which for each phenotypic dimension is determined roughly by the ratio of the widths of the carrying capacity and the competition kernel (see Video 2), the available carrying capacity niche gets filled, so that the evolving clusters “have nowhere to go” evolutionarily. An analogy with gas-liquid-solid phase transitions may illustrate this in the following way: As in the dynamics of molecules, the adaptive dynamics of phenotypic clusters contains a pairwise-repulsive term, which originates from the Gaussian term in the competition kernel. A few-cluster regime qualitatively corresponds to the gas phase, when the range of the repulsive interaction is significantly less than the typical distance between clusters. As the number and thus density of clusters increases, the repulsive interaction becomes more relevant, constraining the individual motion of clusters and resulting in a liquid-like behaviour, where clusters are predominantly localized and occasionally hop to a new location. Finally, the maximum cluster density creates a crystal-like structure, albeit not necessarily entirely symmetric due to the randomly generated b_ij terms in the adaptive dynamics. The motion of individual clusters is heavily constrained by their neighbours via mutual repulsion, while the collective motion of an ensemble of clusters is limited by the carrying capacity function.
Thus, phenotypic saturation leads to a state in which the coevolving clusters are strongly constrained evolutionarily by the other clusters in the community, and hence to coevolutionary equilibrium dynamics.
Some empirical support for an initial increase in the complexity of evolutionary dynamics with the number of species in an ecosystem comes from the laboratory evolution experiments of <cit.>, who showed that the speed of adaptation to novel environments is higher in bacterial species that are part of microbial communities with a small number of competitors than when evolving in monoculture. However, our results are seemingly in contrast to previous theoretical results about the effect of diversity on evolutionary dynamics <cit.>. These authors essentially argued that while a single species is free to evolve in response to changes in the environment, evolution of the same species is more constrained in a community of competitors, in which other species are more likely to evolutionarily occupy new niches. Hence diversity is expected to slow down evolution. However, these models only describe evolution in 1-dimensional phenotypes, and may thus miss the complexity arising in higher-dimensional spaces. Moreover, even in higher-dimensional spaces, the arguments for evolutionary slowdown presented in <cit.> essentially correspond to our observation of a slow-down when diversity reaches saturation, at which point evolutionary change in each species is indeed constrained due to competing species occupying all available niches. Our approach also needs to be distinguished from approaches based primarily on ecological dynamics, as in <cit.>. In these approaches, emerging ecological communities are also modelled by periodically adding new species, but there is no underlying phenotype space that would determine competitive interactions. Instead, every time a new species is added, its interaction coefficients with the already existing species are chosen according to a specific, randomized procedure. This leads to interesting results, such as saturating levels of diversity after initially fast and fluctuating increases from low levels of diversity. However, since there is no underlying phenotype space, this approach does not reveal the evolutionary dynamics of continuous phenotypes, and in particular, it does not yield any information about the effects of the dimension of phenotype space on the evolutionary dynamics or on the amount of diversity at saturation.
There has been much interest in recent years in determining the effects of phylogenetic relationships on the functioning of ecosystems (e.g. <cit.>). The intuitive notion is that phylogenetic information has predictive power for ecological interactions if recently diverged species are more likely to interact than those that diverged long ago. More specifically, <cit.> have argued that phylogenetic information is most likely to be relevant for ecosystem dynamics if ecological interactions are based on phenotypic matching, so that species with more similar trait values are more likely to interact strongly. Our models have a component of phenotypic matching due to the Gaussian part of the competition kernel, but they also have a strong component of different types of interactions due to the “random” part of the competition kernel given by the coefficients b_ij. As we have shown, it is this non-Gaussian part of the competition kernel that causes the complicated coevolutionary dynamics, and it is this complexity in turn that makes phylogenetic signal largely irrelevant in our models.
A full phylogenetic analysis of the macroevolutionary dynamics generated by our models is beyond the scope of this work, but we can provide some basic insights based on the complicated evolutionary dynamics in phenotype space that the different phenotypic clusters (species) perform when there is an intermediate number of clusters in the coevolving community. An example of this is shown in the movie in Figure A1A. Here, after an initial phase of diversification, the community contains 12 coevolving clusters. These clusters move on a complicated evolutionary trajectory, with each cluster undergoing large evolutionary changes without further diversification. No matter what the phylogenetic relationship between these clusters (as given by their emergence from the single initial cluster), it is clear that because of the large evolutionary fluctuations in phenotype space of each cluster (species), there will be no consistent correlation between phylogenetic relationship and phenotypic distance. Even if there were such a correlation (positive or negative) at a particular point in time, it would change over time due to the large evolutionary fluctuations of each cluster over time. This is illustrated in Figure A1B, which shows that no persistent correlation pattern between phylogenetic and phenotypic distance should be expected in communities with an intermediate amount of diversity. In particular, recently diverged species are not more likely to interact than those diverged less recently, because the evolving community has a short “phenotypic memory” due to complicated evolutionary dynamics.
However, when further diversification is allowed, so that the system reaches its saturation level of diversity, the coevolving community not only becomes more diverse, but the evolutionary dynamics slows down, leading to ever smaller phenotypic fluctuations. In particular, new clusters emerging towards the end of the assembly of the evolutionarily stable community will stay phenotypically closer to their phylogenetically most closely related clusters, i.e., to their parent or sister species. Therefore, in the last phase of community assembly a positive correlation between phylogenetic and phenotypic distance can be expected to build up at least to some extent. This is illustrated in Figure A1B. Thus, weak phylogenetic signals are expected to develop towards the end of community assembly.
Regarding adaptive radiations, two observations emerge from our models. The first concerns the classical notion that rates of diversification should decline over the course of a radiation <cit.>, a pattern that seems to have good empirical support <cit.>. Our models confirm this pattern of declining rates of diversification (Figure 5). The second observation is that rates of evolution should generally slow down with an increase in diversity. This should not only be true when different ecosystems are compared (Figures 3,4), but also during an adaptive radiation in a single evolving community (Figure 5). Thus, we would expect the evolutionary dynamics to be faster and more complicated early in an adaptive radiation, and to slow down and eventually equilibrate late in the radiation. This corresponds to the so-called “early-burst” model of macroevolution <cit.> in the context of adaptive radiations. This model predicts that when lineages enter novel “adaptive zones” <cit.>, such as novel ecological niches, evolutionary rates in the lineage should be fast initially and then slow down as the adaptive zone gets filled with diverse phenotypes. <cit.> found little evidence for the early-burst model when analyzing a large set of data from many different clades. Nevertheless, these authors noted that younger clades have higher rates of evolution than older clades, which points to the fact that evolutionary rates may slow down with clade age. Moreover, few clades in their data set correspond to the type of very fast adaptive radiation envisaged and observed in our models, and they did not consider high-dimensional phenotypes. Finally, <cit.> note that groups with a larger proportion of sympatric species early in their history would be more likely to exhibit an early-burst pattern. In our models, adaptive radiations occur in complete sympatry and indeed produce the early burst pattern.
According to <cit.>, the jury on early-burst models is still out, and in fact substantial evidence for this model has accumulated in recent years. For example, <cit.> reported an early burst in body size evolution in mammals, <cit.> observed an early-burst pattern in the evolution of bill shape during adaptive radiation in seabirds, <cit.> and <cit.> reported early-burst patterns in morphological and functional evolution in cichlids, and <cit.> described patterns of early bursts in the evolution of dinosaur morphology.
<cit.> have incorporated the early-burst concept into a macroevolutionary perspective in which over very long evolutionary time scales, rare but substantial phenotypic bursts alternate with more stationary periods of bounded phenotypic fluctuations, somewhat reminiscent of the concept of punctuated equilibrium <cit.> when applied to rates of phenotypic evolution <cit.>. We think that the models presented here could provide a microevolutionary basis for such a perspective if they are extended by considering evolutionary change in the dimension of the phenotype space that determines ecological interactions. Such an extended theory would have three time scales: a short, ecological time scale, an intermediate time scale at which co-evolution and single diversifications take place in a given phenotype space, and a long time scale at which the number of phenotypic components increases (or decreases). Our hypothesis would then be that in such systems, periods of bounded evolutionary fluctuations near diversity saturation levels for a given dimension of phenotype space would alternate with bursts of rapid evolutionary change, brought about by an evolutionary increase in phenotypic dimensions and the subsequent increase in diversity and acceleration in evolutionary rates until a new saturation level is reached. The resulting long-term evolutionary dynamics would thus show periods of relative phenotypic stasis alternating with periods of fast evolution. This picture would fit very well with the “blunderbuss” pattern envisaged in <cit.>. These authors proposed that the intermittent bursts in evolutionary rates are caused by lineages encountering novel “adaptive zones” <cit.>. Novel adaptive zones would correspond to the opening up of new habitats or new resources, which would in turn correspond to new phenotypes that determine use of the novel adaptive zone. Alternatively, novel adaptive zones could also be generated by the emergence of novel sets of regulatory mechanisms allowing novel uses of already existing habitats and resources (as e.g. when a trade-off constraint is overcome through gene duplication). In either case, novel adaptive zones would correspond to an increase in the dimensionality of ecologically important phenotypes.
It is interesting to note that such intermittent burst patterns have in fact been observed in phylogenetic data, and that they seem to be connected to novel, ecologically important phenotypes. <cit.> have shown that evolutionary rates in echinoids reveal at least two instances of rapidly accelerating and subsequently declining evolutionary rates, i.e., two intermittent bursts. Moreover, these bursts appear to be associated with the evolution of novel feeding strategies <cit.>. Also, <cit.> have shown that an evolutionary burst occurs in the dinosaur-bird transition, and it is tempting to conjecture that this burst was caused by the increase in phenotype dimensionality due to the proliferation of flight capabilities.
There is also good empirical support for our finding that the level at which diversity saturates increases with the dimension of phenotype space. <cit.> has argued that the high number of different ecologically relevant traits is essentially the basis for the spectacular radiations of cichlids in African lakes. In conjunction with ecological opportunity, genetic and phenotypic flexibility, which appears to be at least in part due to gene duplications, has allowed this group of fish to reach a much higher diversity than other groups, such as cichlids in rivers or whitefish in arctic lakes, in which fewer phenotypes appear to be ecologically relevant <cit.>. In this context, we note that incorporating the evolution of the dimension of phenotype space may also shed light on the ongoing debate about whether diversity saturates over evolutionary time or not <cit.>. It seems that the answer could be "yes and no": diversity saturates in the intermediate term for a given dimension of phenotype space, but does not saturate in the long term if the dimension of phenotype space increases over long evolutionary time scales, thus generating recurrent increases in saturation levels.
Our study has a number of limitations that should be addressed in future research. It is currently impractical to perform the statistical analysis presented here for phenotype spaces with dimensions higher than 4 due to computational limitations. Our results indicate that the diversity saturation level, i.e., the maximal number of coexisting phenotypic clusters, increases rapidly with the dimension d of phenotype space, which makes simulations of communities at saturation levels infeasible. Nevertheless, we expect the salient result that coevolutionary dynamics slow down as communities reach the saturation level to be true in any dimension as long as the Gaussian component of competition in (<ref>) affects all phenotypic directions. Also, in our approach we have assumed that the phenotypes determining competitive interactions are the same for intra- and inter-specific competition. This may be a fair assumption for closely related species, such as those coevolving in an adaptive radiation. However, for competition in more general ecosystems it may also be relevant to assume that from a total set of d phenotypes, different subsets determine competition within a species and competition with various other species. In addition, to describe general ecosystems and food webs, it will be important to include not just competitive interactions, but also predator-prey and mutualistic interactions, each again determined by potentially high-dimensional phenotypes. Also, throughout we have assumed a simple unimodal form of the carrying capacity to represent the external environment. More complicated forms of the carrying capacity, and hence of the external fitness landscape, will likely generate even richer patterns of coevolutionary dynamics and diversification. Finally, we have assumed throughout that evolving populations are well-mixed, and it will be interesting to see how the results generalize to spatially structured ecosystems. All these extensions remain to be developed.
We are of course aware of the fact that we did not include genetic mixing due to sexual reproduction in our models, and our method of describing diversification by simply adding new phenotypic clusters, although fairly standard, does not take into account the actual process of speciation. In sexual populations, adaptive diversification due to disruptive selection, as envisioned here, requires assortative mating, and the conditions for the evolution of various types of assortative mating, as well as for the likelihood of speciation once assortment is present, have been studied extensively (e.g. <cit.>). A general, if crude conclusion from this work is that when there is enough disruptive selection for diversification to occur in asexual models, then it is likely that adaptive speciation also occurs in the corresponding sexual models, although factors such as the strength of assortment, population size and linkage disequilibrium may become important. It would in principle be possible to incorporate sexual reproduction into the models presented here, e.g. along the lines of <cit.>. Our previous results <cit.> indicate that adaptive diversification is generally more likely in high-dimensional phenotype spaces, and we think that the present models serve well as a first approximation to study adaptive diversification and coevolutionary dynamics in evolving communities.
Ultimately, the applicability and relevance of our models for understanding macroevolutionary patterns in nature depends in part on being able to determine evolutionary rates of high-dimensional phenotypes from phylogenetic data, which appears to be a difficult problem <cit.>. Nevertheless, overall we think that our approach of incorporating microevolutionary processes based on ecological interactions in high-dimensional phenotype spaces into statistical models for macroevolutionary dynamics has the potential to shed new light on a number of fundamental conceptual questions in evolutionary biology.
§ ACKNOWLEDGMENTS
M. D. was supported by NSERC (Canada). I. I. was supported by FONDECYT grant 1151524 (Chile). Both authors contributed equally to this work.
§ CORRELATION BETWEEN PHYLOGENETIC AND PHENOTYPIC DISTANCE
For each pair of clusters (species) in an evolving community we define the phylogenetic distance between them, Pg, as the number of links in the path between them on the phylogenetic tree. To measure this distance, we add the following scheme to our evolutionary algorithm (a minimal bookkeeping sketch is given after the list):
* The system is initialized with a single cluster.
* Each cluster splitting event produces two offspring separated by distance 2. The distance between each offspring and every pre-existing neighbour is the parent's distance to that neighbour incremented by one.
* When two recently split clusters that failed to diverge are merged, the distance between the newly produced common cluster and each of its neighbours is calculated as the minimum of the distances of the two merged clusters minus one. This reflects the observation that merging events only happen with newly split clusters.
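For concreteness, this bookkeeping can be sketched as follows (an illustrative Python reimplementation; the class and method names are ours, not part of the simulation code used for the paper):

```python
class PhyloTracker:
    """Track pairwise phylogenetic distances (number of links on the
    phylogenetic tree) between the phenotypic clusters currently present."""

    def __init__(self):
        self.dist = {}        # frozenset({i, j}) -> phylogenetic distance
        self.clusters = {0}   # the system is initialized with a single cluster
        self.next_id = 1

    def split(self, parent):
        """A splitting event: two offspring at distance 2 from each other."""
        a, b = self.next_id, self.next_id + 1
        self.next_id += 2
        self.clusters.remove(parent)
        for other in self.clusters:
            d = self.dist.pop(frozenset({parent, other}))
            # each offspring is one link further from every existing neighbour
            self.dist[frozenset({a, other})] = d + 1
            self.dist[frozenset({b, other})] = d + 1
        self.dist[frozenset({a, b})] = 2
        self.clusters |= {a, b}
        return a, b

    def merge(self, a, b):
        """Merge two recently split clusters that failed to diverge."""
        c = self.next_id
        self.next_id += 1
        self.clusters -= {a, b}
        self.dist.pop(frozenset({a, b}))
        for other in self.clusters:
            da = self.dist.pop(frozenset({a, other}))
            db = self.dist.pop(frozenset({b, other}))
            # minimum of the two merged clusters' distances, minus one
            self.dist[frozenset({c, other})] = min(da, db) - 1
        self.clusters.add(c)
        return c
```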
As a result, at any given time we know the phylogenetic distances between all pairs of clusters currently present in the system. To quantify the relation between phenotypic and phylogenetic similarity, we compute the correlation C between phylogenetic and phenotypic distance as follows:
C=⟨ [Pg - ⟨ Pg ⟩ ][X - ⟨ X ⟩ ] ⟩/σ_Pgσ_X,
where Pg and X are the phylogenetic and phenotypic distances between clusters, ⟨…⟩ denotes the average over all pairs of clusters present in the system, and σ_Pg and σ_X are the corresponding standard deviations.
The above scheme allows us to track the correlation between phylogenetic and phenotypic distance over time, as illustrated in Figure A1. Fig. A1A shows the time dependence of C for the simulation shown in Video 1, and Fig. A1B shows the time dependence of C for the simulation shown in Video 2. During the early phase of community assembly the correlation C rapidly decays due to the complicated coevolutionary dynamics of the emerging clusters. When the diversity of the coevolving community is kept intermediate (by setting the parameter m_C to intermediate values, as in Video 1), the correlation between phylogenetic and phenotypic distance fluctuates around 0 (Fig. A1A): the clusters in a community with intermediate diversity undergo large phenotypic fluctuations while their phylogenetic relationships remain constant, because no further diversification (or extinction) occurs. However, when the diversity is allowed to reach saturation levels (by setting m_C to a large value, as in Video 2), a positive correlation between phylogenetic and phenotypic distance develops in the final stages of community assembly, i.e., as the coevolving community reaches the saturation diversity and hence undergoes much smaller phenotypic fluctuations (Fig. A1B). Note that the correlation is still close to 0 during the early stages of community assembly, but some correlation remains at the end due to clusters emerging in the last phase of community assembly, which tend to stay phenotypically closer to their sister species because the evolutionary dynamics become slow and stable.
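Given the pairwise distances, C is simply a Pearson correlation; a minimal sketch, assuming the phylogenetic and phenotypic distances have already been collected into flat arrays with one entry per pair of coexisting clusters:

```python
import numpy as np

def phylo_pheno_correlation(phylo, pheno):
    """Correlation C between pairwise phylogenetic and phenotypic distances,
    as defined in the equation above."""
    pg, x = np.asarray(phylo, float), np.asarray(pheno, float)
    return np.mean((pg - pg.mean()) * (x - x.mean())) / (pg.std() * x.std())
```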
§ INDIVIDUAL-BASED SIMULATIONS
Individual-based realizations of the model were based on the Gillespie algorithm <cit.> and consisted of the following steps (a minimal sketch of the resulting update loop is given after the list):

* The system is initialized by creating a set of K_0 ∼ 10^3 - 10^4 individuals with phenotypes x_k ∈ 𝐑^d localized around the initial position x_0 with a small random spread |x_k - x_0| ∼ 10^-3.

* Each individual k has a constant reproduction rate ρ_k = 1 and a death rate δ_k = ∑_l ≠ k α(x_l, x_k)/[K_0 K(x_k)], as defined by the logistic ecological dynamics.

* The total update rate is given by the sum of all individual rates, U = ∑_k (ρ_k + δ_k).

* The running time t is incremented by a random increment Δt drawn from the exponential distribution P(Δt) = U exp(-Δt U).

* A particular birth or death event is randomly chosen with probability equal to the rate of this event divided by the total update rate U. If a reproduction event is chosen, the phenotype of the offspring is offset from the parental phenotype by a small mutation randomly drawn from a uniform distribution with amplitude ϵ = 10^-3 - 10^-2.

* The individual death rates δ_k and the total update rate U are updated to take into account the addition or removal of an individual.

* Steps 4-6 are repeated until t reaches a specified end time.
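The following minimal Python sketch illustrates this loop (our illustration, not the code used for the paper: the competition kernel α and carrying capacity K are placeholder Gaussians, whereas the kernel in the main text contains additional non-symmetric terms b_ij; for brevity, all rates are recomputed from scratch after every event instead of being updated incrementally as in step 6):

```python
import numpy as np

rng = np.random.default_rng(0)
d, K0, eps, sigma = 2, 500, 1e-2, 0.5    # dimension, system size, mutation amplitude, kernel width
pop = [1e-3 * rng.standard_normal(d) for _ in range(K0)]   # initial cloud around x_0 = 0

t, t_end = 0.0, 5.0
while t < t_end and pop:
    X = np.array(pop)
    # placeholder Gaussian competition kernel alpha(x_l, x_k) and carrying capacity K(x_k)
    A = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * sigma**2))
    np.fill_diagonal(A, 0.0)
    Kx = np.exp(-(X**2).sum(1) / 2.0)
    death = A.sum(axis=0) / (K0 * Kx)    # delta_k = sum_{l != k} alpha(x_l, x_k) / [K0 K(x_k)]
    birth = np.ones(len(pop))            # constant reproduction rate rho_k = 1
    U = birth.sum() + death.sum()        # total update rate
    t += rng.exponential(1.0 / U)        # waiting time drawn from P(dt) = U exp(-dt U)
    k = rng.choice(2 * len(pop), p=np.concatenate([birth, death]) / U)
    if k < len(pop):                     # birth: offspring offset by a small uniform mutation
        pop.append(pop[k] + rng.uniform(-eps, eps, d))
    else:                                # death: remove the chosen individual
        pop.pop(k - len(pop))
```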
The movie in Video A2 shows the dynamics of the individual-based model corresponding to the adaptive dynamics simulation shown in Video A1, which is the same as the scenario used for Video 2 in the main text (note that the movie in Video A1 runs for t=1200 time units, whereas the movie in Video 2 runs for t=400 time units).
§ PARTIAL DIFFERENTIAL EQUATION MODELS
A deterministic large-population limit of the individual-based model is obtained as the partial differential equation (PDE)
∂ N(x, t)/∂ t = N(x, t)( 1 - ∫α(x, y) N(y, t) dy/K(x)) + D∑_i=1^d ∂^2 N(x, t)/∂ x_i^2,

where N(x, t) is the population distribution at time t <cit.>. The second term on the right-hand side is a diffusion term that describes mutations, with the diffusion coefficient typically set to D ∼ 10^-4 - 10^-3. Local maxima of the solution N(x,t) can be interpreted as the positions of the centers of the phenotypic clusters. Their dynamics are shown in Video A3.
For any given scenario, the corresponding adaptive dynamics solution can be used to determine the single- or few-cluster trajectory, and hence to approximately determine the region occupied by the system in phenotype space over time. Note that the deterministic PDE model is invariant with regard to the coordinate change x → -x, and hence its solutions must be symmetric with regard to simultaneous reflection on all coordinate axes. To numerically solve the PDE model (<ref>) we chose a lattice noticeably larger than the corresponding adaptive dynamics attractor. The number of bins B in each dimension of this lattice is strongly constrained by memory limitations: an efficient implementation requires computing and storing an array of B^d × B^d values of the competition kernel α(x_i, x_j) for the pairwise interactions between all pairs of sites i and j. With B = 25-30 needed to achieve a reasonable spatial resolution, the memory constraint makes the PDE implementation feasible only for d = 2,3.
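For d = 2, a minimal forward-Euler discretization might look as follows (again an illustrative sketch: placeholder Gaussian α and K, an interior-only five-point Laplacian, and invented lattice and time-step parameters; the integral term is approximated by a Riemann sum over the precomputed B^2 × B^2 kernel array mentioned above):

```python
import numpy as np

B, D, dt, sigma = 25, 1e-3, 0.01, 0.5            # bins per axis, diffusion, time step, kernel width
xs = np.linspace(-2.0, 2.0, B)                   # lattice noticeably larger than the AD attractor
dx = xs[1] - xs[0]
gx, gy = np.meshgrid(xs, xs, indexing="ij")
pts = np.stack([gx.ravel(), gy.ravel()], axis=1) # the B^2 lattice sites for d = 2

Kcap = np.exp(-(pts**2).sum(1) / 2.0)            # placeholder Gaussian carrying capacity K(x)
# precomputed B^d x B^d array of competition kernel values (Gaussian part only)
A = np.exp(-((pts[:, None, :] - pts[None, :, :])**2).sum(-1) / (2 * sigma**2))

N = np.exp(-((pts - 0.5)**2).sum(1) / 0.01)      # initial distribution near x_0

def laplacian(n):
    """Crude 5-point Laplacian on the interior of the B x B grid."""
    g = n.reshape(B, B)
    lap = np.zeros_like(g)
    lap[1:-1, 1:-1] = (g[2:, 1:-1] + g[:-2, 1:-1] + g[1:-1, 2:]
                       + g[1:-1, :-2] - 4.0 * g[1:-1, 1:-1]) / dx**2
    return lap.ravel()

for _ in range(1000):                            # forward Euler time stepping
    competition = (A @ N) * dx**2                # Riemann sum for the integral of alpha * N
    N = np.maximum(N + dt * (N * (1.0 - competition / Kcap) + D * laplacian(N)), 0.0)
```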
The movie in Video A3 shows the dynamics of the partial differential equation model corresponding to the scenarios shown in Videos A1 and A2.
§ SCALING RELATIONSHIP FOR THE DIVERSITY AT SATURATION
The number of clusters at the diversity saturation level, M_σ,d, can be estimated to be proportional to the volume of the available phenotype space with the linear dimension L, divided by the volume occupied by each cluster, which has a typical linear size σ:

M_σ,d ≈ C_d L^d/σ^d.
Hence, the following scaling relationships hold:
M_σ_a,d = M_σ_b,d (σ_b/σ_a)^d and M_σ,d_1 = M_σ,d_2^d_1/d_2,

where σ_a and σ_b denote different strengths of competition, and C_d is a constant of order 1 that takes into account the "imperfect packing" occurring when σ and L have similar magnitude.
Based on this, the equilibrium level of diversity is expected to increase exponentially with increasing dimension of phenotype space (as illustrated in Figure 1), and with increasing frequency-dependence (i.e., decreasing σ). In general, diversity is only maintained if σ ≲ 1, which is roughly the scale of the phenotypic range set by the carrying capacity given by eq. (5) in the main text.
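As a quick numerical illustration of this scaling (with the order-one constant C_d set to 1 and an assumed ratio L/σ = 4):

```python
# M ~ C_d * (L / sigma)^d; with C_d = 1 and L / sigma = 4 (assumed values)
L_over_sigma = 4.0
for d in (1, 2, 3, 4):
    print(d, L_over_sigma**d)   # 4, 16, 64, 256: exponential growth with d
```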
§ SPECIFIC SETS OF COEFFICIENTS USED
The following set of coefficients b_ij determining the competition kernel were used for Figure 5A in the main text and for the movies.
[ 0.407 0.498 0.287; -0.199 -1.102 -0.305; 1.387 -0.896 0.341 ]
The following set of coefficients b_ij determining the competition kernel were used for Figure 5B in the main text:
[ -1.289 0.682 0.217 -0.093; -0.223 -0.035 0.697 -0.117; -0.563 0.434 -0.953 -0.198; 0.119 0.398 0.183 0.530 ]
|
http://arxiv.org/abs/1701.07941v2 | 20170127044113 | Computationally Efficient Market Simulation Tool for Future Grid Scenario Analysis | [
"Shariq Riaz",
"Gregor Verbic",
"Archie C. Chapman"
] | math.OC | [
"math.OC"
] |
Computationally Efficient Market Simulation Tool for Future Grid Scenario Analysis
Shariq Riaz, Graduate Student Member, IEEE,
Gregor Verbič, Senior Member, IEEE,
and Archie C. Chapman, Member, IEEE
Shariq Riaz, Gregor Verbič and Archie C. Chapman are with the School of Electrical and Information Engineering, The University of Sydney, Sydney, New South Wales, Australia. e-mails: (shariq.riaz, gregor.verbic, archie.chapman@sydney.edu.au).
Shariq Riaz is also with the Department of Electrical Engineering, University of Engineering and Technology Lahore, Lahore, Pakistan.
December 30, 2023
==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
This paper proposes a computationally efficient electricity market simulation tool (MST) suitable for future grid scenario analysis. The market model is based on a unit commitment (UC) problem and takes into account the uptake of emerging technologies, like demand response, battery storage, concentrated solar thermal generation, and HVDC transmission lines. To allow for a subsequent stability assessment, the MST requires an explicit representation of the number of online generation units, which affects power system inertia and reactive power support capability. These requirements render a full-fledged UC model computationally intractable, so we propose unit clustering, a rolling horizon approach, and constraint clipping to increase the computational efficiency. To showcase the capability of the proposed tool, we use a simplified model of the Australian National Electricity Market with different penetrations of renewable generation. The results are verified by comparison to a more expressive and computationally-intensive binary UC, which confirms the validity of the approach for long-term future grid studies.
Electricity market, future grid, electricity market simulation tool, optimization, scenario analysis, unit commitment, stability assessment, inertia, loadability.
Sets:
𝒞: Set of consumers c.
𝒢: Set of generators g, 𝒢 = 𝒢^syn ∪ 𝒢^RES.
𝒢^syn: Set of synchronous generators, 𝒢^syn ⊆ 𝒢.
𝒢^RES: Set of renewable generators, 𝒢^RES ⊆ 𝒢.
𝒢^CST: Set of concentrated solar thermal generators, 𝒢^CST ⊆ 𝒢^syn.
𝒢^r: Set of synchronous generators in region r, ⋃_r 𝒢^r = 𝒢.
ℋ: Set of sub-horizons h.
ℒ: Set of power lines l, ℒ = ℒ^AC ∪ ℒ^HVDC.
ℒ^AC: Set of AC power lines, ℒ^AC ⊆ ℒ.
ℒ^HVDC: Set of HVDC power lines, ℒ^HVDC ⊆ ℒ.
𝒩: Set of nodes n.
𝒩^r: Set of nodes in region r.
𝒫: Set of prosumers p.
ℛ: Set of regions r.
𝒮: Set of storage plants s.
𝒯: Set of time slots t.

Decision variables:
s_g,t: Number of online units of generator g; s_g,t ∈ {0,1} in the BUC and s_g,t ∈ ℤ_+ in the MST.
u_g,t: Integer startup status variable of a unit of generator g; u_g,t ∈ {0,1} in the BUC and u_g,t ∈ ℤ_+ in the MST.
d_g,t: Integer shutdown status variable of a unit of generator g; d_g,t ∈ {0,1} in the BUC and d_g,t ∈ ℤ_+ in the MST.
δ_n,t: Voltage angle at node n.
p_l,t: Power flow on line l.
Δp_l,t: Power loss on line l.
p_g,t: Power dispatch of generator g.
p_p,t^g+/-: Grid/feed-in power of prosumer p.
p_s,t: Power flow of storage plant s.
p_p,t^b: Battery power flow of prosumer p.
e_g,t: Thermal energy stored in the TES of generator g ∈ 𝒢^CST.
e_s,t: Energy stored in storage plant s.
e_p,t^b: Battery charge state of prosumer p.

Parameters:
c_g^fix/var: Fixed/variable cost of a unit of generator g.
c_g^su/sd: Startup/shutdown cost of a unit of generator g.
p_c,t: Load demand of consumer c.
p_p,t: Load demand of prosumer p.
p_n,t^r: Power reserve requirement of node n.
x̲/x̄: Minimum/maximum limit of variable x.
U_g: Total number of identical units of generator g.
r_g^+/-: Ramp-up/down rate of a unit of generator g.
τ_g^u/d: Minimum up/down time of a unit of generator g.
t̃: Time slot offset index.
Δt: Time resolution.
B_l: Susceptance of line l.
p_g,t^RES: Maximum output power of renewable generator g ∈ 𝒢^RES.
p_g,t^CST: Maximum thermal power capture by generator g ∈ 𝒢^CST.
H_g: Inertia of a unit of generator g.
S_g: MVA rating of a unit of generator g.
H_n,t: Minimum synchronous inertia requirement of node n.
η_x: Efficiency of component x.
p_p,t^pv: Aggregated PV power of prosumer p.
λ: Feed-in price ratio.

Initial conditions:
ŝ_g: Number of online units of generator g ∈ 𝒢^syn at the start of the horizon.
p̂_g: Power dispatch of generator g at the start of the horizon.
û_g,t: Minimum number of units of generator g ∈ 𝒢^syn required to remain online for time t < τ_g^u.
d̂_g,t: Minimum number of units of generator g ∈ 𝒢^syn required to remain offline for time t < τ_g^d.
ê_g: Energy stored in the TES of g ∈ 𝒢^CST at the start of the horizon.
ê_s: Energy stored in storage plant s at the start of the horizon.
ê_p^b: Battery state of charge of prosumer p at the start of the horizon.
§ INTRODUCTION
Power systems worldwide are moving away from domination by large-scale synchronous generation and passive consumers.
Instead, in future grids[We interpret a future grid to mean the study of national grid type structures with the transformational changes over the long-term out to 2050.] new actors, such as variable renewable energy sources (RES)[For the sake of brevity, by RES we mean “unconventional” renewables like wind and solar, but excluding conventional RES, like hydro, and dispatchable unconventional renewables, like concentrated solar thermal.], price-responsive users equipped with small-scale PV-battery systems (called prosumers), demand response (DR), and energy storage will play an increasingly important role.
Given this, in order for policy makers and power system planners to evaluate the integration of high-penetrations of these new elements into future grids, new simulation tools need to be developed.
Specifically, there is a pressing need to understand the effects of technological change on future grids, in terms of energy balance, stability, security and reliability, over a wide range of highly-uncertain future scenarios.
This is complicated by the inherent and unavoidable uncertainty surrounding the availability, quality and cost of new technologies (e.g. battery or photo-voltaic system costs, or concentrated solar thermal (CST) generation operating characteristics) and the policy choices driving their uptake.
The recent blackout in South Australia <cit.> serves as a reminder that things can go wrong when the uptake of new technologies is not planned carefully.
Future grid planning thus requires a major departure from conventional power system planning, where only a handful of the most critical scenarios are analyzed. To account for a wide range of possible future evolutions, scenario analysis has been proposed in many industries, e.g. in finance and economics <cit.>, and in energy <cit.>. In contradistinction to power system planning, where the aim is to find an optimal transmission and/or generation expansion plan, the aim of scenario analysis is to analyze possible evolution pathways to inform power system planning and policy making. Given the uncertainty associated with long-term projections, the focus of future grid scenario analysis is limited only to the analysis of what is technically possible, although it might also consider an explicit costing <cit.>.
In more detail, existing future grid feasibility studies have shown that the balance between demand and supply can be maintained even with high penetration of RESs by using large-scale storage, flexible generation, and diverse RES technologies <cit.>.
However, they only focus on balancing and use simplified transmission network models (either copper plate or network flow; a notable exception is the Greenpeace pan-European study <cit.> that uses a DC load flow model). This ignores network related issues, which limits these models' applicability for stability assessment.
To the best of our knowledge, the Future Grid Research Program, funded by the Australian Commonwealth Scientific and Industrial Research Organisation (CSIRO) is the first to propose a comprehensive modeling framework for future grid scenario analysis that also includes stability assessment. The aim of the project is to explore possible future pathways for the evolution of the Australian grid out to 2050 by looking beyond simple balancing. To this end, a simulation platform has been proposed in <cit.> that consists of a market model, power flow analysis, and stability assessment, Fig. <ref>. The platform has been used, with additional improvements, to study fast stability scanning <cit.>, inertia <cit.>, modeling of prosumers for market simulation <cit.>, impact of prosumers on voltage stability <cit.>, and power system flexibility using CST <cit.> and battery storage <cit.>.
In order to capture the inter-seasonal variations in the renewable generation, computationally intensive time-series analysis needs to be used.
A major computational bottleneck of the framework is the market simulation.
Within this context, the contribution of this paper is to propose a unified generic market simulation tool (MST) based on a unit commitment (UC) problem suitable for future grid scenario analysis, including stability assessment. The tool incorporates the following key features:
* market structure agnostic modeling framework,
* integration of various types and penetrations of RES and emerging demand-side technologies,
* generic demand model considering the impact of prosumers,
* explicit network representation, including HVDC lines, using a DC power flow model,
* explicit representation of the number of online synchronous generators,
* explicit representation of system inertia and reactive power support capability of synchronous generators,
* computational efficiency with sufficient accuracy.
The presented model builds on our existing research <cit.> and combines all these in a single coherent formulation.
In more detail, to reduce the computational burden, the following techniques are used building on the methods proposed in <cit.>:
* unit clustering,
* rolling horizon approach,
* constraint clipping.
The computational advantages of our proposed model are shown on a simplified 14-generator model of the Australian National Energy Market (NEM) as a test grid <cit.>. Four cases with different RES penetrations are run for horizon lengths of one to seven days, and computational metrics are reported. To assess the accuracy of the proposed MST, system inertia and voltage stability margins are used as benchmarks. In simulations, RES and load traces are taken from the National Transmission Network Development Plan (NTNDP) data, provided by the Australian Energy Market Operator (AEMO) <cit.>.
The remainder of the paper is organized as follows: Literature review and related work are discussed in Section II, while Section
III details the MST. A detailed description
of the simulation setup is given in Section IV. In Section V results are analyzed and discussed in detail. Finally, Section VI concludes the paper.
§ RELATED WORK
In order to better explain the functional requirements of the proposed MST, we first describe the canonical UC formulation.
An interested reader can find a comprehensive literature survey in <cit.>.
§.§ Canonical Unit Commitment Formulation
The UC problem is an umbrella term for a large class of problems in power system operation and planning whose objective is to schedule and dispatch power generation at minimum cost to meet the anticipated demand, while meeting a set of system-wide constraints. In smart grids, problems with a similar structure arise in the area of energy management, and they are sometimes also called UC <cit.>.
Before deregulation, UC was used in vertically integrated utilities for generation scheduling to minimize production costs. After deregulation, UC has been used by system operators to maximize social welfare, but the underlying optimization model is essentially the same.
Mathematically, UC is a large-scale, nonlinear, mixed-integer optimization problem under uncertainty. With some abuse of notation, the UC optimization problem can be represented in the following compact formulation <cit.>:
minimize_𝐱_c, 𝐱_b f_c(𝐱_c) + f_b(𝐱_b)
subject to g_c(𝐱_c) ≤𝐛
g_b(𝐱_b) ≤𝐜
h_c(𝐱_c) + h_b(𝐱_b)≤𝐝
𝐱_c∈ℝ^+, 𝐱_b∈{ 0,1 }
Due to the time-couplings, the UC problem needs to be solved over a sufficiently long horizon.
The decision vector 𝐱 = {𝐱_c, 𝐱_b} for each time interval consist of continuous and binary variables. The continuous variables, 𝐱_c, include generation dispatch levels, load levels, transmission power flows, storage levels, and transmission voltage magnitudes and phase angles. The binary variables, 𝐱_b, includes scheduling decisions for generation and storage, and logical decisions that ensure consistency of the solution.
The objective (<ref>) captures the total production cost, including fuel costs, start-up costs and shut-down costs.
The constraints include, respectively: dispatch related constraints such as energy balance, reserve requirements, transmission limits, and ramping constraints (<ref>); commitment variables, including minimum up and down, and start-up/shut-down constraints (<ref>); and constraints coupling commitment and dispatch decisions, including minimum and maximum generation capacity constraints (<ref>).
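To make this structure concrete, the following deliberately tiny single-bus, two-unit, two-period instance sketches the compact formulation with an off-the-shelf MILP modeler (here PuLP); all unit data, costs and demands are invented for illustration, and ramping, reserve and network constraints are omitted:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

T = range(2)                              # two periods
units = {"coal": dict(pmin=100, pmax=400, cvar=20, cfix=500),
         "gas":  dict(pmin=20,  pmax=150, cvar=60, cfix=100)}
demand = [300, 480]

prob = LpProblem("toy_UC", LpMinimize)
s = {(g, t): LpVariable(f"s_{g}_{t}", cat=LpBinary) for g in units for t in T}
p = {(g, t): LpVariable(f"p_{g}_{t}", lowBound=0) for g in units for t in T}

# objective: fixed (commitment) plus variable (dispatch) costs
prob += lpSum(units[g]["cfix"] * s[g, t] + units[g]["cvar"] * p[g, t]
              for g in units for t in T)
for t in T:
    prob += lpSum(p[g, t] for g in units) == demand[t]      # power balance
    for g in units:
        prob += p[g, t] >= units[g]["pmin"] * s[g, t]       # coupling (min output)
        prob += p[g, t] <= units[g]["pmax"] * s[g, t]       # coupling (max output)

prob.solve()
print({k: value(v) for k, v in p.items()})
```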
The complexity of the problem stems from the following: (i) certain generation technologies (e.g. coal-fired steam units) require long start-up and shut-down times, which requires a sufficiently long solution horizon; (ii) generators are interconnected, which introduces couplings through the power flow constraints; (iii) on/off decisions introduce a combinatorial structure; (iv) some constraints (e.g. AC load flow constraints) and parameters (e.g. production costs) are non-convex; and (v) the increasing penetration of variable renewable generation and the emergence of demand-side technologies introduce uncertainty.
As a result, a complete UC formulation is computationally intractable, so many approximations and heuristics have been proposed to strike a balance between computational complexity and functional requirements. For example, power flow constraints can be neglected altogether (a copper plate model), can be replaced with simple network flow constraints to represent critical inter-connectors, or, instead of (non-convex) AC, a simplified (linear) DC load flow is used.
§.§ UC Formulations in Existing Future Grid Studies
In operational studies, the nonlinear constraints, e.g. ramping, minimum up/down time (MUDT) and thermal limits, are typically linearized; startup and shutdown exponential costs are discretized; and non-convex, non-differentiable variable cost functions are expressed as piecewise linear functions <cit.>. In planning studies, due to long horizon lengths, the UC model is simplified even further. For example: combinatorial structure is avoided by aggregating all the units installed at one location <cit.>; piecewise linear cost functions and constraints are represented by one segment only; some costs (e.g. startup, shutdown and fixed costs) are ignored; a deterministic UC with perfect foresight is used; and non-critical binding constraints are omitted <cit.>[An interested reader can refer to <cit.> for a discussion on binding constraints elimination for generation planning.].
To avoid the computational complexity associated with the mixed integer formulation, a recent work <cit.> has proposed a linear relaxation of the UC formulation for flexibility studies, with an accuracy comparable to the full binary mixed integer linear formulation.
In contrast to operation and planning studies, the computational burden of future grid scenario analysis is even bigger, due to the sheer number of scenarios that need to be analyzed, which requires further simplifications. For example, the Greenpeace study <cit.> uses an optimal power flow for generation dispatch and thus ignores UC decisions. Unlike the Greenpeace study, the Irish All Island Grid Study <cit.> and the European project e-Highway2050 <cit.> ignore load flow constraints altogether; however, they do use a rolling horizon UC, with simplifications. The Irish study, for example, doesn't put any restriction on the minimum number of online synchronous generators to avoid RES spillage, and the e-Highway2050 study uses a heuristic to include DR. The authors of the e-Highway2050 study, however, acknowledge the size and the complexity of the optimization framework in long term planning, and plan to develop new tools with a simplified network representation <cit.>.
In summary, a UC formulation depends on the scope of the study. Future grid studies that explicitly include stability assessment bring about some specific requirements that are routinely neglected in the existing UC formulations, as discussed next.
§ MARKET SIMULATION TOOL
§.§ Functional Requirements
The focus of our work is stability assessment of future grid scenarios. Thus, MST must produce dispatch decisions that accurately capture the kinetic energy stored in rotating masses (inertia), active power reserves and reactive power support capability of synchronous generators, which all depend upon the number of online units and the respective dispatch levels.
For the sake of illustration, consider a generation plant consisting of three identical (synchronous) thermal units, with the following characteristics: (i) constant terminal voltage of 1pu; (ii) minimum technical limit P_min = 0.4pu; (iii) power factor of 0.8; (iv) maximum excitation limit E_fd^max = 1.5pu; and (v) normalized inertia constant H = 5. We further assume that in the over-excited region, the excitation limit is the binding constraint, as shown in Fig. <ref>. Observe that the maximum reactive power capability depends on the active power generated, and varies between Q_n at P_max = 1pu and Q_max at P_min.
We consider three cases defined by the total active power generation of the plant: (i) 0.8pu, (ii) 1.2pu, and (iii) 1.6pu.
The three scenarios correspond to the rows in Fig. <ref>, which shows the active power dispatch level P, reactive power support capability Q, online active power reserves R, and generator inertia H.
The three columns show feasible solutions for three different UC formulations: all three units are aggregated into one equivalent unit (AGG), standard binary UC (BUC) when each unit is modeled individually, and the proposed market simulation tool (MST). A detailed comparison of the three formulations is given in Section V.
Although the results are self-explanatory, a few things are worth emphasizing. In case (i), aggregating the units into one equivalent unit (AGG) results in the unit being shut down due to the minimum technical limit. The individual unit representation (BUC), on the other hand, does allow the dispatch of one or two units, but with significantly different operational characteristics. In cases (ii) and (iii), the total inertia in the AGG formulation is much higher, which has important implications for frequency stability. A similar observation can be made for the reactive power support capability, which affects voltage stability. Also, dispatching power from all three units results in a significantly higher active power reserve. And last, a higher reactive power generation due to a lower P reduces the internal machine angle, which improves transient stability.
In conclusion, a faithful representation of the number of online synchronous machines is of vital importance for stability assessment. An individual unit representation, however, is computationally expensive, so the computational burden should be reduced, as discussed in the following section. Next, an explicit network representation is required. An AC load flow formulation, however, is nonlinear (and non-convex), which results in an intractable mixed-integer nonlinear problem. Therefore, we use a DC load flow representation with a sufficiently small voltage angle difference on transmission lines. Our experience shows that an angle difference of 30° results in a manageably small number of infeasible operating conditions that can be dealt with separately.
§.§ Computational Speedup
The MST is based on the UC formulation using constant fixed, startup, shutdown and production costs. To improve its computational efficiency, the dimensionality of the optimization problem is reduced employing: (i) unit clustering <cit.> to reduce the number of variables needed to represent a multi-unit generation plant; (ii) a rolling horizon approach <cit.> to reduce the time dimension; and (iii) constraint clipping to remove most non-binding constraints.
§.§.§ Unit Clustering
Linearized UC models are computationally efficient for horizons of up to a few days, which makes them extremely useful for operational studies. For planning studies, however, where horizon lengths can be up to a year, or more, these models are still computationally too expensive. Our work builds on the clustering approach proposed in <cit.>, where identical units at each generation plant are aggregated by replacing binary variables with fewer integer variables. The status of online units, startup/shutdown decisions and dispatched power are tracked by three integer variables and one continuous variable per plant per period, as opposed to three binary and one continuous variable per unit per period. Further clustering proposed in <cit.> is not possible in our formulation because of the explicit network representation required in the MST.
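In a modeling library, the clustering amounts to little more than changing the variable domain; a minimal fragment (PuLP syntax; the plant size U_g and horizon are invented for illustration):

```python
from pulp import LpVariable, LpInteger

U_g, T = 6, range(24)   # an invented plant with 6 identical units, 24 periods

# clustered formulation: one integer (status, startup, shutdown) triple per
# plant and period, instead of three binaries per unit and period in the BUC
s = {t: LpVariable(f"s_{t}", lowBound=0, upBound=U_g, cat=LpInteger) for t in T}
u = {t: LpVariable(f"u_{t}", lowBound=0, upBound=U_g, cat=LpInteger) for t in T}
d = {t: LpVariable(f"d_{t}", lowBound=0, upBound=U_g, cat=LpInteger) for t in T}
```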
§.§.§ Rolling Horizon
Solving the UC as one block, especially for long horizons, is computationally too expensive. This can be overcome by breaking the problem into several smaller intervals called sub-horizons <cit.>. To ensure accuracy and consistency of the solution, a proper overlap between sub-horizons is maintained and the terminating state of the previous sub-horizon is used as the initial condition of the next sub-horizon. The minimum sub-horizon length depends on the time constants associated with the decision variables. While these might be in the order of hours for thermal power plants, they can be significantly longer for energy storage. Large-scale hydro dams, for example, require horizon lengths of several weeks, or even months. In our research, however, the sub-horizon length is up to a few days to cater for thermal energy storage (TES) of CST plants and battery storage. The optimization of hydro dams is not explicitly considered, however it can be taken into account heuristically, if needed.
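A generic sketch of the resulting loop is given below (the sub-horizon solver is left abstract, and the convention of committing only the non-overlapping part of each solution is one possible choice):

```python
def rolling_horizon(total_T, sub_len, overlap, solve_sub, init_state):
    """Chain sub-horizon solves; `solve_sub(t0, t1, state)` is assumed to
    return (per-period schedule over [t0, t1), system state at t0 + sub_len).
    The terminating state of one sub-horizon (online units, dispatch,
    storage levels) initializes the next one."""
    schedule, state, t0 = [], init_state, 0
    while t0 < total_T:
        t1 = min(t0 + sub_len + overlap, total_T)   # solve with lookahead
        sol, state = solve_sub(t0, t1, state)
        schedule.extend(sol[:sub_len])              # commit only the non-overlap part
        t0 += sub_len
    return schedule
```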
§.§.§ Constraint Clipping
The size of the problem can be reduced by removing non-binding constraints, which doesn't affect the feasible region. For instance, an MUDT constraint on a unit with an MUDT less than the time interval is redundant[This is especially the case when the time resolution is coarse. In our studies, the time step is one hour. In operational studies, where the resolution can be as short as five minutes, constraint clipping is less useful.]. Similarly, a ramp constraint for flexible units is redundant if the time step is sufficiently long. With a higher RES penetration, in particular, where backup generation is provided by fast-ramping gas turbines, this technique can significantly reduce the size of the optimization problem and hence improve the computational performance, due to the larger number of units with higher ramp rates and smaller MUDTs. It should be noted that optimization pre-solvers might not be able to automatically remove these constraints.
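A sketch of such a pre-filter is given below (illustrative only; `units` is an assumed dictionary of per-plant parameters, and the rules mirror the conditions attached to the ramp and MUDT constraint sets in the formulation below):

```python
def clip_constraints(units, dt):
    """Flag, per plant, which time-coupling constraints must be kept;
    the rest are removed before the model is handed to the solver."""
    keep = {}
    for g, u in units.items():
        keep[g] = {
            # a ramp limit can bind only if the unit cannot traverse its
            # rated power within one time step
            "ramp": u["ramp_rate"] * dt < u["pmax"],
            # MUDT constraints bind only if min up/down times exceed dt
            "min_up": u["min_up"] > dt,
            "min_down": u["min_down"] > dt,
        }
    return keep
```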
§.§ MST UC Formulation
§.§.§ Objective function
The objective of the proposed MST is to minimize total generation cost for all sub-horizons h:
minimize_Ω ∑_t∈𝒯 ∑_g∈𝒢 ( c_g^fix s_g,t + c_g^su u_g,t + c_g^sd d_g,t + c_g^var p_g,t ),
where Ω = {s_g,t,u_g,t,d_g,t,p_g,t, p_s,t, p_l,t} are the decision variables of the problem, and c_g^fix, c_g^su, c_g^sd, and c_g^var are fixed, startup, shutdown and variable cost, respectively.
As typically done in planning studies <cit.>, <cit.>, the costs are assumed constant to reduce the computational complexity. The framework, however, also admits a piece-wise linear approximation proposed in <cit.>.
§.§.§ System Constraints
System Constraints[All the constraints must be satisfied in all time slots t, however, for sake of notational brevity, this is not explicitly mentioned.] include power balance constraints, power reserve and minimum synchronous inertia requirements.
Power balance:
Power generated at node n must be equal to the node power demand plus the net power flow on transmission lines connected to the node:
∑_g∈𝒢_n p_g,t = ∑_c ∈𝒞_n p_c,t + ∑_p ∈𝒫_n p_p,t^g+ - ∑_p ∈𝒫_n p_p,t^g- + ∑_s ∈𝒮_n p_s,t + ∑_l ∈ℒ_n (p_l,t + Δp_l,t),
where 𝒢_n, 𝒞_n, 𝒫_n, 𝒮_n, ℒ_n represent respectively the set of generators, consumers, prosumers[Price-responsive users equipped with small-scale PV-battery systems.], utility storage plants and lines connected to node n.
Power reserves:
To cater for uncertainties, active power reserves provided by synchronous generation g ∈𝒢^syn are maintained in each region r:
∑_g ∈{(𝒢^syn-𝒢^CST) ∩ 𝒢^r} (p̄_g s_g,t - p_g,t) + ∑_g ∈{𝒢^CST ∩ 𝒢^r} min(p̄_g s_g,t - p_g,t, e_g,t - p_g,t) ≥ ∑_n ∈𝒩_r p_n,t^r.
For synchronous generators other than concentrated solar thermal (CST), reserves are defined as the difference between the online capacity and the current operating point. For CST, reserves can either be limited by their online capacity or energy level of their thermal energy system (TES).
Variable s_g,t in (<ref>) represents the total number of online units at each generation plant, and 𝒢^r and 𝒩_r represent the sets of generators and nodes in region r, respectively.
Minimum synchronous inertia requirement:
To ensure frequency stability, a minimum level of inertia provided by synchronous generation must be maintained at all times (more details are available in <cit.>) in each region r:
∑_g ∈{𝒢^syn ∩ 𝒢^r} s_g,t H_g S_g ≥ ∑_n ∈𝒩_r H_n,t.
§.§.§ Network constraints
Network constraints include DC power flow constraints and thermal line limits for AC lines, and active power limits for HVDC lines.
Line power constraints:
A DC load flow model is used for computational simplicity for AC transmission lines[A sufficiently small (∼30) voltage angle difference over a transmission line is used to reduce the number of nonconvergent AC power flow cases.]:
p_l,t^x,y = B_l(δ_x,t - δ_y,t), l ∈ℒ^𝒜𝒞,
where the variables δ_x,t and δ_y,t represent voltage angles at nodes x ∈𝒩 and y ∈𝒩, respectively.
Thermal line limits:
Power flows on all transmission lines are limited by the respective thermal limits of line l:
|p_l,t| ≤ p̄_l,

where p̄_l represents the thermal limit of line l.
§.§.§ Generation constraints
Generation constraints include physical limits of individual generation units.
For the binary unit commitment (BUC), we adopted a UC formulation requiring three binary variables per time slot (on/off status, startup, shutdown) to model an individual unit. In the MST, identical units of a plant are clustered into one individual unit <cit.>.
This requires three integer variables (on/of status, startup, and shutdown) per generation plant per time slot as opposed to three binary variables per generation unit per time slot in the BUC, as discussed in Section III.B of A Computationally Efficient Market Model for Future Grid Scenario Studies.
Generation limits:
Dispatch levels of a synchronous generator g are limited by the respective stable operating limits:
s_g,t p̲_g ≤ p_g,t ≤ s_g,t p̄_g, g ∈𝒢^syn.
The power of RES[For the sake of brevity, by RES we mean “unconventional” renewables like wind and solar, but excluding conventional RES, like hydro, and dispatchable unconventional renewables, like concentrated solar thermal.] generation is limited by the availability of the corresponding renewable resource (wind or sun):
s_g,t p̲_g ≤ p_g,t ≤ s_g,t p_g,t^RES, g ∈𝒢^RES.
Unit on/off constraints:
A unit can only be turned on if and only if it is in off state and vice versa:
u_g,t-d_g,t=s_g,t-s_g,t-1, t ≠ 1, g ∈𝒢^syn.
In a rolling horizon approach, consistency between adjacent time slots is ensured by:
u_g,t-d_g,t=s_g,t - ŝ_g, t =1, g ∈𝒢^syn,
where ŝ_g is the initial number of online units of generator g. Equations (<ref>) and (<ref>) also implicitly determine the upper bound of u_g,t and d_g,t in terms of changes in s_g,t.
Number of online units:
Unlike the BUC, the MST requires an explicit upper bound on status variables:
s_g,t≤U_g.
Ramp-up and ramp-down limits:
Ramp rates of synchronous generation should be kept within the respective ramp-up (<ref>), (<ref>) and ramp-down limits (<ref>), (<ref>):
p_g,t - p_g,t-1 ≤ s_g,t r^+_g, t ≠ 1, g ∈{𝒢^syn | r^+_g < p̄_g},

p_g,t - p̂_g ≤ s_g,t r^+_g, t = 1, g ∈{𝒢^syn | r^+_g < p̄_g},

p_g,t-1 - p_g,t ≤ s_g,t-1 r^-_g, t ≠ 1, g ∈{𝒢^syn | r^-_g < p̄_g},

p̂_g - p_g,t ≤ ŝ_g r^-_g, t = 1, g ∈{𝒢^syn | r^-_g < p̄_g}.
In the MST, a ramp limit of a power plant is defined as a product of the ramp limit of an individual unit and the number of online units in a power plant s_g,t. If s_g,t is binary, these ramp constraints are mathematically identical to ramp constraints of the BUC.
If a ramp rate multiplied by the length of the time resolution Δt is not less than the rated power, the rate limit has no effect on the dispatch, so the corresponding constraint can be eliminated (this is why the ramp constraints above are imposed only for units with r^+/-_g < p̄_g).
Constraints explicitly defined for t=1 are used to join two adjacent sub-horizons in the rolling-horizon approach.
Minimum up and down times:
Steam generators must remain on for a period of time τ_g^u once turned on (minimum up time):
s_g,t ≥ ∑_t̃=0^τ_g^u-1 u_g,t-t̃, t ≥ τ_g^u, g ∈{𝒢^syn | τ_g^u > Δt},

s_g,t ≥ ∑_t̃=0^t-1 u_g,t-t̃ + û_g,t, t < τ_g^u, g ∈{𝒢^syn | τ_g^u > Δt}.
Similarly, they must not be turned on for a period of time τ_g^d once turned off (minimum down time):
s_g,t ≤ U_g - ∑_t̃=0^τ_g^d-1 d_g,t-t̃, t ≥ τ_g^d, g ∈{𝒢^syn | τ_g^d > Δt},

s_g,t ≤ U_g - ∑_t̃=0^t-1 d_g,t-t̃ - d̂_g,t, t < τ_g^d, g ∈{𝒢^syn | τ_g^d > Δt}.
Similar to the rate limits, if the minimum up and down times are smaller than the time resolution Δt, the corresponding constraints can be eliminated.
Due to the integer nature of the discrete variables in the MST, the definition of the MUDT constraints in the RH approach requires the number of online units over the last τ^u/d time intervals to establish the relationship between adjacent sub-horizons. If τ_g^u/d is smaller than the time resolution Δt, then these constraints can be eliminated.
§.§.§ CST constraints:
CST constraints include TES energy balance and storage limits.
TES state of charge (SOC)
determines the TES energy balance subject to the accumulated energy in the previous time slot, thermal losses, thermal power provided by the solar farm and electrical power dispatched from the CST plant:
e_g,t=η_ge_g,t-1+p_g,t^CST-p_g,t, t ≠ 1, g ∈𝒢^CST,
e_g,t=η_gê_g+p_g,t^CST-p_g,t, t=1, g ∈𝒢^CST,
where, p_g,t^CST is the thermal power collected by the solar field of generator g ∈𝒢^CST.
TES limits: Energy stored is limited by the capacity of a storage tank:
e̲_g ≤ e_g,t ≤ ē_g, g ∈𝒢^CST.
§.§.§ Utility storage constraints
Utility-scale storage constraints include energy balance, storage capacity limits and power flow constraints. The formulation is generic and can capture a wide range of storage technologies.
Utility storage SOC limits determine the energy balance of storage plant s:
e_s,t=η_se_s,t-1+p_s,t, t ≠ 1,
e_s,t=η_sê_s+p_s,t, t=1.
Utility storage capacity limits:
Energy stored is limited by the capacity of storage plant s:
e̲_s ≤ e_s,t ≤ ē_s.
Charge/discharge rates limit the charge and discharge powers of storage plant s:
-p̄_s^- ≤ p_s,t ≤ p̄_s^+,

where p̄_s^- and p̄_s^+ represent the maximum power discharge and charge rates of a storage plant, respectively.
§.§.§ Prosumer sub-problem
The prosumer sub-problem captures the aggregated effect of prosumers. It is modeled using a bi-level framework in which the upper-level unit commitment problem described above minimizes the total generation cost, and the lower-level problem maximizes prosumers' self-consumption. The coupling is through the prosumers' demand, not through the electricity price, which renders the proposed model market structure agnostic. As such, it implicitly assumes a mechanism for demand response aggregation. The Karush-Kuhn-Tucker optimality conditions of the lower-level problem are added as the constraints to the upper-level problem, which reduces the problem to a single mixed integer linear program.
The model makes the following assumptions: (i) the loads are modeled as price anticipators; (ii) the demand model representing an aggregator consists of a large population of prosumers connected to an unconstrained distribution network who collectively maximize self-consumption; (iii) aggregators do not alter the underlying power consumption of the prosumers; and (iv) prosumers have smart meters equipped with home energy management systems for scheduling of the PV-battery systems, and, a communication infrastructure is assumed that allows a two-way communication between the grid, the aggregator and the prosumers. More details can be found in <cit.>.
Prosumer Objective function:
Prosumers aim to minimize electricity expenditure:
minimize_p_p^g+/-, p_p^b ∑_t∈𝒯 ( p_p,t^g+ - λ p_p,t^g- ),
where λ is the applicable feed-in price ratio. In our research, we assumed λ = 0, which corresponds to maximization of self-consumption.
The prosumer sub-problem is subject to the following constraints:
Prosumer power balance:
Electrical consumption of prosumer p, consisting of grid feed-in power, p_p,t^g-, underlying consumption, p_p,t^, and battery charging power, p_p,t^b, is equal to the power taken from the grid, p_p,t^g+, plus the power generated by the PV system, p_p,t^pv:
p_p,t^g+ + p_p,t^pv = p_p,t^g- + p_p,t + p_p,t^b.
Battery charge/discharge limits:
Battery power should not exceed the charge/discharge limits:
-p̄_p^b- ≤ p_p,t^b ≤ p̄_p^b+,

where p̄_p^b- and p̄_p^b+ represent the maximum power discharge and charge rates of the prosumer's battery, respectively.
Battery storage capacity limits:
Energy stored in a battery of prosumer p should always be less than its capacity:
e̲_p^b ≤ e_p,t^b ≤ ē_p^b.
Battery SOC limits:
Battery SOC is the sum of the power inflow and the SOC in the previous period:
e_p,t^b = η_p^b e_p,t-1^b + p_p,t^b, t ≠ 1,

e_p,t^b = η_p^b ê_p^b + p_p,t^b, t = 1,
where ê_p^b represents the initial SOC and is used to establish the connection between adjacent sub-horizons.
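For illustration, the prosumer sub-problem can be written as a stand-alone LP (whereas in the MST its KKT conditions are embedded in the upper-level problem); in this PuLP sketch the PV trace, load, battery ratings and initial SOC are invented placeholders, and λ = 0 as in the paper:

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

T, lam, eta = range(24), 0.0, 0.95                 # horizon, feed-in ratio, battery efficiency
load = [0.5] * 24                                  # placeholder underlying demand (kW)
pv = [max(0.0, 2.0 - abs(t - 12) / 3) for t in T]  # placeholder aggregated PV trace (kW)

prob = LpProblem("prosumer", LpMinimize)
g_in = {t: LpVariable(f"gin_{t}", lowBound=0) for t in T}           # grid power p^g+
g_out = {t: LpVariable(f"gout_{t}", lowBound=0) for t in T}         # feed-in power p^g-
pb = {t: LpVariable(f"pb_{t}", lowBound=-2, upBound=2) for t in T}  # battery power p^b
e = {t: LpVariable(f"e_{t}", lowBound=0, upBound=10) for t in T}    # battery SOC e^b

prob += lpSum(g_in[t] - lam * g_out[t] for t in T)                  # electricity expenditure
for t in T:
    prob += g_in[t] + pv[t] == g_out[t] + load[t] + pb[t]           # prosumer power balance
    prev = e[t - 1] if t > 0 else 5.0                               # assumed initial SOC
    prob += e[t] == eta * prev + pb[t]                              # SOC dynamics

prob.solve()
```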
§ SIMULATION SETUP
The case studies provided in this section compare the computational efficiency of the proposed MST with alternative formulations. For detailed studies on the impact of different technologies on future grids, an interested reader can refer to our previous work <cit.>.
§.§ Test System
We use a modified 14-generator IEEE test system that was initially proposed in <cit.> as a test bed for small-signal analysis. The system is loosely based on the Australian National Electricity Market (NEM), the interconnection on the Australian eastern seaboard. The network is stringy, with large transmission distances and loads concentrated in a few load centres. Generation, demand and the transmission network were modified to meet future load requirements. The modified model consists of 79 buses grouped into four regions, 101 units installed at 14 generation plants and 810 transmission lines.
§.§ Test Cases
To expose the limitations of the different UC formulations, we have selected a typical week with sufficiently varying operating conditions.
Four diverse test cases with different RES penetrations are considered.
First, RES0 considers only conventional generation, including hydro, black coal, brown coal, combined cycle gas and open cycle gas. The generation mix consists of 2.31 GW of hydro, 39.35 GW of coal and 5.16 GW of gas, with a peak load of 36.5 GW. To cater for demand and generation variations, 10% reserves are maintained at all times. The generators are assumed to bid at their respective short run marginal costs, based on regional fuel prices <cit.>.
Cases RES30, RES50, RES75 consider, respectively, 30%, 50% and 75% annual energy RES penetration, supplied by wind, PV and CST. Normalized power traces for PV, CST and wind farms (WFs) for the 16 zones of the NEM are taken from the AEMO's planning document <cit.>. The locations of RESs are loosely based on the AEMO's 100% RES study <cit.>.
§.§ Modeling Assumptions
Power traces of all PV modules and wind turbines at one plant are aggregated and represented by a single generator. This is a reasonable assumption given that PV and WF don't provide active power reserves, and are not limited by ramp rates, MUDT, and startup and shutdown costs, which renders the information on the number of online units unnecessary.
Also worth mentioning is that RES can be modeled as negative demand, which can lead to an infeasible solution, because modeling RES (wind and solar PV) as negative demand is identical to preventing RES from spilling energy. Given the high RES penetration in future grids, we model RES explicitly as individual generators.
Unlike solar PV and wind, CST requires a different modeling approach. Given that CST is synchronous generation it also contributes to spinning reserves and system inertia. Therefore, the number of online units in a CST plant needs to be modeled explicitly.
An optimality gap of 1% was used for all test cases. Simulations were run on a Dell OPTIPLEX 9020 desktop computer with an Intel(R) Core(TM) i7-4770 CPU with a 3.40 GHz clock speed and 16 GB RAM.
§ RESULTS AND DISCUSSION
To showcase the computational efficiency of the proposed MST, we first benchmark its performance for different horizon lengths against the BUC formulation employing three binary variables per unit per time slot and the AGG formulation where identical units at each plant are aggregated into a single unit, which requires three binary variables per plant per time slot.
We pay particular attention to the techniques used for computational speedup, namely unit clustering, rolling horizon, and constraint clipping. Last, we compare the results of the proposed MST with BUC and AGG formulations for voltage and frequency stability studies.
§.§ Binary Unit Commitment (BUC)
We first run the BUC for horizon lengths varying from one to seven days, Fig. <ref> (top).
As expected, with the increase in the horizon length, the solution time increases exponentially. For a seven-day horizon, the solution time is as high as 25000 s (about 7 h). Observe how the computational burden is highly dependent on the RES penetration. The variability of the RES results in an increased cycling of the conventional thermal fleet, which increases the number of on/off decisions and, consequently, the computational burden. In addition to that, a higher RES penetration involves an increased operation of CST. This poses an additional computational burden due to the decision variables associated with TES that span several time slots.
In summary, the computational burden of the BUC renders it inappropriate for scenario analysis involving extended horizons.
§.§ Aggregated Formulation (AGG)
Aggregating identical units at a power plant into a single unit results in a smaller number of binary variables, which should in principle reduce the computational complexity.
Fig. <ref> confirms that this is mostly true; however, for RES50-HL7 the computation time is higher than in the BUC formulation. The reason is that, in this particular case, the BUC formulation has a tighter relaxation than the AGG formulation and, consequently, a smaller root node gap. The MST formulation, which has a similar number of variables as the AGG formulation, has a considerably shorter computation time due to a smaller root node gap.
In terms of accuracy, the AGG formulation works well for balancing studies <cit.>. On the other hand, the number of online synchronous generators in the dispatch differs significantly from the BUC, which negatively affects the accuracy of voltage and frequency stability analysis, as shown later. Due to the large number of online units in a particular scenario, a direct comparison of dispatch levels and reserves from each generator is difficult. Therefore, we compare the total number of online synchronous generators, which serves as a proxy for the available system inertia. Fig. <ref> shows the number of online generators for four different RES penetration levels for a horizon length of seven days. For most of the hours there is a significant difference between the number of online units obtained from the BUC and the AGG formulation.
In conclusion, despite its computational advantages, the AGG formulation is not appropriate for stability studies due to large variations in the number of online synchronous units in the dispatch results. In addition to that, the computational time is comparable to the BUC in some cases.
We now evaluate the effectiveness of the techniques for the computational speedup.
§.§.§ Unit Clustering
In unit clustering, binary variables associated with the generation unit constraints are replaced with a smaller number of integer variables, which allows aggregating several identical units into one equivalent unit, but with the number of online units retained. This results in a significant reduction in the number of variables and, consequently, in a computational speedup. Compared to the BUC, the number of variables in the MST with this technique alone reduces from 24649 to 5990 for RES75 with a horizon length of seven days. Therefore, the solution time for RES75-HL7 reduces from 25000 s in the BUC to 450 s in the MST with unit clustering alone.
§.§.§ Rolling Horizon Approach
A rolling horizon approach splits the UC problem into shorter horizons. Given the exponential relationship between the computational burden and the horizon length, as discussed in Section <ref>, solving the problem in a number of smaller chunks instead of in one block results in a significant computational speedup. The accuracy and the consistency of the solution are maintained by having an appropriate overlap between the adjacent horizons. However, the overlap depends on the time constants of the problem. Long term storage, for example, might require longer solution horizons. The solution times for different RES penetrations are shown in Table <ref>. Observe that in the RES75 case, the effect of rolling horizon is much more pronounced, which confirms the validity of the approach for studies with high RES penetration.
§.§.§ Constraint Clipping
Eliminating non binding constraints can speedup the computation even further. Table <ref> shows the number of constraints for different scenarios with and without constraint clipping. Observe that the number of redundant constraints is higher in scenarios with a higher RES penetration. The reason is that a higher RES penetration requires more flexible gas generation with ramp rates shorter than the time resolution (one hour in our case). Note that the benefit of constraint clipping with a shorter time resolution will be smaller.
§.§ MST Computation Time and Accuracy
The proposed MST outperforms the BUC and AGG in terms of the computational time by several orders of magnitude, as shown in Fig. <ref> (bottom). The difference is more pronounced at higher RES penetration levels. For RES75, the MST is more than 500 times faster than the BUC. In terms of the accuracy, the MST results are almost indistinguishable from the BUC results, as evident from Fig. <ref> that shows the number of online synchronous units for different RES penetration levels. Minor differences in the results stem from the nature of the optimization problem. Due to its mixed-integer structure, the problem is non-convex and has therefore several local optima. Given that the BUC and the MST are mathematically not equivalent, the respective solutions might not be exactly the same. The results are nevertheless very close, which confirms the validity of the approach for the purpose of scenario analysis. The loadability and inertia results presented later further support this conclusion.
§.§ Stability Assessment
To showcase the applicability of the MST for stability assessment, we analyze system inertia and loadability, which serve as proxies for frequency and voltage stability, respectively. More detailed stability studies are covered in our previous work, including small-signal stability <cit.>, frequency stability <cit.>, and voltage stability <cit.>.
§.§.§ System Inertia
Fig. <ref> (bottom) shows the system inertia obtained from the BUC, the AGG, and the proposed MST for RES0. Given that inertia is the dominant factor in the frequency response of a system after a major disturbance, the minuscule difference between the BUC and the MST observed in Fig. <ref> validates the suitability of the MST for frequency stability assessment. The inertia captured by the AGG, on the other hand, is either over- or underestimated, and so does not provide a reliable basis for frequency stability assessment.
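For reference, the inertia proxy used here follows the standard aggregation of machine inertia constants over committed synchronous units; a one-line sketch:

```python
# System inertia constant (s) from online synchronous machines, using the
# standard weighting H_sys = sum_i H_i * S_i / S_base.
def system_inertia(online_units, s_base):
    return sum(u["H"] * u["S"] for u in online_units) / s_base
```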
§.§.§ Loadability Analysis
The dispatch results from the MST are used to calculate power flows, which are then used in loadability analysis[The loadability analysis is performed by uniformly increasing the load in the system until the load flow fails to converge. The loadability margin is calculated as the difference between the base system load and the load in the last convergent load flow iteration.]. Fig. <ref> (top) shows loadability margins for the RES0 scenario for different UC formulations. Observe that the BUC and the MST produce very similar results. The AGG formulation, on the other hand, gives significantly different results. From hours 95 to 150, in particular, the AGG results show that the system is unstable most of the time, which is in direct contradiction to the accurate BUC formulation.
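A sketch of the search described in the footnote is given below; `run_power_flow` is a hypothetical hook that returns whether the load flow converged for a given load vector:

```python
# Loadability margin by uniform load scaling until load-flow divergence.
def loadability_margin(base_loads, run_power_flow, dscale=0.01):
    scale = 1.0
    while run_power_flow([scale * p for p in base_loads]):
        scale += dscale                        # uniformly increase the load
    last_ok = scale - dscale                   # last convergent iteration
    return (last_ok - 1.0) * sum(base_loads)   # MW margin above base load
```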
Compared to the inertia analysis, the differences between the formulations are much more pronounced. Unlike voltage, frequency is a system-wide variable, meaning that it is uniform across the system; moreover, inertia depends only on the number of online units, not on their dispatch levels. Voltage stability, on the other hand, is highly sensitive to both the number of online units and their dispatch levels, which determine the available reactive power support capability, as illustrated in Fig. <ref>. Close to the voltage stability limit, the system becomes highly nonlinear, so even small variations in dispatch can significantly change the power flows and, consequently, the voltage stability of the system. One can argue that, in comparison to the BUC, the proposed MST results in a more conservative loadability margin, although this is not always the case (around hour 85, the MST is less conservative).
§ CONCLUSION
This paper has proposed a computationally efficient electricity market simulation tool based on a UC problem suitable for future grid scenario analysis. The proposed UC formulation includes an explicit network representation and accounts for the uptake of emerging demand-side technologies in a unified generic framework, while allowing for a subsequent stability assessment. We have shown that unit aggregation, used in conventional planning-type UC formulations to achieve computational speedup, fails to properly capture the system inertia and reactive power support capability, which are crucial for stability assessment. To address this shortcoming, we have proposed a UC formulation that models the number of online generation units explicitly while remaining amenable to the computationally expensive time-series analysis required in future grid studies. To achieve further speedup, we use a rolling horizon approach and constraint clipping.
The effectiveness of the computational speedup techniques depends on the problem structure and the technologies involved, so the results cannot be readily generalized. The computational speedup varies between 20 and more than 500 times for zero and 75% RES penetration, respectively, which can be explained by the more frequent cycling of conventional thermal units in the high-RES case. The simulation results have shown that the computational speedup does not jeopardize accuracy. Both the number of online units, which serves as a proxy for the system inertia, and the loadability results are in close agreement with more detailed UC formulations, which confirms the validity of the approach for long-term future grid studies, where one is more interested in finding weak points in the system than in a detailed analysis of an individual operating condition.
|
http://arxiv.org/abs/1701.07493v1 | 20170125213332 | Ballistic, diffusive, and arrested transport in disordered momentum-space lattices | [
"Fangzhao Alex An",
"Eric J. Meier",
"Bryce Gadway"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas",
"physics.atom-ph",
"quant-ph"
] |
bgadway@illinois.edu
Department of Physics, University of Illinois at Urbana-Champaign, Urbana, IL 61801-3080, USA
Ultracold atoms in optical lattices offer a unique platform for investigating disorder-driven phenomena.
While static disordered site potentials have been explored in a number of optical lattice experiments, a more general control over site-energy and off-diagonal tunneling disorder has been lacking.
The use of atomic quantum states as “synthetic dimensions” has introduced the spectroscopic, site-resolved control necessary to engineer new, more tailored realizations of disorder.
Here, by controlling laser-driven dynamics of atomic population in a momentum-space lattice, we extend the range of synthetic-dimension-based quantum simulation and present the first explorations of dynamical disorder and tunneling disorder in an atomic system.
By applying static tunneling phase disorder to a one-dimensional lattice, we observe ballistic quantum spreading as in the case of uniform tunneling.
When the applied disorder fluctuates on timescales comparable to intersite tunneling, we instead observe diffusive atomic transport, signaling a crossover from quantum to classical expansion dynamics.
We compare these observations to the case of static site-energy disorder, where we directly observe quantum localization in the momentum-space lattice.
Ballistic, diffusive, and arrested transport in disordered momentum-space lattices
Bryce Gadway
December 30, 2023
==================================================================================
Over the past two decades, dilute atomic gases have become a fertile testing ground for the study of localization phenomena in disordered quantum systems <cit.>. They have allowed for some of the earliest and most comprehensive studies of Anderson localization of quantum particles <cit.>, strongly interacting disordered matter <cit.>, and many-body localization <cit.>.
Still, the emulation of many types of disorder relevant to real systems - e.g., crystal strain and dislocation, site vacancies, interstitial and substitutional defects, magnetic disorder, and thermal phonons - will require new types of control that go beyond traditional methods based on static disorder potentials <cit.>.
The recent advent of using atomic quantum states as synthetic dimensions has broadened the cold atom toolkit with the spectroscopic, site-resolved control of field-driven transitions <cit.>. This technique has aided the study of synthetic gauge fields <cit.>, and its spatial and dynamical control offers a prime way to implement specifically tailored, dynamical realizations of disorder that would otherwise be difficult to study. However, current studies based on internal states <cit.> have been limited to a small number of sites along the synthetic dimension, inhibiting the study of quantum localization in the presence of disorder.
Here, we employ our recently-developed technique of momentum-space lattices <cit.> to perform the first studies of tailored and dynamical disorder in synthetic dimensions.
Our approach introduces several key advances to cold atom studies of disorder: the achievement of pure off-diagonal tunneling disorder, the dynamical variation of disorder, and site-resolved detection of populations in a disordered system.
For the case of tunneling disorder, we examine the scenario in which only the phase of tunneling is disordered. As expected for a one-dimensional system with only nearest-neighbor tunneling, these random tunneling phases are of zero consequence when applied in a static manner.
When this phase disorder fluctuates on timescales comparable to intersite tunneling, however, we observe a crossover from ballistic to diffusive transport <cit.>.
We compare to the case of static site-energy disorder, observing Anderson localization at the site-resolved level.
Our bottom-up approach <cit.> to Hamiltonian engineering is based on the coherent coupling of atomic momentum states to form an effective synthetic lattice of sites in momentum space (see Fig. <ref>).
This approach may be viewed as studying transport in an artificial dimension <cit.> of discrete spatial eigenstates <cit.> (as opposed to a bounded set of atomic internal states <cit.>) through resonant or near-resonant field-driven transitions.
Starting with ^87Rb Bose-Einstein condensates of ∼5 × 10^4 atoms, we initiate dynamics between 21 discrete momentum states by applying sets of counter-propagating far-detuned laser fields (wavelength λ = 1064 nm, wavevector k = 2π/λ), specifically detuned to address multiple two-photon Bragg transitions, as depicted in Fig. 1(a-b).
Our spectrally-resolved control of the individual Bragg transitions permits a local control of the system parameters, similar to that found in photonic simulators <cit.>.
Unique to our implementation is the direct and arbitrary control of tunneling phases <cit.>, and the realized tight-binding model is depicted in Fig. 1(c). Here, we use this capability to explore the dynamics of cold atoms subject to disordered and dynamical arrangements of tunneling elements.
Specifically, we explore disorder arising purely in the phase of nearest-neighbor tunneling elements. In higher dimensions, such disordered tunneling phases would give rise to random flux patterns that mimic the physics of charged particles in a random magnetic field <cit.>. In 1D, however, the absence of closed tunneling paths renders any static arrangement of tunneling phases inconsequential to the dynamical and equilibrium properties of the particle density. Time-varying phases, however, can have a nontrivial influence on the system's dynamical evolution.
We engineer annealed, or dynamically varying, disorder of the tunneling phases and study its influence through the atoms' nonequilibrium dynamics following a tunneling quench. Our experiments begin with all population restricted to a single momentum state (site). We suddenly turn on the Bragg laser fields, quenching on the (in general) time-dependent effective Hamiltonian
Ĥ(τ) ≈ -t ∑_n (e^i φ_n (τ)ĉ^†_n+1ĉ_n + h.c.) + ∑_n ε_n ĉ^†_n ĉ_n ,
where τ is the time variable, t is the (homogeneous) tunneling energy, and ĉ_n (ĉ^†_n) is the annihilation (creation) operator for the momentum state with index n (momentum p_n = 2nħ k). The tunneling phases φ_n and site energies ε_n are controlled through the phases and detunings of the two-photon momentum Bragg transitions, respectively. After a variable duration of laser-driven dynamics, we perform direct absorption imaging of the final distribution of momentum states, which naturally separate during 18 ms time of flight. Analysis of these distributions, including determination of site populations through a multi-Gaussian fit, is as described in Ref. <cit.>.
As a control, we first examine the case of no disorder, with all site energies set to zero and uniform, static tunneling phases φ_n(τ) = φ. Figure 2(a,i) shows the evolution of the 1D momentum distribution, obtained from time-of-flight images integrated along the axis normal to the imparted momentum, displaying ballistic expansion characteristic of a continuous-time quantum walk. For times before the atoms reflect from the open boundaries of the 21-site lattice, we find good qualitative agreement between the observed momentum distributions and the expected form P_n = |J_n (ϑ)|^2, where J_n is the Bessel function of order n and ϑ = 2 τ t/ħ. Figure 2(b,i) shows the (symmetrized) momentum profile at time τ∼ 3 ħ/t along with the Bessel function distribution for ϑ = 5.4.
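This prediction is straightforward to verify numerically. The sketch below (Python, with ħ = t = 1; the evolution time is our own choice) evolves a single-site initial state under the disorder-free tight-binding Hamiltonian above and compares with the Bessel form:

```python
# Continuous-time quantum walk on 21 sites vs. the Bessel prediction
# P_n = |J_n(2 t tau / hbar)|^2 (hbar = t = 1; open boundaries).
import numpy as np
from scipy.linalg import expm
from scipy.special import jv

N, tau = 21, 2.7                              # sites, time in hbar/t
H = -(np.eye(N, k=1) + np.eye(N, k=-1))       # uniform tunneling, eps_n = 0
psi0 = np.zeros(N)
psi0[N // 2] = 1.0                            # start on the central site
P_exact = np.abs(expm(-1j * H * tau) @ psi0) ** 2
n = np.arange(N) - N // 2
P_bessel = np.abs(jv(n, 2 * tau)) ** 2        # valid before edge reflections
print(np.max(np.abs(P_exact - P_bessel)))     # small for tau << N/2
```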
In comparison, Fig. 2(a,ii) shows the case of zero site energies and static, random tunneling phases φ_n ∈ [0,2π). The dynamics are nearly identical to the case of uniform tunneling phases. This is consistent with the expectation that any pattern of static tunneling phases in 1D is irrelevant for the dynamics of the effective tight-binding model realized by our controlled laser coupling. For this case, Fig. 2(b,ii) shows the (symmetrized) momentum profile at τ∼ 2.5 ħ/t along with the Bessel function distribution for ϑ = 5.35.
While static phase disorder has little impact on the quantum random walk dynamics, we may generally expect that controlled random phase jumps or even pseudorandom variations of the phases should inhibit coherent transport, mimicking random phase shifts induced through interaction with a thermal environment.
To probe such behavior, we implement dynamical phase disorder by composing each tunneling phase φ_n from a broad spectrum of oscillatory terms with randomly-defined phases θ_n,i but well-defined frequencies ω_i, the weights of which are derived from an ohmic bath distribution. Specifically, the dynamical tunneling phases take the form
φ_n(τ) = 4π∑_i = 1^N S(ω_i) cos (ω_i τ + θ_n,i) / ∑_i = 1^N S(ω_i),
where S(ω) = (ħω / k_B T) exp[-(ħω / k_B T)],
the θ_n,i are randomly chosen from [0, 2π), and T is an artificial temperature scale that sets the range of the frequency distribution. In this discrete formulation of φ_n(τ), we include N=50 frequencies ranging between zero and 8 k_B T / ħ. The frequency spectrum and dynamics for one tunneling phase φ_n(τ) are shown in Fig. 2(c) for the case of k_BT/t = 1.
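The construction is easy to emulate; the following sketch (our own discretization choices, consistent with the stated N = 50 terms and the 0 to 8 k_B T/ħ frequency range; the zero-frequency term is omitted since it would only add a constant offset) generates one realization of a dynamical tunneling phase:

```python
# One realization of the annealed tunneling phase phi_n(tau) defined above.
import numpy as np

def phase_trajectory(tau, kBT=1.0, n_freq=50, seed=0):
    """phi(tau) for one link; tau in hbar/t, energies in units of t."""
    rng = np.random.default_rng(seed)
    w = np.linspace(0.0, 8.0 * kBT, n_freq + 1)[1:]   # 0 < w <= 8 kBT/hbar
    S = (w / kBT) * np.exp(-w / kBT)                  # ohmic weights S(w)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_freq)     # random offsets
    terms = S[:, None] * np.cos(np.outer(w, tau) + theta[:, None])
    return 4.0 * np.pi * terms.sum(axis=0) / S.sum()

tau = np.linspace(0.0, 10.0, 400)
phi = phase_trajectory(tau)
```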
Figure 2(a,iii) displays the population dynamics in the presence of this dynamical disorder, characterized by an effective temperature k_B T/t = 0.66(1) and averaged over three independent realizations of the disorder. The dynamics no longer feature ballistically separating wavepackets, instead displaying a broad, slowly spreading distribution peaked near zero momentum. A clear deviation of the (symmetrized) momentum distribution from the form P_n = |J_n (ϑ)|^2 describing the previous quantum walk dynamics can be seen in Fig. 2(b,iii) (shown at the time τ∼ 3.8 ħ/t). The displayed Gaussian population distribution gives much better agreement, consistent with spreading governed by an effectively classical or thermal random walk.
Lastly, while no influence of static tunneling phase disorder is expected in 1D, the effect of static site-energy disorder is dramatically different.
Here, with homogeneous static tunneling terms, we explore the influence of pseudorandom variations of the site energies governed by the Aubry-André model <cit.>.
With an irrational periodicity b=(√(5)-1)/2, the site energies ε_n = Δcos(2 π b n + ϕ) do not repeat, and are governed by a pseudorandom distribution.
For an infinite system, this Aubry-André model with diagonal disorder features a metal-insulator transition at the critical disorder strength Δ_c = 2 t.
The expansion dynamics for the strong disorder case Δ / t = 5.9(1) are shown in Fig. 2(a,iv), with population largely restricted to the initial, central momentum order. The exponentially localized distribution of site populations (symmetrized and averaged over all profiles in the range τ∼ 5-6.3 ħ/t) is shown in Fig. 2(b,iv), along with an exponential distribution with localization length ξ = 0.6 lattice sites. Analogous population distributions (again symmetrized and averaged over the same time range) are shown for the cases of weaker disorder [Δ/t = 0.98(1), 1.96(3), 3.05(4), 4.02(9)] in Fig. 2(d), exhibiting an apparent transition to exponential localization for Δ/t ≳ 2.
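The localization crossover can be reproduced with the same minimal numerics as before (a sketch; units of t = 1, 21 sites, arbitrary phase offset):

```python
# Expansion under Aubry-Andre site-energy disorder,
# eps_n = Delta * cos(2*pi*b*n + phi), from a single central site.
import numpy as np
from scipy.linalg import expm

N, b = 21, (np.sqrt(5) - 1) / 2

def site_populations(Delta, tau, phi=0.3):
    H = -(np.eye(N, k=1) + np.eye(N, k=-1))        # uniform tunneling t = 1
    H += np.diag(Delta * np.cos(2 * np.pi * b * np.arange(N) + phi))
    psi0 = np.zeros(N)
    psi0[N // 2] = 1.0
    return np.abs(expm(-1j * H * tau) @ psi0) ** 2

P_weak, P_strong = site_populations(1.0, 6.0), site_populations(5.9, 6.0)
# For Delta/t >~ 2 the population stays exponentially peaked at the center.
```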
For all of the explored cases, we study these expansion dynamics in greater detail in Fig. <ref>.
Figure 3(a) examines the momentum-width (σ_p) dynamics of the atomic distributions for the cases of static and dynamic random phase disorder. For static phase disorder, we observe a roughly linear increase of σ_p until population reflects from the open system boundaries, while dynamical phase disorder leads to sub-ballistic expansion.
In particular, for time τ measured in units of ħ/t and momentum-width σ_p in units of the site separation 2 ħ k, these two cases agree well with the displayed theory curves for ballistic and diffusive expansion, having the forms σ_p = √(2)τ and σ_p = √(2 τ), respectively (with the latter curve shifted by 0.35 ħ /t).
To explore these two different expansions more quantitatively, we fit the momentum variance V_p ≡σ_p^2 to a power-law V_p(τ) = ατ^γ, performing a linear fit to variance dynamics on a double logarithmic scale as shown in Fig. 3(c).
The fit-determined expansion exponents γ for the cases of static and dynamically disordered tunneling phases are 2.05(2) and 1.27(2), respectively. These values are roughly consistent with a coherent, quantum random walk for the case of static tunneling phases and an incoherent, nearly diffusive random walk for the case of dynamical phase disorder.
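The fit itself is ordinary linear regression on a double-logarithmic scale; a short sketch:

```python
# Expansion exponent gamma from V_p(tau) = alpha * tau**gamma.
import numpy as np

def expansion_exponent(tau, var_p):
    gamma, log_alpha = np.polyfit(np.log(tau), np.log(var_p), 1)
    return gamma, np.exp(log_alpha)
# gamma ~ 2: ballistic; ~ 1: diffusive; ~ 0: arrested transport.
```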
The observed transport dynamics cross over from ballistic to diffusive as the effective thermal energy scale k_B T approaches the coherent tunneling energy t, matching our expectation that randomly-varying tunneling phases can mimic the random dephasing induced by a thermal environment.
We note that similar classical random walk behavior has been seen previously for both atoms and photons, due to irreversible decoherence <cit.> and thermal excitations <cit.>.
However, this is the first observation based on reversible engineered “noise” of a Hamiltonian parameter.
These observations of a thermal random walk suggest that annealed disorder may provide a means of mimicking thermal fluctuations and studying thermodynamical properties <cit.> of simulated models using atomic momentum-space lattices, and by extension other nonequilibrium experimental platforms such as photonic simulators.
We also analyze the full expansion dynamics for the case of static site-energy disorder in Figs. 3(b,d). For homogeneous static tunnelings and thus zero disorder (Δ/t = 0), we observe momentum-width dynamics similar to the case of static random tunneling phases, but with one distinct difference: while σ_p features a linear increase for random static phases, it increases in a step-wise fashion for uniform tunneling phases. This slight disagreement is a byproduct of the underlying laser-driven dynamics that give rise to the effective tight-binding model described by Eq. <ref>. The Bragg laser field 2 (see Fig. <ref>) features a comb of 20 discrete, equally-spaced frequencies, each of which primarily addresses a single Bragg transition. Weak off-resonant coupling terms conspire to produce this step-like behavior in the case of equal-phase driving, while this behavior is mostly absent for random tunneling phases.
The evolution of the momentum-width (σ_p) for the site-energy disorder cases Δ/t = 0.98(1), 2.47(3), and 5.9(1) is also shown in Fig. 3(b). We observe a reduction of the expansion dynamics with increasing disorder, with nearly arrested dynamics in the strong-disorder limit. More quantitatively, fits of the variance dynamics, as shown in Fig. 3(d), reveal sub-ballistic, nearly diffusive expansion for intermediate disorder [γ = 1.00(2) for Δ/t = 0.98(1)], giving way to a nearly vanishing expansion exponent for strong disorder [γ = 0.12(6) for Δ/t = 5.9(1)].
The extracted expansion exponents for all of the explored cases are summarized in Fig. 3(e). For static site-energy disorder (red circles), while longer expansion times than those explored (τ≲ 6.3 ħ/t) would better distinguish insulating behavior from sub-ballistic and sub-diffusive expansion, a clear trend towards arrested transport (γ∼ 0) is found for Δ/t ≫ 1. Combined with the observation of exponential localization of the site populations in Fig. 2(b,iv) and Fig. 2(d), these observations are consistent with a crossover in our 21-site system from metallic behavior to quantum localization for Δ/t ≳ 2.
Our observations of a crossover from ballistic expansion (γ∼ 2) to nearly diffusive transport (γ∼ 1) for randomly fluctuating tunneling phase disorder are also summarized in Fig. 3(e).
In the experimentally-accessible regime of low to moderate effective thermal energies (k_B T / t ≲ 1), our experimental data points (blue squares) match up well with numerical simulation (open black circles).
For the magnitude of tunneling energy used in these experiments, we are restricted from exploring higher effective temperatures (k_B T /t ≳ 1), as rapid variations of the tunneling phases introduce spurious spectral components of the Bragg laser fields that could drive undesired transitions.
Simulations in this high-temperature regime suggest that the expansion exponent should rise back up for increasing temperatures, saturating to a value γ∼ 2.
This results from the fact that the time-averaged phase effectively vanishes when the timescale of pseudorandom phase variations is much shorter than the tunneling time.
The demonstrated levels of local and time-dependent control over tunneling elements and site energies in our synthetic momentum-space lattice have allowed us to perform first-of-their-kind explorations of annealed disorder in an atomic system. Such an approach based on synthetic dimensions should enable myriad future explorations of engineered Floquet dynamics <cit.> and novel disordered lattices <cit.>.
Furthermore, the realization of designer disorder in a system that supports nonlinear atomic interactions <cit.> should permit us to explore novel aspects of many-body localization <cit.>.
[1] L. Sanchez-Palencia and M. Lewenstein, Nat. Phys. 6, 87 (2010).
[2] F. L. Moore, J. C. Robinson, C. F. Bharucha, B. Sundaram, and M. G. Raizen, Phys. Rev. Lett. 75, 4598 (1995).
[3] J. Chabé, G. Lemarié, B. Grémaud, D. Delande, P. Szriftgiser, and J. C. Garreau, Phys. Rev. Lett. 101, 255702 (2008).
[4] G. Roati, C. D'Errico, L. Fallani, M. Fattori, C. Fort, M. Zaccanti, G. Modugno, M. Modugno, and M. Inguscio, Nature 453, 895 (2008).
[5] J. Billy, V. Josse, Z. Zuo, A. Bernard, B. Hambrecht, P. Lugan, D. Clément, L. Sanchez-Palencia, P. Bouyer, and A. Aspect, Nature 453, 891 (2008).
[6] S. S. Kondov, W. R. McGehee, J. J. Zirbel, and B. DeMarco, Science 334, 66 (2011).
[7] F. Jendrzejewski, A. Bernard, K. Müller, P. Cheinet, V. Josse, M. Piraud, L. Pezzé, L. Sanchez-Palencia, A. Aspect, and P. Bouyer, Nat. Phys. 8, 398 (2012).
[8] G. Semeghini, M. Landini, P. Castilho, S. Roy, G. Spagnolli, A. Trenkwalder, M. Fattori, M. Inguscio, and G. Modugno, Nat. Phys. 11, 554 (2015).
[9] L. Fallani, J. E. Lye, V. Guarrera, C. Fort, and M. Inguscio, Phys. Rev. Lett. 98, 130404 (2007).
[10] M. White, M. Pasienski, D. McKay, S. Q. Zhou, D. Ceperley, and B. DeMarco, Phys. Rev. Lett. 102, 055301 (2009).
[11] M. Pasienski, D. McKay, M. White, and B. DeMarco, Nat. Phys. 6, 677 (2010).
[12] B. Gadway, D. Pertot, J. Reeves, M. Vogt, and D. Schneble, Phys. Rev. Lett. 107, 145306 (2011).
[13] C. Meldgin, U. Ray, P. Russ, D. Chen, D. M. Ceperley, and B. DeMarco, Nat. Phys. 4, 945 (2016).
[14] C. D'Errico, E. Lucioni, L. Tanzi, L. Gori, G. Roux, I. P. McCulloch, T. Giamarchi, M. Inguscio, and G. Modugno, Phys. Rev. Lett. 113, 095301 (2014).
[15] S. S. Kondov, W. R. McGehee, W. Xu, and B. DeMarco, Phys. Rev. Lett. 114, 083002 (2015).
[16] M. Schreiber, S. S. Hodgman, P. Bordia, H. P. Lüschen, M. H. Fischer, R. Vosk, E. Altman, U. Schneider, and I. Bloch, Science 349, 842 (2015).
[17] J.-y. Choi, S. Hild, J. Zeiher, P. Schauß, A. Rubio-Abadal, T. Yefsah, V. Khemani, D. A. Huse, I. Bloch, and C. Gross, Science 352, 1547 (2016).
[18] M. Yan, H.-Y. Hui, M. Rigol, and V. W. Scarola, arXiv:1606.03444 (2016).
[19] A. Celi, P. Massignan, J. Ruseckas, N. Goldman, I. B. Spielman, G. Juzeliūnas, and M. Lewenstein, Phys. Rev. Lett. 112, 043001 (2014).
[20] B. K. Stuhl, H.-I. Lu, L. M. Aycock, D. Genkina, and I. B. Spielman, Science 349, 1514 (2015).
[21] M. Mancini, G. Pagano, G. Cappellini, L. Livi, M. Rider, J. Catani, C. Sias, P. Zoller, M. Inguscio, M. Dalmonte, and L. Fallani, Science 349, 1510 (2015).
[22] E. J. Meier, F. A. An, and B. Gadway, Phys. Rev. A 93, 051602 (2016).
[23] E. J. Meier, F. A. An, and B. Gadway, Nat. Commun. 7, 13986 (2016).
[24] F. A. An, E. J. Meier, and B. Gadway, arXiv:1609.09467 (2016).
[25] M. L. Wall, A. P. Koller, S. Li, X. Zhang, N. R. Cooper, J. Ye, and A. M. Rey, Phys. Rev. Lett. 116, 035301 (2016).
[26] S. Kolkowitz, S. L. Bromley, T. Bothwell, M. L. Wall, G. E. Marti, A. P. Koller, X. Zhang, A. M. Rey, and J. Ye, Nature, advance online publication (2016).
[27] L. F. Livi, G. Cappellini, M. Diem, L. Franchi, C. Clivati, M. Frittelli, F. Levi, D. Calonico, J. Catani, M. Inguscio, and L. Fallani, Phys. Rev. Lett. 117, 220401 (2016).
[28] B. Gadway, Phys. Rev. A 92, 043606 (2015).
[29] A. Amir, Y. Lahini, and H. B. Perets, Phys. Rev. E 79, 050105 (2009).
[30] H. M. Price, T. Ozawa, and N. Goldman, arXiv:1605.09310 (2016).
[31] D. N. Christodoulides, F. Lederer, and Y. Silberberg, Nature 424, 817 (2003).
[32] T. Schwartz, G. Bartal, S. Fishman, and M. Segev, Nature 446, 52 (2007).
[33] A. Szameit and S. Nolte, J. Phys. B 43, 163001 (2010).
[34] M. Segev, Y. Silberberg, and D. N. Christodoulides, Nat. Photon. 7, 197 (2013).
[35] A. Aspuru-Guzik and P. Walther, Nat. Phys. 8, 285 (2012).
[36] P. A. Lee and D. S. Fisher, Phys. Rev. Lett. 47, 882 (1981).
[37] A. W. W. Ludwig, M. P. A. Fisher, R. Shankar, and G. Grinstein, Phys. Rev. B 50, 7526 (1994).
[38] C. d. C. Chamon, C. Mudry, and X.-G. Wen, Phys. Rev. Lett. 77, 4194 (1996).
[39] T. A. Brun, H. A. Carteret, and A. Ambainis, Phys. Rev. Lett. 91, 130602 (2003).
[40] M. A. Broome, A. Fedrizzi, B. P. Lanyon, I. Kassal, A. Aspuru-Guzik, and A. G. White, Phys. Rev. Lett. 104, 153602 (2010).
[41] A. Schreiber, K. N. Cassemiro, V. Potoček, A. Gábris, I. Jex, and C. Silberhorn, Phys. Rev. Lett. 106, 180403 (2011).
[42] M. Karski, L. Förster, J.-M. Choi, A. Steffen, W. Alt, D. Meschede, and A. Widera, Science 325, 174 (2009).
[43] T. Fukuhara, A. Kantian, M. Endres, M. Cheneau, P. Schauß, S. Hild, D. Bellem, U. Schollwöck, T. Giamarchi, C. Gross, I. Bloch, and S. Kuhr, Nat. Phys. 9, 235 (2013).
[44] K. Osterloh, M. Baig, L. Santos, P. Zoller, and M. Lewenstein, Phys. Rev. Lett. 95, 010403 (2005).
[45] M. S. Rudner, N. H. Lindner, E. Berg, and M. Levin, Phys. Rev. X 3, 031005 (2013).
[46] S. Mukherjee, A. Spracklen, M. Valiente, E. Andersson, P. Öhberg, N. Goldman, and R. R. Thomson, Nat. Commun. 8, 13918 (2017).
[47] L. J. Maczewsky, J. M. Zeuner, S. Nolte, and A. Szameit, Nat. Commun. 8, 13756 (2017).
[48] P. Titum, E. Berg, M. S. Rudner, G. Refael, and N. H. Lindner, Phys. Rev. X 6, 021013 (2016).
[49] A. Kosior and K. Sacha, arXiv:1701.04274 (2017).
[50] D. H. Dunlap, H.-L. Wu, and P. W. Phillips, Phys. Rev. Lett. 65, 88 (1990).
[51] S. L. Rolston and W. D. Phillips, Nature 416, 219 (2002).
[52] I. L. Aleiner, B. L. Altshuler, and G. V. Shlyapnikov, Nat. Phys. 6, 900 (2010).
|
http://arxiv.org/abs/1701.07836v2 | 20170126190005 | Quantum Hall ferroelectrics and nematics in multivalley systems | [
"Inti Sodemann",
"Zheng Zhu",
"Liang Fu"
] | cond-mat.str-el | [
"cond-mat.str-el"
] | |
http://arxiv.org/abs/1701.07754v2 | 20170126161145 | Personalized instructor responses to guided student reflections: Analysis of two instructors' perspectives and practices | [
"Daniel L. Reinholz",
"Dimitri R. Dounas-Frazer"
] | physics.ed-ph | [
"physics.ed-ph"
] |
daniel.reinholz@sdsu.edu
Department of Mathematics & Statistics, San Diego State University, San Diego, CA 92182, USA
Department of Physics, University of Colorado Boulder, Boulder, CO 80309, USA
One way to foster a supportive culture in physics departments is for instructors to provide students with personal attention regarding their academic difficulties. To this end, we have developed the Guided Reflection Form (GRF), an online tool that facilitates student reflections and personalized instructor responses. In the present work, we report on the experiences and practices of two instructors who used the GRF in an introductory physics lab course. Our analysis draws on two sources of data: (i) post-semester interviews with both instructors and (ii) the instructors' written responses to 134 student reflections. Interviews focused on the instructors' perceptions about the goals and framing of the GRF activity, characteristics of good or bad feedback, and impacts of the GRF on the nature of teacher-student relationships. Their GRF responses were analyzed for the presence of up to six types of statement: encouraging statements, normalizing statements, empathizing statements, strategy suggestions, resource suggestions, and feedback to the student on the structure of their reflection. We find that both instructors used all six response types and that they both perceived that the GRF played an important role in the formation of meaningful connections with their students, in alignment with their perceptions of what counts as good feedback. This exploratory qualitative investigation demonstrates that the GRF can serve as a mechanism for instructors to pay personal attention to their students. In addition, it opens the door to future work about the impact of the GRF on student-teacher interactions.
Personalized instructor responses to guided student reflections:
Analysis of two instructors' perspectives and practices
Dimitri R. Dounas-Frazer
December 30, 2023
==========================================================================================================================
§ INTRODUCTION
Reflection is an important skill in learning physics,<cit.> and is a key part of learning more generally.<cit.> Previously, we have described how structured reflection activities can augment physics courses that focus on iterative improvement of models<cit.> and apparatuses.<cit.> We have also developed an online tool, the Guided Reflection Form (GRF), that facilitates student reflection and personalized instructor responses.<cit.> The GRF was designed to support students in describing a past experience, setting a goal for improvement, and identifying specific steps for achieving that goal. In a study of the GRF, we focused on the structure of students' reflections in a physics course for future teachers; students in that study successfully used the GRF to narrate specific experiences upon which they wanted to improve and articulate goals and/or action plans for improvement.<cit.> In this article, we explore the GRF activity in a different context and from a different perspective. Here, we focus on how the GRF was implemented in an introductory lab course, and we characterize the types of feedback that the two instructors of that course provided in response to their students' reflections.
Our analysis of instructors' responses to students' reflections is motivated by an overarching desire to cultivate a culture of support and inclusiveness in undergraduate physics courses. In particular, we aim to develop and implement research-based educational tools that may counter weed-out culture. We consider weed-out culture to be a set of traditional educational practices and beliefs aimed at sorting and selecting the students seen as most capable, while “weeding out" the rest (i.e. removing them from the system). In their landmark study of undergraduate student attrition from science, mathematics, and engineering majors, Seymour and Hewitt described the disproportionate impacts of this culture on marginalized groups:<cit.>
The most serious criticisms of the weed-out system, however, focused on its disproportionate impact on men of color and on all women. Even well-prepared, these two groups tend to enter basic classes feeling uncertain about whether they `belong.' The loss of regular contact with high school teachers who encouraged them to believe in their ability to do science exposes the frailty of their self-confidence. Faculty who teach weed-out classes discourage the kind of personal contact and support which was an important part of high school learning. It is, as some students describe, a `weaning away' process by which faculty transmit the message that it is time to grow up, cast aside dependence on personally-significant adults and take responsibility for their own learning. This attitude is perceived by students in the reluctance of teachers to answer questions, brusqueness in response to `trivial' inquiries, failure to offer praise or encouragement, disinclination to discuss academic difficulties in a personal manner, carelessness in keeping office hours, and a `no excuses' stance on test results. The difficulty of getting personal attention was troubling to many students, but it was especially troubling to those whose presence in [science, mathematics and engineering] classes was the result of considerable personal attention and encouragement by particular high school teachers. (p. 132)
The GRF was designed to provide avenues of communication through which instructors and students can engage in precisely those interactions that are discouraged by weed-out culture. As educators ourselves, the authors of the paper have used the GRF to this end in multiple contexts. In the current study, our goal was to understand the extent to which the GRF opens up such opportunities for other instructors, particularly those who were not involved in the iterative design process through which the GRF was developed. Indeed, as we will show, both instructors in our study perceived the GRF as valuable for developing personally-significant relationships with their students, and both used the GRF to provide their students with personal attention and encouragement.
When imagining how instructors might ideally use the GRF to foster supportive student-teacher interactions, Brown's metaphor of “sitting on the same side of the table"<cit.> is helpful. Brown drew on this metaphor to create a checklist for feedback that includes items like sitting “next to you rather than across from you," putting “the problem in front of us rather than between us (or sliding it toward you)," and modeling “the vulnerability and openness that I expect to see from you" (p. 204). After outlining her checklist, she asked,
How would education be different if students, teachers, and parents sat on the same side of the table? How would engagement change if leaders sat down next to folks and said, “Thank you for your contributions. Here's how you're making a difference. This issue is getting in the way of your growth, and I think we can tackle it together. What ideas do you have about moving forward? What role do you think I'm playing in the problem? What can I do differently to support you?" (pp. 204–205)
The image of two people sitting on the same side of the table inspires our vision for how the GRF could shape classroom practices in physics: instructors and students working side-by-side to tackle academic problems together, building meaningful student-teacher relationships along the way.
We present a qualitative exploration of two instructors' implementations of the GRF in a lab course for first-semester undergraduate students interested in majoring in physics. The instructors were physics graduate students, and the course was designed and offered as part of a student-led diversity initiative in the instructors' physics department. We conducted hour-long post-semester interviews with both instructors, and we collected electronic copies of 134 student reflections and corresponding instructor responses that were generated via the GRF. Using these data, we construct a rich picture of each instructor's unique implementation.
This paper is organized as follows. In Sec. <ref>, we describe the GRF activity and provide a brief overview of some of the literature on feedback. We describe the programmatic and course context for our study in Sec. <ref>, outline our research methods in Sec. <ref>, and present results from our analyses of instructors' interviews and GRF responses in Sec. <ref>. Finally, in Sec. <ref>, we summarize our findings, discuss their implications, and identify potential future directions for research and development of the GRF.
§ BACKGROUND
We begin our discussion by describing the GRF and summarizing relevant literature about instructor feedback practices. When describing the GRF, we focus on how it has typically been used in other courses we have taught and/or studied. The instructors in the present study deviated slightly from this typical usage, as discussed in Sec. <ref>.
§.§ Guided Reflection Form
The GRF has been described in detail elsewhere,<cit.> so we provide only a brief overview here. The GRF is an online tool, similar to a survey, that provides questions and other prompts to guide student reflections about issues of resilience, collaboration, and organization. Once per week, students are tasked with submitting a reflection via the GRF. Reflections may focus on any aspect of the students' learning experience, whether or not it is directly related to the course in which the GRF is being implemented. Instructors then read the reflections and provide individualized responses to each student based on the content of their (the students') reflection. This cycle of reflection and feedback repeats, ideally facilitating an ongoing written dialogue between each student and the instructor.
Student responses can be collected by having students complete an online survey or submit individual electronic documents. In the former case, instructors can export student reflections into a spreadsheet, write their responses in the spreadsheet, and then use a mail merge program to generate individual documents with student responses and corresponding instructor feedback. In the latter case, the instructor can write their feedback directly on the submitted document. Based on our own experiences using the GRF, responding to reflections takes about 3 to 5 min per student. Instructors in this study reported spending about 10 to 15 min per student responding to reflections. In large classes, the time required for an instructor to respond to each student individually can be prohibitively large; hence, this activity is most suitable for classes with 10 to 20 students per instructor or, in larger courses, per teaching or learning assistant. The GRF has been implemented in a variety of contexts, including high school-level computer science and upper-division quantum mechanics.
When using the GRF, students are presented with a prompt instructing them to recall an experience from the previous week upon which they would like to improve. Such experiences could include, for example, procrastinating on a long-term project.<cit.> Next, they are asked to choose one of three focus areas for reflection: bouncing back from failure or other setbacks; building a network and developing collaboration skills; or becoming an organized, self-aware, and mindful person. For students who would prefer to write about a different topic, the GRF includes a fourth option for “something different." After students choose a topic for their reflection, the GRF presents a short paragraph describing the importance of the skills related to the topic. Regardless of topic, students are asked to write short responses to two reflection prompts:
* Describe the specific experience from last week that you would like to improve upon.
* Describe an aspect of this experience that you can improve in the future. (Provide at least one concrete strategy that you will use to become more successful.)
The GRF prompts were designed with three aspects of reflection in mind: students should (i) revisit a salient experience from the previous week, (ii) set a future goal for improvement, and (iii) articulate specific steps for achieving that goal. In a study of undergraduate students using the GRF in a pedagogy course for future physics teachers, we found that all students successfully used the GRF to engage in multiple aspects reflection.<cit.> In this article, we explore for the first time the ways that instructors use the GRF to provide feedback to their students.
§.§ Feedback
Providing feedback to students has a significant impact on their learning, but not all feedback is equally useful. For instance, there are a number of characteristics that make feedback effective, including specificity and timeliness.<cit.> Process-level feedback is particularly effective for enhancing learning; such feedback focuses on students' ability to strategize about their learning and to seek help when needed.<cit.> When students receive feedback about their learning strategies, it draws attention to the ways in which they can adapt to become more effective learners.<cit.> In contrast, praise can have unpredictable impacts—and can even inhibit learning, especially if it is perceived as undeserved—because it draws students' attention to themselves rather than the task at hand.<cit.>
A popular way of interpreting these findings is through the concept of mindset;<cit.> indeed, this concept informed the perspectives of one of the instructors in our study. Here, “mindset" refers to students' beliefs about the nature of intelligence. Mindset is commonly described using a fixed/growth dichotomy: in the extreme cases, people with a fixed mindset view intelligence as static and unchangeable beyond a predetermined level, whereas those with a growth mindset view intelligence as malleable and something that can be improved with effort.<cit.> Using the language of mindset, providing process-level feedback is consistent with a growth mindset.<cit.> In particular, feedback that emphasizes self-improvement can bolster students' beliefs in their own capability to succeed.<cit.> On the other hand, praising a student for being “smart" may reinforce a fixed mindset,<cit.> and feedback that communicates a lack of faith in a student's capabilities can undermine their confidence, motivation, and willingness to attempt challenging tasks.<cit.>
From this literature, we infer two principles that could support instructors' effective use of the GRF:
P1. Praise should focus on students' efforts to improve, express confidence in their ability to improve, and be sincere.
P2. Process-level feedback should identify specific areas for improvement and suggest strategies that students can use to improve their learning.
Each principle is also informed by a particular aspect of weed-out culture, as described by Seymour and Hewitt. <cit.> In particular, they identified instructors' “failure to offer praise" and “disinclination to discuss academic difficulties in a personal manner" as factors contributing to weed-out culture. However, because not all forms of praise support student perseverance, P1 recommends a particular process for giving praise. Similarly, P2 can be thought of as a guideline for how instructors can discuss academic difficulties with students. Taken together, these two principles align with part of Brown's vision for students and teachers sitting on the same side of the table. <cit.> For example, should a teacher say to a student, “This issue is getting in the way of your growth, and I think we can tackle it together," they would be identifying an area for improvement (P2) and expressing confidence in the student's ability to improve (P1).
Importantly, feedback must be understood in the context of the learning environment in which it is given and received. For instance, feedback is more effective when instructors create classroom communities that normalize failure and value criticism. Students in these settings are better situated to receive and use feedback.<cit.> Therefore, one way for instructors and students to sit on the “same side of the table" is to be embedded in a culture where students are in the habit of receiving timely, sincere, and critical feedback focused on their strategies for self-improvement. In the sections that follow, we describe a course in which the instructors aspired to foster such a culture, in part by using the GRF.
§ CONTEXT
Our study is an exploratory qualitative investigation of two instructors' feedback practices. As Eisenhart argues,<cit.> such qualitative studies are responsible for “providing sufficient detail about the researched context for a person with intimate knowledge of a second context to judge the likelihood of transferability." (p. 56). Accordingly, we describe the context for our study at three grain sizes: organization, course, and activity.
§.§ Organizational context
The two instructors in our study—Emily and Taylor—were both physics graduate students at the University of Colorado Boulder (CU), a predominantly white public R1 university with a large physics program. Emily was a white woman and Taylor was a white man. They co-taught a course called Foundations of Scientific Investigation (hereafter “Foundations"). Foundations was designed as part of a student-led organization called CU-Prime. CU-Prime is a member of The Access Network (hereafter “Access").[The Access Network, <http://accessnetwork.org>] Access organizations—including The Berkeley Compass Project,<cit.> The Chi-Sci Scholars Program,<cit.> and several other organizations—are characterized by student leadership and a commitment to improving diversity in the physical sciences through community building. To achieve their goals, these organizations offer multiple services designed to support students from underrepresented groups and raise awareness about issues of marginalization in physics. Examples of services include summer programs,<cit.> diversity workshops,<cit.> mentorship programs,<cit.> and courses with multi-week final projects.<cit.> Many of the courses designed and run by Access organizations use the GRF or similar tools to facilitate cycles of student reflection and instructor feedback. In this work, we focus on the implementation of the GRF in Foundations.
§.§ Course description
Foundations was first designed and taught in 2014 and was subsequently refined and taught in 2015 and 2016. It is a 14-week, fall-semester course designed for first-year undergraduate students interested in majoring in physics. On average, 22 students enroll in Foundations each semester. Students from underrepresented and/or minority racial and gender groups are especially encouraged to participate in the course; a demographic breakdown of students who completed the course is provided in Table <ref>. The overarching goals of Foundations are twofold: build community among students enrolled in the course, and introduce students to the practice of research. Two corresponding subgoals are for students to practice developing theoretical models of scientific phenomena, and to reflect on and refine their coursework in Foundations and other courses.
Each semester, the Foundations class met twice weekly for 75 minutes per meeting, and the course consisted of 2 successive 7-week halves. Consistent with the subgoals of the course, each half included both experimental activities that focused on building models as well as activities that engaged students in the practice of reflection. During the first half of the course, students worked in groups on a set of guided optics experiments. They also used the GRF to reflect on their collaboration, organization, and resilience. During the second half, students worked in groups on multi-week final projects under the guidance of graduate student mentors; a similar approach to final projects has been described elsewhere.<cit.> In this part of the course, students used a tool similar to the GRF to reflect on goals, challenges, and successes related to their final projects.
Since its inception, Foundations has been divided into 2 parallel sections of about 10 students. Each section has been co-taught by 2 instructors, for a total of 4 instructors per semester. By design, each co-teaching pair has been mixed gender and has comprised one undergraduate student and one graduate student. Most instructors have only taught the course once, and former instructors meet with new instructors during the summer to discuss teaching strategies for the upcoming fall semester. Emily and Taylor taught Foundations concurrently, but in different sections. Hence, each was a member of a co-teaching pair, but neither was the other's co-teacher.
§.§ GRF activity
We introduced Emily, Taylor, and their co-teachers to the GRF during the summer before they started teaching the course. Based on discussions between the authors and the instructors, the instructors' implementation of the GRF differed from that described in Sec. <ref> in two ways. First, the GRF was assigned only during the first half of the course. This choice was made because a different reflection tool was deemed more appropriate for the second half of the course. Second, GRF focus areas were assigned by the instructors. Reflections focused on collaboration during weeks 1 and 2, organization during weeks 3, 4, and 5, and resilience in weeks 6 and 7. This choice was informed in part by the anticipated progression of student-to-student relationships in the course. Students would still be getting to know their group members during the first couple weeks of the semester, and they might not feel comfortable sharing about their experiences of failure with their instructors until several weeks had passed. This choice was also informed by the fact that many introductory courses have multiple midterms, making it important to develop good time management practices as early as possible. Although students were not graded on the quality of their reflections, they were awarded a small amount of course credit for completing the GRF activity.
At the start of the fall semester, we provided Emily and Taylor with guidelines for giving feedback.<cit.> The guidelines were based on our experiences with the GRF<cit.> and a precursor to the GRF,<cit.> and they emphasized the importance of communicating the goal of the activity to students as well as providing feedback on both the structure and content of students' reflections. Based on the instructors' internal decisions about division of labor, Emily and Taylor were each solely responsible for responding to all the reflections written by students in their respective sections; their co-teachers did not provide any individualized written feedback to students via the GRF activity. In this paper, we explore the ways that Emily and Taylor incorporated the GRF into their teaching. In doing so, we aim to provide clear examples of instructor feedback, as facilitated by the GRF. These examples could inform future instructors' use of the GRF in other contexts.
§ METHODS
There are multiple ways in which the goals of the GRF, the Foundations course, and the CU-Prime organization are theoretically aligned. One line of reasoning is as follows: Seymour and Hewitt noted that weed-out culture has a “disproportionate impact on men of color and all women;"<cit.> these populations are better represented in Foundations than in the CU Physics Department as a whole (Table <ref>); the GRF was designed to counter some aspects of weed-out culture; and, finally, the Foundations course was designed to support CU-Prime's commitment to improving diversity in physics. Thus, implementing the GRF in Foundations is in alignment with the diversity-oriented mission of CU-Prime. More narrowly, the GRF directly aligns with the course goal of engaging students in the practice of reflection. However, our present focus is not on student experiences or outcomes as they relate to improving diversity in physics. Rather, we are interested in teacher practices. Accordingly, this study is a qualitative exploration of the ways that Emily and Taylor implemented the GRF in the Foundations course.
We conducted post-semester interviews with both Emily and Taylor, focusing on their goals for, perspectives on, and engagement with the GRF activity. To corroborate the instructors' self-reported response practices, we also collected and analyzed all instructor responses that were generated via the GRF. Thus, our study enables us to describe how Emily and Taylor implemented the GRF using their own words and authentic examples of the responses they provided to students. Our goal is not to make generalizable statements about either the GRF or instructors who use it, but rather to provide insight into the various ways that instructors might take up the GRF for use in their classrooms. In this section, we describe our data sources and analysis methods.
§.§ Post-semester interviews
At the end of the semester, we conducted semi-structured interviews with Emily and Taylor to gain insight into their perspectives on the GRF and other aspects of the course. Emily and Taylor were interviewed separately, each for about an hour. Interviews focused in part on three themes: the instructors' perceptions about (i) the goals and framing of the GRF activity, (ii) the characteristics of good or bad feedback, and (iii) the impacts of the GRF on the nature of teacher-student relationships. We chose the first two themes because they give us insight into why and how the instructors were using the GRF. The third theme was chosen because it reflects a goal of the Foundations course that runs counter to weed-out culture: to foster supportive relationships among students and teachers.
The first author transcribed both interviews, and the transcripts are the data that we analyzed. We collaboratively identified all excerpts that addressed the three themes that comprised the foci of our interviews. For each theme, we selected several representative excerpts and constructed two vignettes about the implementation of the GRF, one each for Emily and Taylor. These vignettes are presented in Sec. <ref>.
§.§ GRF responses
In total, 22 students were enrolled in Foundations: 10 in Emily's section and 12 in Taylor's. Each student was required to complete 7 reflections using the GRF. Of 154 possible GRF-based reflections, 135 were submitted. This corresponds to a completion rate of 88%, which is consistent with the GRF completion rate observed in another study.<cit.> The majority of students completed all or most reflections: 12 students completed all 7 reflections, 8 completed 5 or 6, and 2 completed 3 or 4. This distribution was roughly the same in both sections, resulting in similar completion rates for Emily's section (90%) and Taylor's section (86%). Each instructor responded only to reflections completed by students enrolled in their section. Almost every submitted reflection received a personalized response from either Emily or Taylor; only 1 reflection received no instructor response.
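As a quick arithmetic check of these counts (an illustrative sketch using only the numbers reported above, not new data), the reported rates are mutually consistent:

students = {"Emily": 10, "Taylor": 12}
possible = 7 * sum(students.values())        # 7 reflections per student -> 154
submitted = 135
print(f"overall completion: {submitted / possible:.0%}")              # 88%
per_section = [round(0.90 * 7 * students["Emily"]),                   # 63 submitted
               round(0.86 * 7 * students["Taylor"])]                  # 72 submitted
print(sum(per_section))                                               # 135, matching the total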
We analyzed both instructors' GRF responses using an a priori coding scheme. This scheme was not directly informed by existing frameworks for effective feedback, such as those described in Sec. <ref>. Rather, our goal was to explore these data through an analytic lens informed by the language associated with the tool itself. Hence our scheme was directly informed by the guidelines<cit.> we gave to Emily and Taylor at the start of the semester as well as a preliminary analysis of the instructors' feedback styles as self-reported during interviews. Based on our guidelines, we created code categories for normalizing statements, empathizing statements, resource suggestions, and feedback on the structure of the reflection. Based on our preliminary analysis of Emily's and Taylor's interviews, we created additional code categories for encouraging statements and strategy suggestions, respectively. Thus, our coding scheme included categories corresponding to 6 distinct types of response:
* Encouraging statements serve to motivate the student or validate their experiences and efforts. Examples include: “You're doing great," “I believe in you," “You can do this," and, “I'm glad you're using this strategy."
* Normalizing statements involve communicating to the student that what they are experiencing is normal, common, and/or unsurprising; this can be accomplished by relaying a personal anecdote or making an appeal to the general student experience. Examples include: “I experienced something similar," and “Lots of students go through this."
* Empathizing statements involve empathizing with the student cognitively, in a parallel emotional capacity, or in a reactive emotional capacity. Examples include: “I understand where you're coming from," “Your story makes me feel upset, too," and “I'm excited that you are enjoying class."
* Strategy suggestions include both direct and indirect suggestions, the latter of which may take the form of anecdotes or questions. Examples include: “You should use a day planner," “When I was in this situation, I used a day planner," and “Have you thought about using a day planner?"
* Resource suggestions also include both direct and indirect suggestions. Examples include: “You should go to office hours," “When I was in this situation, office hours were very helpful," and “Have you thought about going to office hours?"
* Feedback on reflection structure focuses on the way the student wrote their reflection—such as whether the reflection provided enough detail or articulated a goal/strategy for improvement—and may be formulated as a comment or question. Examples include: “Please write a longer reflection next week," and “How can you achieve this goal?"
In terms of the principles for feedback outlined in Sec. <ref>, categories 1 to 3 map onto principle P1, which suggests that praise focus on students' efforts and be sincere. While student data would be needed to determine the perceived sincerity of instructor feedback, normalizing and empathizing statements could contribute to such perceptions. Categories 4 to 6 map onto principle P2, which recommends that process-level feedback suggest strategies for improvement.
We coded all 134 GRF responses via the following process. The second author read through each instructor response and identified all statements that aligned with one or more categories in our coding scheme. Some individual statements received two codes. For example, empathizing with a student by normalizing their feelings was a common strategy for Emily, and about half of her empathizing statements were also coded as normalizing statements. Then, for each category, the first author read through all the coded statements to verify that they matched the category definition, making note of any statements that did not fit the category. In total, such discrepancies were identified in only 10 responses; each of these discrepancies was reconciled through discussion among both authors. While we did not analyze student reflections, we read each reflection in order to provide context for the corresponding instructor response.
An initial version of our coding scheme included categories for additional types of statements, including praise for a student's intellect, achievement, or effort, as well as instances where an instructor articulated their expectations for student behavior. However, for each of these additional categories, we found few or no corresponding statements among the GRF responses in our dataset. Hence, these categories were discarded from our analysis.
Meanwhile, each of the 6 response categories in our final coding scheme appeared in at least 22% of the 134 distinct responses (see Table <ref>). Moreover, each of the responses included at least 1 statement corresponding to our code categories, and most responses comprised multiple types of statement. Indeed, 62% of responses received at least 3 codes. This suggests that there was a good mapping between our a priori coding scheme and our dataset. Nevertheless, the scheme was not comprehensive. For example, it did not capture instances where instructors used the GRF to communicate with students about certain aspects of Foundations (e.g., clarifying when homework is due or responding to schedule conflicts between Foundations and campuswide events).
One limitation of this analysis is that, due to the small number of students in each section, it is not possible to make strong claims about an instructor's feedback style. Consider, for example, a scenario where one section has many students who engage in the GRF in a meaningful way on their own, but the other section has many students who instead engage in only a cursory way. In this scenario, the responses written by the instructor of the former section may include relatively few instances of structure feedback compared to those of the other. Hence, differences in the frequency of particular types of feedback may be due to differences in student populations, not differences in the two instructors' response styles. Therefore, when discussing results in Sec. <ref>, we use instructors' self-reported practices (i.e., interview data) to help interpret the results of our coding scheme. In addition, Emily and Taylor read a draft of this manuscript, and both instructors indicated that they felt their perspectives and practices were accurately portrayed.
In the following section, we report the results of our analyses of the interview data and the instructors' responses to the GRF.
§ RESULTS AND INTERPRETATION
A summary of our GRF response coding is provided in Table <ref>. For both instructors, encouraging statements were present in most responses, whereas resource suggestions were relatively sparse. In comparison to Taylor's responses, Emily's GRF responses were characterized by higher rates of encouraging, normalizing, and empathizing statements. Taylor's responses yielded higher rates of strategy suggestions. With respect to the principles for sincere praise (P1) and process-level feedback (P2), Emily's feedback contained more statements that map onto P1, and Taylor's contained more that map onto P2.
We describe each instructor's implementation of the GRF activity separately. For each instructor, we draw on interview data to paint an overarching picture of their perceptions about three dimensions of their implementations: (i) the goals and framing of the GRF activity, (ii) the characteristics of good or bad feedback, and (iii) the impacts of the GRF on the nature of teacher-student relationships. Then, in order to characterize the instructors' responses, we discuss the results of our GRF response coding. We focus on Emily first and Taylor second.
§.§ Emily
During her interview, Emily described a desire to bolster students' confidence through praise, to avoid criticizing students, and to foster trusting and friendly relationships with her students. Coding of GRF responses (Table <ref>) revealed that encouraging and normalizing statements were each present in most of her responses. Empathizing statements, strategy suggestions, and feedback on structure were each present in about half of her responses. Resource suggestions were the least common category among her responses.
§.§.§ Vignette: Emily's implementation
When asked about the purpose of the reflection activity, Emily said that her goal was simply “getting students to reflect." She said that it was important to give students an opportunity to reflect because reflection is “actually pretty important, but sometimes it's hard to set aside time in your day" to reflect.
Reflecting is something that you may just not do. … It's actually pretty important, but sometimes it's hard to set aside time in your day or in your life to step back and reflect on what you're doing. So I guess using [the GRF] in the class was a way to give students—like, you have to reflect on your life—almost forcing them to make time to reflect. Then maybe it can become a habit later.Emily
Emily said she hoped students would develop a habit of reflection that would help them avoid “repeating things that [they] don't necessarily want to repeat" and “engaging in some behavior that's not actually productive or helpful." When asked what she hoped her students gained from the activity, Emily said,
I guess the ability to reflect on the good things they do every week—and not necessarily the negatives—because I know [the negatives are] pretty hard to not focus on. … I hope that during those reflections, and them thinking about the good stuff that they did, helps with their confidence a little bit.Emily
Emily said that another of her goals for the reflection activity was to boost students' confidence by giving them opportunities to reflect on “the good things they do every week." Emily articulated a belief that building confidence is especially important for students from underrepresented groups studying physics:
It sucks, but you have to be fairly confident about your ability to do science if you want to succeed in science, especially as someone from an underrepresented group. I feel like those reflections are a way to potentially build confidence because you're reflecting on things you did well and where you need to improve upon, instead of just focusing on what you did poorly. In physics, it can be really easy to just think about what you did poorly. It's extra important to build the confidence of [students from underrepresented groups] because it's a lot easier to get your confidence crushed down, I think.Emily
Emily's first goal—getting students to reflect—informed how she framed the activity to students: Emily said she told students the GRF was “an opportunity for y'all to reflect on what you're doing." This message was communicated to the students verbally once at the beginning of the semester and twice more over the duration of the course. Her second goal—boosting students' confidence—informed the type of feedback she provided.
When asked to comment on connections between criticism and support, Emily highlighted an important tension in her understanding of these two concepts. For her, criticism was related to pointing out areas for improvement, but she saw it in opposition to supportive feedback, which she defined as “always positive," “constructive," and requiring praise. In particular, Emily saw praise as connected to improving students' confidence:
Praise, to me, is like a confidence booster type-thing. Giving praise can give someone confidence to keep trying, or keep working hard. That's mostly it. Praise is meant to encourage people to keep up their good work. … It makes it feel like you're doing something right, you're doing something okay, and you have the ability to keep doing it.Emily
On the other hand, Emily described bad feedback as “generic," “not sincere or not personal," and/or “not necessarily positive or encouraging." She suggested that a lack of sincerity could potentially limit the positive impacts of praise.
There's probably a way that praise can also be not supportive. If it doesn't feel like it's genuine, that could be potentially not supportive praise. Usually, [praise] would be supportive, but I think it could potentially be not supportive.Emily
Emily said she preferred encouragement to criticism because she perceived critical feedback as hurtful:
I try to be encouraging even if they're doing something wrong. … [I am] never super critical. I try to be really not critical because I'm a really sensitive person, so I know that it hurts to receive critical feedback, so I avoid it at all costs.Emily
She described the boundary between supportive and unsupportive critical feedback as “a fairly thin line" that she tries to “stay above, towards the supportive end." Nevertheless, she acknowledged that criticism and support “interact in very complex ways,” and that students' learning can be supported by critical feedback, or hindered by its omission.
Despite seeing value in praise, Emily recognized that providing only praise may not always be the best strategy: “If you're not critical and you constantly say, `Close! Good job!,' then they may not try to improve or learn as much." She drew connections between lack of critical feedback, insincere praise, and the nature of student-teacher relationships:
I feel like not being critical and being supportive can really enhance student-teacher relationships, but I also feel like it could potentially hurt the student-teacher relationship if you aren't being critical to the point that students start to not learn. If I'm being supportive and not critical, but then you start doing poorly in my class, you're not going to like me as much because it's like, `Why are you letting me do poorly in this class while you tell me I'm going a good job?' It's a very delicate area.Emily
Emily's description of the balance between praise, criticism, and support as a “delicate area" highlights the tension she perceived in trying to help students build confidence as learners, support them to grow in their areas of weakness, and foster meaningful student-teacher relationships.
Consistent with her desire to provide supportive, constructive, and positive feedback, Emily also described a desire to establish supportive, trusting, and friendly relationships with her students. For example, when asked to describe her ideal student-teacher relationship, she said,
I feel like it would be that [students] can trust me, to come to me if they have any issues or complaints or need help with some emotional issue or something like that. That's really important to me.Emily
Moreover, one of Emily's most memorable moments from teaching Foundations involved a friendly exchange between Emily and one of her students:
One of the students came up to me [outside of class] … and said that she really, really liked the feedback and appreciated what I wrote. She wrote a lot, and I also wrote a lot. I was like, `Yeah, I feel like I'm talking to one of my friends when I'm writing your feedback.' She'd share a lot, and then I'd share, too, and I feel like I got to know her really well because of those. That was really memorable for me.Emily
Emily suggested that the GRF activity played “a big role" in helping establish similarly friendly student-teacher relationships with some other students as well. However, she also described an unanticipated barrier to the type of sharing and relationship-building she was trying to achieve:
I didn't sign my feedback. I kind of assumed [the students] would know [I was writing the feedback], but they didn't necessarily. This is something I saw in someone's reflection. They were upset that they didn't know who was reading them, because they didn't know how much they could share. … I guess that should have probably been made more explicit. `Just me and [Taylor] are reading your reflections, and that's it.'Emily
This example highlights how small details—like instructors signing their feedback—can have significant impacts on students' engagement with the GRF.
§.§.§ Coding results: Emily's responses
As can be seen in Table <ref>, almost all of Emily's responses included encouraging statements. Such statements were short, and the vast majority were exclamatory. Some were motivational in nature (e.g., “Woo! Yeah! That's the spirit!"), while others served to validate students' actions (e.g., “I'm so glad you're working on managing your time!").
The majority of Emily's responses included normalizing statements. Emily sometimes normalized a student's experience by telling a personal anecdote from her own life that mirrored the student's experience. For example, in response to a reflection in which a student described being overwhelmed by an unexpectedly heavy workload for an Organic Chemistry course, Emily said,
I had a similar experience last year. The first few weeks of my quantum mechanics class was going over such easssyyy stuff (in my opinion) … Then all of a sudden I got slapped in the face with new material I hadn't seen before and the pace of the class started speeding up; I had a horrible time trying to catch up. I eventually did though.Emily
Similarly, in response to a student who struggled with getting enough sleep and who slept through a Computer Science course, Emily said,
I can totally relate! A few weeks ago I stayed up until 3am to finish an assignment and then slept through the class the assignment was for! I definitely needed the sleep though. Since then I've been trying to plan ahead a little better to avoid such late nights.Emily
In each of these cases, Emily drew upon a recent example from her time in graduate school in order to normalize her students' experiences.
For Emily, normalizing and empathizing often happened at the same time. Moreover, just as in the case of normalizing statements, some of Emily's empathizing statements also incorporated personal anecdotes. For example, one student described a particularly difficult lab activity that required knowing advanced Calculus content, with which the student was unfamiliar. The student concluded their reflection by saying, “There may have been panic and tears involved." Emily responded as follows:
Ahhh. I'm sorry for the panic and tears! I've definitely experienced the tears! That sounds like it would be very stressful. It sucks they didn't really prepare you for that lab.Emily
Emily simultaneously normalized crying while also acknowledging that the situation described by her student was unfortunate and may have been stressful. In total, about half of Emily's responses included an empathizing statement.
Emily suggested strategies in about half of her responses to students. Some of Emily's suggestions were indirect, in the form of a personal anecdote:
I use a planner to write out my tasks for the day and use [an online] calendar to keep my schedule. I have the calendar synced with my phone, so I can look at it whenever I need to and it even has reminders if I want them!Emily
In other cases, Emily was more direct. For example, one student wrote that, when they can't solve a problem, they often put themselves down. The student said they wanted to improve by no longer allowing setbacks to define how they feel about themselves. To this end, Emily responded with the following suggestion:
Try thinking of yourself as one of your friends. How would you react if someone was saying the putdowns you use on yourself to one of your friends? You should have that same reaction when you use those on yourself. `My friend isn't dumb. My friend is super awesome and can work hard to overcome this.'Emily
Emily provided feedback about the structure of students' reflections in about half of her responses. Her feedback was often in the form of a question that, if answered, would result in a reflection that addressed the GRF responses in a more comprehensive way (e.g., “Was there a specific experience that made you want to be more organized?," and “What, specifically, could you do to improve your organizational skills?"). In some cases, Emily's comments about structure were formulated as requests: “Please answer all of the questions in the reflection. Completing each question is helpful not only for the instructors but also for you."
Compared to other response types, resource suggestions were the least common for Emily. When suggesting resources, Emily typically recommended that students use on-campus tutoring services and study rooms as well as online resources. She also framed the Foundations teachers (including herself) as a resource. For example, when a student described struggling with a hard physics problem, Emily offered to help: “If you aren't completely tired of thinking about it, I'd be happy to talk to you about the roller coaster problem and what specifically is tripping you up." Consistent with her framing of the GRF activity, Emily's primary focus was on supporting students through positive, personal connections.
§.§ Taylor
During his interview, Taylor said he wanted students to learn how to reflect on themselves from an objective perspective, and that he aspired to provide concrete suggestions to students about how to improve. Coding of GRF responses (Table <ref>) revealed that encouraging statements and strategy statements were each present in most of his responses. About half of his responses included feedback on structure, about a third included normalizing statements, and about a quarter included resource suggestions. Empathizing statements were the least common category among his responses.
§.§.§ Vignette: Taylor's implementation
Taylor described three major goals for the GRF. One of Taylor's goals was for students to reflect on and improve their learning, organizational skills, and mindset: “I hope that … they get this growth mindset, and that they take away 1 to 2 learning strategies that we gave them." Throughout the interview, Taylor frequently related development of reflection skills to development of a growth mindset. Another of his goals was for students to learn how to reflect—in particular, how to do so “from an objective perspective":
It's probably more important that they learn how to reflect than how good an individual reflection is. If they're able to constantly reevaluate or take a step back from themselves and look at [themselves] from an objective perspective … they remove the frustration and emotional component out of their success. They're like, `Okay, I can do that, but only if I keep a cool head or a clear mind.' That's one of the goals of the reflections.Taylor
The third goal described by Taylor was related to the creation of an avenue for communication between students and instructors:
Another [goal] is that the instructors know much better what's going on with the students. Also the students have some confidential space where they can voice problems that they see.Taylor
Taylor's focus on students' ability to take an objective perspective on their own learning and his goal of creating confidential communication pathways between students and teachers informed how he framed the activity to his students:
What I basically tried to tell them is, `What this is important for is, you are able to take an objective perspective on yourself and also a perspective that makes you a better learner. Furthermore it allows you to communicate with us in a very private, confidential space that is not rushed or in somebody's office.'Taylor
Taylor said he communicated this framing to his class verbally at the start of the semester, and that he reinforced this framing throughout the duration of the course in his written feedback to students and in one-on-one conversations with students.
Taylor described good feedback as “logical," “concrete," and “to the point," whereas bad feedback was described as “confusing," “negative," and not valuing the students' effort. Taylor expressed concern about praise that focused on students' inherited traits or that was insincere. He noted that, in addition to focusing on students' effort, praise should also be tailored to the circumstances of the particular student being praised:
You should praise definitely the effort, their attitude towards working, [rather than] the things they inherited from wherever. The other thing is, if you give a lot of praise … praise no longer becomes genuine. It just becomes some sort of mechanism. You should always try to have some personal note in there. … Praise should be individual, and it should be appropriate.Taylor
Taylor also described providing feedback to help students develop specific study habits:
I also gave them strategies how to change their learning schedule. There are studies that after 45 minutes it's essentially pointless to go on, you should have a short break where your brain can regenerate. I definitely wrote that on every single reflection about learning skills. A few students actually responded the following weeks that they started doing that and saw gains in their learning.Taylor
However, Taylor noted that it can be difficult to provide good feedback to students who write short reflections that lack specificity:
If you write something very general, then you can use very little words to describe a lot of situations. But the devil is in the details, and it's difficult to give the student appropriate feedback. … If you hit a certain low word count, you just cannot say a lot of things. Normally, low word count goes along with very general statements, and that's hard to give feedback on.Taylor
According to Taylor, vague reflections were not conducive to concrete (i.e., good) feedback. When discussing student-teacher relationships, Taylor emphasized the difference between friend and teacher:
[The students] were able to see me as an ally. Of course not friend, because I'm still their instructor or teacher. … I think this is just great, because you can have a personal relationship but still work with them as a teacher.Taylor
Consistent with his interpretation of the GRF as a communication avenue between students and instructors, Taylor said that the activity gave him access to a type of working relationship with his students that he normally doesn't have access to:
The reflections allowed me to sometimes personally address problems with the students. … This was very helpful to establish a good working relationship with the students. This is something I normally don't have access to, but now for some students I had access to.Taylor
However, Taylor was not able to use the GRF to establish this type of rapport with every student. When asked to describe something he found surprising about the course, Taylor recounted an experience with a student who wrote short reflections throughout the semester: “There was one student where I could not achieve that the student wrote long reflections." Taylor said he tried to encourage this student to write longer reflections both in his written feedback on the GRF as well as verbally during class. These efforts did not work, which surprised Taylor:
That was a little bit surprising because normally students always have a lot of things to tell, and these are the things that normally nobody talks about with them. So it was somewhat surprising.Taylor
For Taylor, short or vague reflections not only made it difficult to write good (i.e., concrete) feedback, they were also perceived as a barrier to connecting with students. Indeed, upon reading a draft of this manuscript, Taylor asked us to emphasize that short reflections were “one of the worst obstacles" to productive use of the GRF.
§.§.§ Coding results: Taylor's responses
As can be seen in Table <ref>, Taylor included encouraging statements in a majority of his responses. Many of these statements were one-word exclamations (e.g., “Great!," and “Nice!"). Less often, Taylor also validated students' actions: “It is very good that you acknowledge the importance of physical and psychological well-being by taking a break from work."
A majority of Taylor's responses included strategy suggestions. Taylor's suggestions were often straightforward and direct. For example, one student described having difficulty during a group activity where some group members could not reach agreement about whether light is displaced or bent as is passes through a medium. The student said they wished they could communicate with their group. In response, Taylor recommended using an alternative mode of communication:
Have you thought about different ways to communicate? Since it was an optical phenomenon, maybe drawings could help you to communicate your thoughts, to also solidify them for yourself.Taylor
Sometimes, Taylor used anecdotes to make indirect suggestions. In response to a student who described the balance between attending class, studying, and having time to unwind as stressful and strenuous, Taylor said,
Think also about your physical and psychological well-being by giving yourself (short) breaks from studying. I myself go for a short walk every day just to clear my head and get some fresh air.Taylor
Here, Taylor's description of his own strategy—going for short walks—served as an indirect suggestion to the student.
About half of Taylor's responses included feedback on the structure of the reflection itself. Sometimes this feedback was formulated as a question (e.g., “What is a concrete strategy you want to use?"), but more often it was a direct request (e.g., “I still would like to encourage you to write more, so I can give you better feedback."). Taylor's structure feedback often focused on encouraging students to write longer and more specific reflections. Another common theme was reminding students that their reflections could focus on experiences outside of the context of the Foundations course. In the following example, Taylor provided all of these types of structure feedback:
Maybe you can expand your responses a little bit and become more concrete in the answers, e.g., what exact strategy you want to employ, or what particular obstacle you faced during the class section. Also, this reflection is not just limited to your experience in class, but about all your classes!Taylor
Sometimes, Taylor was successful in getting a student to write more; in these cases, Taylor provided feedback acknowledging the improvement (e.g., “Your reflection has significantly improved.").
Fewer than half of Taylor's responses included normalizing and/or empathizing statements. When normalizing students' experiences, Taylor typically framed those experiences as common or typical (e.g., “Many students feel that way," and “Everybody needs some outside help."). And when empathizing with students, Taylor often focused on acknowledging their feelings (e.g., “It seems that you were overwhelmed by the plethora of tasks," and “This is a pretty terrible experience you described above.").
Resource suggestions were present in about a quarter of Taylor's responses. Like Emily, Taylor also recommended that students make use of on-campus tutoring services and study rooms, online resources, and the Foundations teachers (including himself). In addition, Taylor often recommended specific books where students could find additional practice problems:
There is a plethora of really good books with various difficulty levels of problems … Please ask me for more if you think that this would be useful for you!Taylor
(Note that we have omitted from the quote the names of the particular books Taylor recommended.) In this example, Taylor not only recommended specific books as a resource, but also offered himself as a resource that the student could turn to for additional recommendations.
§ SUMMARY AND DISCUSSION
Our overarching goal for the GRF is to provide avenues through which students can receive personal attention from instructors about a variety of challenges they may experience while learning physics—interactions that are discouraged by weed-out culture.<cit.> Our vision for its use is informed by Brown's metaphor for feedback:<cit.> instructors and students “sitting on the same side of the table" while working together to improve students' learning experiences. As a step toward understanding whether and how the GRF is able to support these types of interactions, we performed an exploratory qualitative study of two graduate student instructors' implementations of the GRF in an introductory lab course whose learning goals and broader programmatic context emphasized developing students' reflection skills and fostering supportive student-teacher relationships. This learning environment not only resonated with our vision for how the GRF might ideally be used by physics instructors, it also aligned with characteristics of environments in which students are well situated to receive and use feedback.<cit.>
Our analysis drew on two sources of data: (i) post-semester interviews with Emily and Taylor and (ii) their written responses to 134 student reflections. We found that Emily and Taylor both perceived that the GRF played an important role in the formation of meaningful connections with their students, and they each used all six of the following response types: encouraging statements, normalizing statements, empathizing statements, strategy suggestions, resource suggestions, and feedback to the student on the structure of their reflection. In particular, strategy suggestions were present in about half of Emily's responses and in most of Taylor's; this type of process-level feedback is especially effective for improving students' learning.<cit.>
Within CU-Prime, building community among graduate and undergraduate students is an explicit goal of many programs—including the Foundations course. As such, there is an inherent tension between maintaining teacher-student boundaries and cultivating friendships in Foundations. Taylor and Emily each navigated this tension differently. Both Emily and Taylor said that the reflection activity enabled them to get to know their students well and facilitated the development of personal relationships with their students that extended beyond the course context. Emily said that GRF facilitated two-directional sharing, which in turn fostered a friendship-style relationship with one of her students. In her responses, she often shared personal anecdotes as a way to empathize with students and/or normalize their experiences. Taylor, on the other hand, described a desire to establish a good working relationship with his students. He formatted his responses as if he were writing letters to his students, and he focused on suggesting concrete strategies that students could use to improve their learning habits. Thus, through the GRF, each instructor was able to establish a distinct balance between teacher and friend with which they were satisfied.
According to Seymour and Hewitt,<cit.> weed-out culture encourages students to “cast aside dependence on personally-significant adults and take responsibility for their own learning" (p. 132). We argue that, counter to this culture, the GRF supported Emily and Taylor to simultaneously position themselves as people on whom students could depend and encourage students to take responsibility for their own learning. For example, by offering to help students solve a roller-coaster problem or suggesting books with useful practice problems, Emily and Taylor framed themselves as resources for students' coursework outside the context of Foundations. Meanwhile, the instructors aimed to promote self-regulation and confidence by suggesting strategies through which students could grow as learners, as well as providing feedback on how to engage in the act of reflection itself.
Instructors' feedback on the structure of students' reflections focused on the specificity, completeness, and length of reflections. In some cases, the instructors were successful in getting students to improve the quality of their reflections along these metrics. In other cases, however, they were not. During their interviews, both instructors said that short reflections made it difficult to write good feedback and/or connect with the student. However, we note that not all students may need or want to engage in the type of student-teacher interaction facilitated by the GRF. While we did not see evidence that the instructors were imposing the GRF on any of the Foundations students, we caution that doing so would be counter to the spirit of students and teachers (voluntarily) “sitting on the same side of the table" to tackle hard problems together. It can be difficult to know which students will respond to structure feedback and which will ignore it. The tension between improving engagement with the GRF among some students while respecting that others may not want to engage at all is an important area for future investigation.
On a related note, some students may choose to engage with the GRF in only cursory ways (or not at all) because it erroneously assumes that they experience something upon which they would like to improve on a weekly basis. The prompts may also unintentionally situate challenging experiences as failures. For example, although we did not analyze student reflections in this work, we found one excerpt from a Foundations student particularly insightful. In week 7, the student modified the GRF prompts related to resilience before reflecting on a health issue they were experiencing. At the bottom of their reflection, the student explained why they modified the prompts:
I removed all mention of the word `failure.' … I NEVER would have written this story on the original form. Why? Because it is NOT a story of failure, or even `something that I would like to improve upon' (original prompt #1). Potentially having a [health issue] is NOT a failure on my part … It's just a challenging situation.C06-7
This suggests that engagement in the GRF could be improved by making changes to both the framing and language of the activity.
We hope this paper can be a resource to physics instructors who are using the GRF or similar activities in their classrooms, especially those who are part of CU-Prime or other organizations within The Access Network and may therefore have similar learning goals and environments to Foundations. Together with previous work on structure of student reflections, this work lays the foundation for future research about the ways that feedback and reflections can be mutually informative for one another, and how the GRF impacts student-teacher interactions (short-term) and student retention (long-term). In particular, the impact of the GRF on persistence of physics students from marginalized populations is an important topic for future investigation. Such studies will become possible in the coming years, as the first cohorts of students who completed Foundations begin graduating from college. More generally, we hope that these findings inform the implementation of the GRF in courses with small student-to-teacher ratios, or in large courses with small sections led by teaching or learning assistants. The GRF could also be implemented in informal educational contexts, including academic advising or mentorship programs.
This material is based upon work supported by the NSF under Grant Nos. DUE-1323101 and PHY-1125844. Both authors contributed equally to this work.
|
http://arxiv.org/abs/1701.08666v1 | 20170127121008 | Some surprises in the neutrino cross sections associated with neutrino spin | [
"I. Sahin"
] | physics.gen-ph | [
"physics.gen-ph"
] |
inancsahin@ankara.edu.tr
Department of
Physics, Faculty of Sciences, Ankara University, 06100 Tandogan,
Ankara, Turkey
It is generally assumed that neutrino masses can be neglected to a
high degree of approximation in cross section calculations. This
assumption seems very reasonable since the neutrino masses are
extremely small and the neutrinos are ultrarelativistic fermions at
the energy scales of current experiments. Consequently, in cross
section calculations in the Quantum Field Theory, the Standard Model
neutrinos are frequently assumed to be described by 100% negative
helicity states. This assumption is true in the sense that in the
Standard Model processes the positive helicity states can be safely
neglected for ultrarelativistic neutrinos. On the other hand, the
assumption tacitly asserts that the neutrino fields are completely
longitudinally polarized, i.e., the contribution to the cross
section coming from transverse polarization can be neglected. We
show that this tacit assertion is not correct. Although the Standard
Model cross section for a neutrino with positive helicity goes to
zero as m_ν→ 0, the cross section for a neutrino with
transverse polarization remains finite in that limit. Thus the
contribution coming from transverse polarization cannot be neglected
even in the ultrarelativistic/zero-mass limit. We examine the
consequences of this fact and deduce that it has some unexpected
results in the neutrino cross sections.
Some surprises in the neutrino cross sections associated with neutrino spin
İnanç Şahin
December 30, 2023
===========================================================================
§ INTRODUCTION
According to the Standard Model (SM) of particle physics the
neutrinos couple minimally to other SM particles only through the V-A type vertex, and hence the interaction projects out the left chiral
component of the neutrino field. Consequently, all interacting
neutrinos in the SM can be accepted to be left chiral which can be
written mathematically as 1/2(1-γ_5)u_ν(p)=u_ν(p)
where u_ν(p) is the spinor for the neutrino [In this
paper only Dirac neutrinos and their SM interactions have been
considered.]. It is well known that massless fermions are completely
longitudinally polarized <cit.>. They are described by pure
helicity states which coincide with chirality eigenstates. If the
mass of the fermion is zero then its positive and negative helicity
states coincide with right-handed and left-handed chirality
eigenstates respectively. Since all SM neutrinos are left chiral,
massless neutrinos must be described by 100% negative helicity
states. On the other hand, as we know from experimental results
obtained in Super-Kamiokande and Sudbury Neutrino Observatory that
neutrinos oscillate and they cannot be massless
<cit.>. Although neutrinos are not
massless they possess very tiny masses and hence they are
ultrarelativistic at the energy scales of current experiments.
Consequently, during cross section calculations it is generally
assumed (except for some direct neutrino mass measurement
experiments) that neutrino masses can be neglected and the neutrinos
are described by 100% negative helicity states. Ignoring neutrino
masses is an approximation which is believed to be valid with a high
degree of accuracy for energies much greater than the neutrino mass.
On the contrary, we will show in this paper that the approximation
is not as accurate as expected even in the zero-mass limit.
The crucial point which is generally skipped in the literature is
that the solutions of the free Dirac equation describing a general
spin orientation have a discontinuity at the point m=0. If we take
the zero-mass limit of the spinor u^(s)(p) describing a general
spin orientation we do not get, in general, its value evaluated at
m=0, i.e., lim_m→0u^(s)(p)≠ u^(s)(p)|_m=0
<cit.>. According to the seminal work of Wigner
<cit.>, strictly massless fermions are longitudinally
polarized and described solely by helicity eigenstates. However if
the fermion has a non-zero mass (no matter how small it is), then it
is allowed to have an arbitrary spin orientation different from the longitudinal direction. It is quite surprising that the transverse polarization does not disappear in the zero-mass limit but vanishes abruptly at the point m=0 <cit.>.
This behavior is the origin of the discontinuity of the free Dirac
solutions with general spin. If we restrict ourselves to a special type of Dirac solutions, namely helicity states, we observe that the
helicity states converge to the chirality eigenstates in the
zero-mass limit and we do not encounter any discontinuity at m=0.
However, the zero-mass behavior observed from helicity states is not
valid in general. Although the helicity states converge to the
chirality eigenstates in the zero-mass limit, a spinor with
arbitrary spin orientation does not necessarily result in a
chirality eigenstate in that limit. For instance, the spinor with
transverse polarization (relative to the direction of momentum) is
always given by a mixture of the two chirality eigenstates and hence does not converge to either the left-handed or the right-handed eigenstate even in the zero-mass limit [The explicit
expressions for Dirac spinors describing a general spin orientation
and their behavior in the zero-mass limit can be found in
Ref.<cit.> in detail.]. The discontinuity of the free
Dirac spinors at m=0 induces a similar discontinuity in the cross
sections. If we calculate the cross section for a neutrino with mass
m_ν and then take its m_ν→0 limit what we get is different
from the cross section in which the neutrino is initially assumed to
be massless, i.e., lim_m_ν→0σ(m_ν)≠σ(0).
As a result of this discontinuous behavior, neglecting neutrino
masses in the cross section is not a good approximation even though
neutrino masses are extremely small compared to the energy scale of
the processes that we consider.
The organization of the paper is as follows. In section II we review
the free Dirac spinors describing a general spin orientation and
their discontinuous behavior at m=0. In section III-A we present
cross section calculations in some generic SM processes, assuming
that neutrinos are massless. In section III-B the cross section
calculations are performed for massive neutrinos but the mixing
between different mass eigenstates is omitted for simplicity. In
section III-C, a more realistic situation is considered where both
neutrino masses and mixing are taken into account. In the
conclusions section (section IV) we summarize the results that we
obtain and discuss briefly some of its implications.
§ ZERO-MASS DISCONTINUITY OF THE DIRAC SPINORS
Let us briefly review the free Dirac spinors describing a general
spin orientation. Assume that in the rest frame of the fermion, its
spin is quantized along the direction defined by the unit vector
n⃗. Then, in the rest frame we can write the following
eigenvalue equations
(n⃗·S⃗) u^(↑)_RF=+1/2 u^(↑)_RF, (n⃗·S⃗) u^(↓)_RF=-1/2 u^(↓)_RF;
where S⃗ = 1/2 \begin{pmatrix} σ⃗ & 0 \\ 0 & σ⃗ \end{pmatrix}
is the non-relativistic 4×4 spin matrix. The eigenvectors
(rest spinors) which correspond to the eigenvalues +1/2 and
-1/2 are called spin-up (↑) and spin-down (↓)
spinors respectively.
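For concreteness, and anticipating the choice n⃗=sinθ x̂+ cosθ ẑ made below, the corresponding two-component (Pauli) eigenvectors are the standard half-angle spinors χ^(↑)=(cos(θ/2), sin(θ/2))^T and χ^(↓)=(-sin(θ/2), cos(θ/2))^T; one can verify directly that n⃗·σ⃗ acts on them with eigenvalues +1 and -1 respectively. The same half-angle structure reappears in the boosted spinors below. The spinor for a moving fermion can be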
obtained by applying a Lorentz boost to the spinor at rest. Suppose
that S' frame is moving along negative z-axis with relative
speed v with respect to the rest frame of the fermion S. Then
the observer in the S' frame sees a moving fermion with
four-momentum p^μ=(E,p⃗)=(E,0,0,p_z) and four-spin
<cit.>
s^μ = L^μ_ν (s^ν)_RF = ( p⃗·n⃗/m , n⃗ + [p⃗·n⃗/(m(E+m))] p⃗ )
where L^μ_ν is the Lorentz transformation tensor and
(s^ν)_RF=(0,n⃗) is the spin vector defined in
the rest frame of the fermion.
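As a consistency check (a standard property of the boosted spin four-vector, stated here for completeness), one can verify that s·p = E(p⃗·n⃗)/m - p⃗·n⃗ - (p⃗·n⃗)p⃗^2/[m(E+m)] = [p⃗·n⃗/(m(E+m))](E^2-m^2-p⃗^2) = 0 and s·s = -n⃗^2 = -1, as required for a spin four-vector. Without loss of generality, choose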
n⃗=sinθ x̂+ cosθ ẑ, i.e., n⃗
is in the z-x plane which makes an angle θ (polar angle)
with respect to the z-axis. Then according to an observer in S',
the spin-up (↑) and spin-down (↓) spinors
describing a general spin orientation are given by
<cit.>
u^(↑)(p)=cos(θ/2) u^(+)(p)+sin(θ/2) u^(-)(p)
u^(↓)(p)=cos(θ/2) u^(-)(p)-sin(θ/2) u^(+)(p)
where u^(+)(p) and u^(-)(p) represent positive and negative
helicity spinors which correspond to the special spin orientation
(special orientation of the spin quantization axis) n⃗=p⃗/|p⃗| or equivalently θ=0. It is
obvious from equations (<ref>) or
(<ref>) and (<ref>) that the spin-up and spin-down
spinors are interchanged under the transformation n⃗→ -n⃗ or equivalently θ→π+θ.
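Indeed, a one-line check: under θ→π+θ the half-angle factors transform as cos((π+θ)/2)=-sin(θ/2) and sin((π+θ)/2)=cos(θ/2), so that u^(↑)→ u^(↓) and u^(↓)→ -u^(↑), i.e., the two spinors are interchanged up to an overall phase. Finally, let us stress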
the simple but crucial point which is at the heart of the analysis
presented in this paper. The angle θ that appears in
equations (<ref>) and (<ref>) is not a dynamical
variable. It does not depend on the relative velocity between the
frames S and S'. It is the angle measured in the frame in which
the particle is at rest. Hence, θ is not affected by
relativistic aberration. The angle θ resembles the term
"proper time" which is a frame-independent quantity. Due to this
resemblance, we will call it the "proper angle". We should also
recall that when we talk about the spin orientation of a moving
fermion, we mean the orientation of the spin quantization axis n⃗ in the rest frame of the particle. Therefore the spinors
(<ref>) and (<ref>) for a general spin orientation
describe a fermion whose spin, in its rest frame, is quantized
along n⃗=sinθ x̂+ cosθ ẑ.
Now we are ready to discuss the discontinuous behavior of the Dirac
spinors at m=0. Since the expressions (<ref>) and
(<ref>) for spinors with general spin orientation are
obtained by means of a Lorentz transformation from the rest frame of
the fermion S to a moving frame S', they remain valid for every
relative speed v<c, including speeds arbitrarily close to
c. Consequently, the zero-mass
(m→0) or ultrarelativistic (v→ c) limit of the spinors
u^(↑)(p) and u^(↓)(p) exists and given by
\lim_{m\to 0}u^{(\uparrow)}(p)=\cos(\theta/2)\, u^{(R)}(p)+\sin(\theta/2)\, u^{(L)}(p)
\lim_{m\to 0}u^{(\downarrow)}(p)=\cos(\theta/2)\, u^{(L)}(p)-\sin(\theta/2)\, u^{(R)}(p)
where u^(R)(p)=lim_m→ 0u^(+)(p)=u^(+)(p)|_m=0 and
u^(L)(p)=lim_m→ 0u^(-)(p)=u^(-)(p)|_m=0 are the
right-handed and left-handed chirality eigenstates. On the other
hand, the Lorentz group is non-compact and the parameter space of
the Lorentz group does not contain the point v=c. Therefore, we
cannot perform a Lorentz transformation to the rest frame of a
massless particle. In other words, massless particles do not have a
rest frame. Consequently, the expressions (<ref>) and
(<ref>) become invalid for strictly massless
particles. In the case of massless fermions, we should employ the
little group analysis of Wigner <cit.>. According to Wigner,
massless particles are described by the E(2)-like little group, and
spin orientations other than parallel or antiparallel to
the direction of momentum are not allowed. Hence, massless fermions
must be completely longitudinally polarized and described by pure
helicity states which coincide with chirality eigenstates. We
observe from equations (<ref>) and (<ref>)
that the zero-mass limit of the spinors for a general spin
orientation are not equal to a chirality eigenstate unless
θ=0 or π. Therefore the spinors u^(↑)(p) and
u^(↓)(p) have a discontinuity at m=0 that is
lim_m→0u^(s)(p)≠ u^(s)(p)|_m=0 for
θ≠0,π where u^(s)(p)|_m=0 is either
u^(R)(p) or u^(L)(p). In the case of SM neutrinos
u^(s)(p)|_m=0=u^(L)(p).
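To make the discontinuity tangible, the following minimal numerical
sketch (our own illustration; the Dirac representation, the
normalization u^†u=2E, and the sample values of p, θ and m are
illustrative choices, not taken from the text) constructs the spinors
(<ref>) and (<ref>) and tracks the left-handed fraction
‖L̂u^(↑)‖²/‖u^(↑)‖² as m→0:

import numpy as np

def helicity_spinors(p, m):
    """Positive/negative helicity spinors for momentum p along +z (Dirac rep.)."""
    E = np.sqrt(p**2 + m**2)
    k = p / (E + m)                 # eigenvalue of (sigma.p)/(E+m) is +/- k
    u_plus = np.sqrt(E + m) * np.array([1, 0, k, 0], dtype=complex)
    u_minus = np.sqrt(E + m) * np.array([0, 1, 0, -k], dtype=complex)
    return u_plus, u_minus

gamma5 = np.block([[np.zeros((2, 2)), np.eye(2)],
                   [np.eye(2), np.zeros((2, 2))]])
L_hat = 0.5 * (np.eye(4) - gamma5)  # left chirality projector

p, theta = 1.0, np.pi / 3           # the proper angle is fixed, independent of m
for m in [1e-1, 1e-3, 1e-6]:
    u_plus, u_minus = helicity_spinors(p, m)
    u_up = np.cos(theta / 2) * u_plus + np.sin(theta / 2) * u_minus
    frac = np.linalg.norm(L_hat @ u_up)**2 / np.linalg.norm(u_up)**2
    print(f"m = {m:.0e}: left-handed fraction = {frac:.6f}")
print("sin^2(theta/2) =", np.sin(theta / 2)**2)  # limit value, not 1

The printed fraction converges to sin²(θ/2)=0.25 rather than to the
value 1 required of a strictly massless (purely left-handed) neutrino,
which is precisely the discontinuity stated above.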
§ NEUTRINO CROSS SECTION FOR GENERAL SPIN ORIENTATION
§.§ Massless case
In the SM of particle physics, neutrinos
interact through the weak interaction. Hence any SM process that
contains neutrinos involves W and/or Z boson exchange. The
former generates charged current and the latter generates neutral
current neutrino interactions. In both cases, the interaction
is proportional to the left chirality projection operator L̂=1/2(1-γ_5). Hence, the neutrinos must be left-handed
chiral in interactions. This fact is always true in the SM,
independently of whether the neutrinos are massless or not. Assume
that neutrinos are strictly massless. In this case, the neutrinos
must also be described by a pure negative helicity state. This is
evident since massless fermions are completely longitudinally
polarized, and that their positive and negative helicity eigenstates
coincide with right-handed and left-handed chirality eigenstates.
Possibly because of this reason, sometimes the terms "left-handed"
and "negative helicity" are used interchangeably in the literature
for massless neutrinos, although there are some differences in their
meaning. However one should be very careful in the case of massive
neutrinos and does not use these terms instead of each other even
though neutrino masses are extremely small.[Sometimes the
terms "left-handed" and "right-handed" are used for the eigenstates
of the helicity instead of chirality. This is a matter of convention
but the important thing is not to confuse the eigenstates of the
helicity and chirality. In this paper we use the terms "left-handed"
and "right-handed" for the eigenstates of the chirality and
"negative helicity" and "positive helicity" for the eigenstates of
the helicity.]
Let us first assume that neutrinos are strictly massless and their
flavor and mass eigenstates coincide. Consider single neutrino
production and absorption processes ab→ν c and ν a'→
b'c' where a,b,c,a',b',c' are charged fermions. The tree-level
amplitudes for these processes can be written in the form
M=g(J_C^α J'_C_α)|_m_ν=0
where J_C^α is the charged current that contains the
neutrino field and J'_C^α is the charged current for
charged fermions and g is some constant. In writing equation
(<ref>) we assume that the W propagator
can be approximated as
\frac{g_{\mu\nu}-q_\mu q_\nu/m_W^2}{q^2-m_W^2}\approx-\frac{g_{\mu\nu}}{m_W^2}. The
explicit form of the charged neutrino current is given by
J_C^α =[u̅_νγ^αL̂ u_ℓ] for the process ab→ν c
J_C^α =[u̅_ℓ 'γ^αL̂ u_ν] for the
process ν a'→ b'c'
where L̂=1/2(1-γ_5) is the left chirality
projection operator. The unpolarized cross section is proportional
to the squared amplitude which is averaged over initial and summed
over final spins. In the case of single neutrino production, the sum
over final neutrino spin gives just one term with s_ν=-1 that
corresponds to negative helicity. Hence, for m_ν=0 the produced
neutrinos are completely longitudinally polarized [
The statement "completely longitudinally polarized" is used in the
meaning that the only possible spin orientation is the one which is
parallel or anti-parallel to the direction of momentum.] and
described by a state with 100% negative helicity. In the case of
single neutrino absorption, we do not perform an average over
initial neutrino spins and omit the factor of 1/2 coming from spin
average of initial neutrinos. Omitting initial neutrino spin
average is based on the assumption that all neutrinos in the SM are
described by completely longitudinally polarized negative helicity
states. This assumption is obviously true for massless neutrinos.
The neutrinos which enter neutrino absorption processes should be
produced through some production processes. If the neutrinos are
strictly massless, then all produced neutrinos through a SM process
are indeed completely longitudinally polarized and described by a
state with 100% negative helicity. Nevertheless, as we will see
in the next subsection, this assumption surprisingly leads to an
overestimate of the cross section for neutrinos with non-zero mass,
even though neutrino masses are very tiny.
Now let us consider another simple process, the scattering of a
neutrino from a charged fermion, ν a→ν a. Depending on the type of
the charged fermion a, the process may contain only neutral or
both neutral and charged neutrino currents. If we consider the most
general case, the tree-level amplitude for the process can be
written in the form
M=g(J_N^αJ'_N_α+J_C^α J_C_α)|_m_ν=0
where J_N^α and J'_N^α are the neutral currents
for the neutrino and the charged fermion, respectively. J_C^α is
the charged neutrino current, defined similarly to
(<ref>) and
(<ref>). For completeness, let us write
the explicit form of the neutral neutrino current:
J_N^α =[u̅_ν_fγ^αL̂ u_ν_i]
Here, u_ν_i (u_ν_f) represents the spinor for initial
(final) neutrino field. Similar to the single neutrino absorption,
we do not perform an average over initial neutrino spins and omit
the factor of 1/2 coming from spin average of initial neutrinos.
§.§ Massive case without mixing
Now assume that neutrino masses are not strictly zero although they
are extremely small. We also assume that flavor and mass eigenstates
of the neutrino coincide, i.e., we ignore the mixing. Then, the
polarized cross section for a process where the neutrino has a spin
orientation defined by the proper angle θ, can be obtained by
inserting the spinors (<ref>) or (<ref>) into the
relevant squared amplitudes and performing the phase space
integration. We additionally assume that the energy scale of the
process is much greater than the mass of the neutrino, E>>m_ν.
Then, it is a very good approximation to use the expressions
obtained in the m_ν→0 limit. Therefore, during calculations,
the zero-mass limit of the spinors (<ref>) and
(<ref>) can be used instead of (<ref>) and
(<ref>).
If we insert the spinors u^(↑)(p) and
u^(↓)(p) describing a general spin orientation (spin
orientation defined by the proper angle θ) into charged
neutrino current and take the m_ν→0 limit, we obtain
\lim_{m_\nu\to 0}J_C^{(\uparrow)\alpha}=\sin(\theta/2)\left(J_C^{\alpha}|_{m_\nu=0}\right)
\lim_{m_\nu\to 0}J_C^{(\downarrow)\alpha}=\cos(\theta/2)\left(J_C^{\alpha}|_{m_\nu=0}\right)
where J_C^α|_m_ν=0 is the charged current for
massless neutrinos defined in (<ref>) or
(<ref>). While calculating the spin-up and
spin-down neutrino currents in the above equations, we make use of
the following identities: L̂{lim_m_ν→
0u^(+)(p)}=L̂ u^(R)(p)=0 and L̂{lim_m_ν→ 0u^(-)(p)}=L̂
u^(L)(p)=u^(L)(p). We also use the continuity of the helicity
states at m_ν=0: lim_m_ν→
0u^(+,-)(p)=u^(+,-)(p)|_m_ν=0=u^(R,L)(p). The squared
amplitude for single neutrino production or absorption processes
ab→ν c or ν a'→ b'c' discussed in the previous
subsection is then found to be
\lim_{m_\nu\to 0}|M^{(\lambda)}|^2=\frac{(1-\lambda\cos\theta)}{2}\left(|M|^2|_{m_\nu=0}\right)
where |M|^2|_m_ν=0 is the squared amplitude for massless
neutrinos and λ=+1 corresponds to spin-up (↑) and
λ=-1 corresponds to spin-down (↓) polarization.
We observe from (<ref>) that the squared amplitude
and consequently the cross section has a discontinuity at m_ν=0.
For instance, if we choose θ=π/2 (transverse polarization),
the m_ν→0 limit of the cross section gives half of the cross
section for massless neutrinos:
\lim_{m_\nu\to 0}\sigma^{(\lambda)}|_{\theta=\pi/2}=\frac{1}{2}\left(\sigma|_{m_\nu=0}\right).
The cross section for transverse polarization remains finite in the
m_ν→0 limit, but it vanishes abruptly at the point m_ν=0. Let us examine the
zero-mass behavior of the cross section when the neutrinos are
described by helicity states. The negative (positive) helicity
corresponds to the choice λ=-1 and θ=0 (λ=+1
and θ=0). We see from (<ref>) that the
cross section for positive helicity goes to zero and the cross
section for negative helicity goes to σ|_m_ν=0 as
m_ν→0. Hence, if we restrict ourselves to special spin
orientations namely helicity states, we do not encounter any
discontinuity at m_ν=0. However, the zero-mass continuity
observed from helicity states is misleading and does not hold true
in general as has been clearly shown above.
The longitudinal polarization of the neutrino is usually defined as
follows
P_{\mathrm{long}}=\frac{\sigma^{(+)}-\sigma^{(-)}}{\sigma^{(+)}+\sigma^{(-)}}
where σ^(+) and σ^(-) are the cross sections for
positive and negative helicity neutrinos. P_long was
calculated for various SM processes in the literature (for example,
see Ref. <cit.>). It was shown that
P_long goes to -1 as the neutrino mass approaches zero.
Indeed as we have discussed in the previous paragraph, according to
the squared amplitude (<ref>),
lim_m_ν→0σ^(+)=0⇒lim_m_ν→0P_long=-1. However, it is not correct to
conclude from this result that the neutrinos become completely
longitudinally polarized and described by 100% negative helicity
states in the m_ν→0 limit. This is evident since the helicity
basis is not the only basis that spans the Hilbert space of the spin
states. A transversely polarized state is given by the superposition
of positive and negative helicity states and vanishing of the
positive helicity does not require the transverse polarization to be
zero. As we have discussed, although the cross section for positive
helicity goes to zero as m_ν→0, the cross section for
transverse polarization does not go to zero, instead it approaches
half of the cross section for massless neutrinos in that limit.
Therefore, the quantity P_long defined in
(<ref>) is not a genuine measure of the
longitudinal polarization. It measures only the asymmetry between
positive and negative helicity states. If we define the quantity
which we call the degree of transverse polarization by
P_{\mathrm{trans}}=\frac{\sigma^{(T)}}{\sigma^{(+)}+\sigma^{(-)}}
we deduce that lim_m_ν→0P_trans=1/2. Here,
σ^(T) represents the cross section for either spin-up
(λ=+1 and θ=π/2) or spin-down (λ=-1 and
θ=π/2) state of the transverse polarization.
The polarized cross section for neutrino scattering process ν
a→ν a can be calculated in a similar manner. If we insert the
spinors for a general spin orientation into neutral neutrino current
and take the m_ν→0 limit, we obtain
\lim_{m_\nu\to 0}J_N^{(\lambda_i,\lambda_f)\alpha}=\left[\frac{(1-\lambda_i\cos\theta_i)}{2}\right]^{1/2}\left[\frac{(1-\lambda_f\cos\theta_f)}{2}\right]^{1/2}\left(J_N^{\alpha}|_{m_\nu=0}\right)
where λ_i and θ_i (λ_f and θ_f) belong
to the initial state (final state) neutrino and
J_N^α|_m_ν=0 is the neutral current for massless
neutrinos defined in (<ref>). The squared
amplitude for neutrino scattering process ν a→ν a is then
found to be
\lim_{m_\nu\to 0}|M^{(\lambda_i,\lambda_f)}|^2=\left(\frac{1-\lambda_i\cos\theta_i}{2}\right)\left(\frac{1-\lambda_f\cos\theta_f}{2}\right)\left(|M|^2|_{m_\nu=0}\right)
where |M|^2|_m_ν=0 is the squared amplitude for massless
neutrinos. The cross section of the neutrino-electron scattering for
polarized initial state neutrinos with general spin orientation and
unpolarized final state neutrinos, was calculated in
Ref.<cit.>. To obtain the cross section for unpolarized final
state neutrinos we should sum the squared amplitude over
λ_f, which gives:
\lim_{m_\nu\to 0}|M^{(\lambda_i)}|^2=\sum_{\lambda_f=\pm 1}\left\{\lim_{m_\nu\to 0}|M^{(\lambda_i,\lambda_f)}|^2\right\}=\left(\frac{1-\lambda_i\cos\theta_i}{2}\right)\left(|M|^2|_{m_\nu=0}\right).
This squared amplitude coincides with the result of
Ref. <cit.>, with the single difference that the m_ν→0 limit
on the left-hand side of (<ref>) appears in our
calculations but is absent in Ref. <cit.>. Instead, the
spin-dependent squared amplitude was evaluated at m_ν=0, i.e.,
according to <cit.>:
|M^(λ_i)|^2|_m_ν=0=(1-λ_icosθ_i/2)(|M|^2|_m_ν=0).
It seems the authors assumed that the spinors for a general spin
orientation have a continuous behavior in the massless limit, i.e.,
they assumed lim_m→0u^(s)(p)= u^(s)(p)|_m=0. We also
would like to draw the reader's attention to the following point. We see
from equations (<ref>), (<ref>)
and (<ref>) that the perpendicular component
(relative to momentum direction) of the spin three-vector n⃗
does not appear in the squared amplitudes. Recall that we choose
n⃗=sinθ x̂+ cosθ ẑ and p⃗=pẑ. Therefore n⃗=(n_⊥,0,n_∥) where
n_⊥=sinθ and n_∥=cosθ. Then
equation (<ref>) can be written as
\lim_{m_\nu\to 0}|M^{(\lambda_i)}|^2=\left(\frac{1-\lambda_i n_\parallel}{2}\right)\left(|M|^2|_{m_\nu=0}\right).
However, the disappearance of n_⊥ from the squared amplitude
does not imply that the squared amplitude is independent of
n_⊥. This is obvious because we have a condition between
n_⊥ and n_∥ obtained from the normalization of
the spin four-vector: s^μ s_μ=-1 ⇒ n⃗·n⃗=n_⊥^2+n_∥^2=1. Therefore we have one independent
parameter representing the orientation of the spin. One may choose
n_⊥ or n_∥ as the independent
parameter or, for instance, the proper angle θ as we did in
this paper. Regardless of which parameter we choose, the cross
section for transverse polarization evaluated in the m_ν→0
limit gives half of the cross section for massless neutrinos:
n_\perp=1 \Rightarrow n_\parallel=0 \Rightarrow \lim_{m_\nu\to 0}\sigma=\frac{1}{2}\left(\sigma|_{m_\nu=0}\right). Thus, the
production, absorption and scattering probability of the neutrinos
with transverse polarization cannot be neglected.
The zero-mass discontinuity that we have discussed has important
implications for neutrino physics. It makes a significant distinction
between the cases in which neutrinos are exactly massless and
neutrinos have non-zero but very tiny masses. In the former case,
all SM neutrinos are described by completely longitudinally
polarized negative helicity states. Therefore, the factor 1/2 due
to spin average of initial state neutrinos is omitted for processes
where neutrinos take part in the initial state. However, in the
latter case it is no longer possible to assume that neutrinos are
completely longitudinally polarized. This is obvious because the
production cross section and hence the production probability of the
neutrinos with transverse spin orientation through SM processes
cannot be neglected. Therefore some part of the neutrinos in the SM
is transversely polarized. Consequently, the spin average of initial
state neutrinos in a process cannot be neglected and the cross
section is reduced due to this spin average. Our reasoning can be
presented in detail as follows: Consider a process in which the
neutrinos take part in the initial state. For example it might be
the neutrino absorption or scattering process. In order to calculate
the unpolarized total cross section we have to average over initial
and sum over final state spins. Some of the initial state neutrinos
are transversely polarized. Therefore for these neutrinos, spin
average is performed over spin-up and spin-down states of the
transverse polarization (FIG.<ref>). Then in the m_ν→0
limit, the unpolarized cross section gives
\lim_{m_\nu\to 0}\sigma^{(\mathrm{unpol})}=\frac{1}{2}\sum_{\lambda_i=\pm 1}\lim_{m_\nu\to 0}\sigma^{(\lambda_i)}=\frac{1}{2}\left(\sigma|_{m_\nu=0}\right)
where we use
\lim_{m_\nu\to 0}\sigma^{(\lambda_i=+1)}=\lim_{m_\nu\to 0}\sigma^{(\lambda_i=-1)}=\frac{1}{2}\left(\sigma|_{m_\nu=0}\right)
for θ=π/2 (transverse polarization). We see from equation
(<ref>) that the unpolarized cross section is
reduced by a factor of 1/2 compared to the cross section for
massless neutrinos. Hence, for transversely polarized initial state
neutrinos we obtain an average factor of 1/2. However, not all
initial state neutrinos are transversely polarized. Some others are
longitudinally polarized. Since the cross section for neutrinos with
positive helicity is zero in the zero-mass limit, longitudinally
polarized initial neutrino states consist of 100% negative
helicity states. In this case we do not perform an average over
initial neutrino spins, and the unpolarized cross section is equal
to the cross section for massless neutrinos:
\lim_{m_\nu\to 0}\sigma^{(\mathrm{unpol})}=\lim_{m_\nu\to 0}\sigma^{(-)}=\left(\sigma|_{m_\nu=0}\right).
We have deduced from the above analysis that if we consider
transversely polarized part of the initial neutrinos, then 50% of
them are spin-up and other 50% are spin-down. We should then
perform an average over initial spins which gives a factor of 1/2.
On the other hand, if we consider longitudinally polarized part of
the initial neutrinos, then 100% of them are negative helicity
and none of them are positive helicity. Then we do not perform an
average and instead of 1/2 we get a factor of 1. Hence, an
important question arises: By which factor is the cross section
reduced? In order to give an answer to this question, let us
consider the following gedankenexperiment. Assume that neutrinos are
detected in a particle detector via the absorption process ν
a'→ b'c'. Without loss of generality, also assume that all the
detected neutrinos, are produced via the production processes ab→ν c. FIG.<ref> represents a schematic diagram for this
gedankenexperiment.
We will use the subscript "1" to denote the production process
ab→ν c and subscript "2" to denote the absorption process
ν a'→ b'c'. If the neutrinos are produced in a particle
accelerator then the total number of produced neutrinos is given by
N_1=σ_1^(unpol) L_1, where
σ_1^(unpol) is the unpolarized total cross section
and L_1 is the integrated luminosity. If we assume that all
of the produced neutrinos have a fixed spin orientation defined by the
proper angle θ, then the number of produced neutrinos with
this spin orientation is given by
N_1^(λ)(θ)=σ_1^(λ)(θ) L_1=(1-λcosθ)/2(σ_1|_m_ν=0) L_1
where we take m_ν→0 limit and make use of equation
(<ref>). We observe from (<ref>)
that the total number of produced neutrinos is independent from the
proper angle θ:
N_1=N_1^(λ=+1)(θ)+N_1^(λ=-1)(θ)=(σ_1|_m_ν=0) L_1.
Now we consider a massive detector which is composed of a huge
number of atoms. The produced neutrinos can interact with the
electrons and nucleons (or quarks) of the detector through the
process ν a'→ b'c' and the detection occurs. For simplicity we
assume that all of the produced neutrinos are passing through the
detector. Then, the number of detected neutrinos with spin
orientation defined by the proper angle θ can be written as
N_2^(λ)(θ)=N_1^(λ)(θ)P^(λ)(θ)
=σ_1^(λ)(θ)σ_2^(λ)(θ) L_1L_2.
Here P^(λ)(θ)=σ_2^(λ)(θ)L_2 is the
detection probability of a single polarized neutrino and L_2 is a
constant that depends on the parameters of the detector. For
instance, L_2 depends on the number of electrons or nucleons per
unit volume, the fiducial mass of the detector, etc. Since the
details of the detector are irrelevant to our analysis, we do not
consider its explicit form as a function of detector parameters and
assume that it is just a constant. According to equation
(<ref>) the zero-mass limit of the production and
absorption cross sections can be written as
lim_m_ν→0σ_1,2^(λ)(θ)=(1-λcosθ)/2(σ_1,2|_m_ν=0).
The total number of detected neutrinos is then
N_2^(λ=+1)(θ)+N_2^(λ=-1)(θ)=[sin^4(θ/2)
+cos^4(θ/2)]N_2.
where N_2=(σ_1|_m_ν=0)
(σ_2|_m_ν=0) L_1L_2 is the total number of
detected neutrinos in the case where all produced neutrinos are massless. On
the left-hand side of (<ref>), the limit
m_ν→0 is taken but not shown explicitly.
In the above analysis we assume that all the neutrinos are produced
having the same spin orientation with respect to the direction of
momentum, i.e., with the same proper angle θ. Specifically, if
we assume that all produced neutrinos are transversely polarized
(θ=π/2), then the total number of detected neutrinos is N_2/2.
On the other hand, if all produced neutrinos are longitudinally
polarized (θ=0), then the total number of detected neutrinos is
N_2. However, in a real situation the produced beam is comprised
of neutrinos with different spin orientations. Hence, we should
consider every possible spin orientation, and an average over
different spin orientations has to be performed. It is easy to show
that the average of the trigonometric expressions in the square
parentheses yields ⟨[sin^4(θ/2)
+cos^4(θ/2)]⟩=2/3. Here we
should note that a statistical weight of sinθ is used during
the average. Therefore, unlike for other standard model fermions,
the spin average of an initial state neutrino in a SM process yields a
factor of 2/3 instead of 1/2. Hence the total cross section is
reduced by this factor compared to the case in which neutrinos are
described by 100% negative helicity states. Here we should
emphasize that the standard model fermions other than neutrinos
carry electric and/or color charge and they interact dominantly
through vector type coupling. Since the vector coupling does not
provide a preferred spin orientation, all different orientations of
their spin three-vector n⃗ are equally probable unless they
are intentionally produced polarized. Therefore, for initial state
electrons, quarks, etc. the average over proper angle θ is
omitted. On the other hand, the average over spin-up and spin-down
states is performed and yields a factor of 1/2.
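For completeness, the quoted average follows from a one-line integral.
Substituting c=\cos\theta and using
\sin^4(\theta/2)+\cos^4(\theta/2)=(1+c^2)/2, the \sin\theta-weighted
average is
\left\langle\sin^4(\theta/2)+\cos^4(\theta/2)\right\rangle
=\frac{\int_0^{\pi}\left[\sin^4(\theta/2)+\cos^4(\theta/2)\right]\sin\theta\,d\theta}{\int_0^{\pi}\sin\theta\,d\theta}
=\frac{1}{2}\int_{-1}^{1}\frac{1+c^2}{2}\,dc=\frac{2}{3}.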
Let us summarize what we have done so far: We have deduced that, due
to the zero-mass discontinuity in the cross section, the case in which
neutrinos are exactly massless and the case in which they have non-zero
but very tiny masses have completely different implications. Therefore,
contrary to the previously accepted opinion in the literature, it is
not a good approximation to neglect neutrino masses during cross
section calculations even though neutrino masses are very small and
the energy scale of the processes is much greater than the neutrino
mass. We have deduced a surprising result that the total cross
section of the process where a neutrino takes part in the initial
state is reduced by a factor of 2/3 due to spin average. As far as
we know, this fact has been overlooked in the literature. In the
previous studies on this subject, the spin average of initial state
neutrinos was omitted for processes where neutrinos take part in the
initial state.
The total neutrino (anti-neutrino) cross sections have been measured
in a large number of experiments since the famous experiments of
Cowan and Reines <cit.>. In all these
experiments the measured cross sections seem not to be reduced by
the factor 2/3. They confirm the fact that neutrino states are
almost 100% negative helicity. Possibly because of the
experimental verification of the neutrino helicity, theoretical
predictions have not been examined in much detail by previous
studies. However, as we have deduced, a straightforward calculation
taking into account the existing zero-mass discontinuity of the free
Dirac spinors yields a discrepancy between quantum field theory
predictions and the experimental results. One possible solution to
this problem might be provided by adding a new simple hypothesis to
established axioms of quantum field theory <cit.>. The
scope of this paper is limited; we do not aim to discuss possible
solutions to the discrepancy. Our purpose is just to reveal the
surprising consequences of the zero-mass discontinuity of the Dirac
spinors on neutrino cross sections.
In closing this subsection, we would like to draw the reader's
attention to another surprising consequence of the zero-mass
discontinuity of the Dirac spinors. Throughout this paper, all
calculations have been carried out considering only Dirac neutrinos.
It is assumed that Dirac and Majorana neutrino cross sections
coincide in the m_ν→0 limit <cit.>.
This fact is based on the assumption that both Dirac and Majorana
spinors become completely left-handed chiral in the m_ν→0
limit. However, as we have discussed in section II, a free Dirac
spinor with arbitrary spin orientation does not necessarily result
in a chirality eigenstate in the zero-mass limit. Therefore,
contrary to expectations, Dirac and Majorana cross sections can lead
to different results even though the limit m_ν→0 is performed.
§.§ Massive case with mixing
We have so far ignored the mixing between different mass eigenstates
of the neutrino. However, in a realistic situation the neutrinos
interact through weak interaction in flavor eigenstates which are
given by a superposition of the mass eigenstates. The mixing
equation is given by \nu_{\ell L}=\sum_{i=1}^{3}U_{\ell i}\,\nu_{iL},
where U_{\ell i} is the Pontecorvo-Maki-Nakagawa-Sakata
(PMNS) matrix element <cit.>. Here we use the
subscript ℓ to denote the flavor and subscript i to denote
the mass eigenstates. Therefore, the scattering process for
ν_ℓ consists of separate processes for the mass eigenstates
ν_i, i=1,2,3. The cross section calculations are then
performed for neutrino mass eigenstates and the contributions coming
from different mass eigenstates are added. According to the minimal
extension of the SM with massive neutrinos, the scattering amplitude
for ν_i is almost the same as the amplitude for a neutrino without
mixing. The only difference is that the charged neutrino current
picks up an extra factor U_ℓ i. It is obvious that the
surprising result that we encountered in the previous subsection
also holds when we consider the processes ab→ν_ic, ν_ia'→
b'c' and ν_ia→ν_ia, where the neutrinos are taken to be in
a mass eigenstate. The neutrino mixing does not solve the problem;
on the contrary, the problem becomes worse than it was before. Since
various different spin orientations of the neutrino contribute to
the cross section, we can conceive the flavor eigenstate as a
superposition of the mass eigenstates where each mass eigenstate may
have an arbitrary spin orientation. Then, the spin state of the
flavor eigenstate becomes ambiguous. One may assume the flavor
neutrino has a mixed spin state, in the sense that each of its
constituent mass eigenstates has a different spin orientation. Let
us consider the single neutrino production or absorption processes
discussed in the previous subsections. If we sum the squared
amplitudes that belong to individual mass eigenstates we expect to
obtain the squared amplitude for the flavor eigenstate:
|M_ℓ|^2=∑_i|M_i|^2. According to equation
(<ref>) the sum over mass eigenstates gives:
\lim_{m_\nu\to 0}\sum_i|M_i^{(\lambda_i)}|^2=\sum_i\left[|U_{\ell i}|^2\,\frac{(1-\lambda_i\cos\theta_i)}{2}\right]\left(|M_\ell|^2|_{m_\nu=0}\right)
where (|M_ℓ|^2|_m_ν=0) is the squared
amplitude for the flavor neutrino evaluated at m_ν=0. In case
all spin orientations of the mass eigenstates are equal
(λ_1=λ_2=λ_3; θ_1=θ_2=θ_3), we
obtain the expected result:
lim_m_ν→0∑_i|M^(λ_i)_i|^2=(1-λcosθ)/2(|M_ℓ|^2|_m_ν=0)=lim_m_ν→0|M^(λ)_ℓ|^2
where we use the unitarity of the PMNS matrix. However, we do not
have any reasonable explanation for the choice
λ_1=λ_2=λ_3; θ_1=θ_2=θ_3. In
general, spin orientations of different mass eigenstates can be
different.
§ CONCLUSIONS
The helicity states have a continuous behavior in the massless
limit. When we take the m→0 limit, a helicity state converges to one
of the chirality eigenstates and becomes completely left-handed or
right-handed chiral. The zero-mass behavior observed from helicity
states can make one think that the massless limit is always smooth.
However, this behavior is specific to helicity states and is not
valid in general. The massless limit has some subtleties in the case of
spinors with general spin orientations. The angle which defines the
spin orientation of a fermion is an invariant quantity by
definition. Hence, the spin orientation of a fermion does not
necessarily become parallel or anti-parallel to the momentum
direction and does not necessarily result in a chirality eigenstate
in the zero-mass limit. This behavior makes the free Dirac solutions
discontinuous at m=0. We explore the consequences of this zero-mass
discontinuity of the Dirac spinors and show that it has surprising
consequences for neutrino cross sections.
The most challenging consequence of the zero-mass discontinuity is
that it yields a discrepancy between theoretical predictions and the
experimental results. We call this discrepancy the neutrino helicity
problem. The theoretical predictions of the cross section for
massive neutrinos with general spin orientation have been discussed
for decades. In this respect, many of the calculations presented in
this paper are not totally novel; the new idea of the paper lies in
the reinterpretation of the dependence of the cross section on the
spin three-vector n⃗. Although the resultant discrepancy is
very disturbing, we have decided to present our results since we think
that they are concrete predictions of the theory. The neutrino
helicity problem points out that something is wrong in the
assumptions used in the theory. The polarized cross section
calculation technique for a general spin orientation is a
conventional method which is used successfully for other fermions.
Indeed, top quark spin polarization has been measured for various
spin orientations and it was found to be consistent with the
theoretical predictions <cit.>.
Therefore, the problem should be associated with the neutrino
nature.
The author thanks Prof. A. U. Yılmazer for helpful criticism of
the manuscript and valuable suggestions.
99
Wigner E. P. Wigner, “On Unitary Representations of the Inhomogeneous Lorentz Group,” Ann. Math. 40, 149 (1939).
Fukuda:1998mi
Y. Fukuda et al. [Super-Kamiokande Collaboration],
“Evidence for oscillation of atmospheric neutrinos,”
Phys. Rev. Lett. 81, 1562 (1998)
[hep-ex/9807003].
Ahmad:2001an
Q. R. Ahmad et al. [SNO Collaboration],
“Measurement of the rate of ν_e+d → p+p+e^- interactions produced by ^8B solar neutrinos at the Sudbury Neutrino Observatory,”
Phys. Rev. Lett. 87, 071301 (2001)
[nucl-ex/0106015].
Sahin:2016bjs
I. Sahin,
“Zero-mass limit of a Dirac spinor with general spin
orientation,” Eur. J. Phys. 37, 065404 (2016)
;arXiv:1606.04116 [physics.gen-ph].
GreinerRQM W. Greiner, Relativistic Quantum Mechanics,
(Berlin: Springer, 1994).
GreinerQED W. Greiner and J. Reinhardt, Quantum Electrodynamics,
(Berlin: Springer, 1994).
Barenboim:1996cu
G. Barenboim, J. Bernabeu and O. Vives,
“Left-handed neutrino disappearance probe of neutrino mass and character,”
Phys. Rev. Lett. 77, 3299 (1996)
[hep-ph/9606218].
Kayser B. Kayser and R.E. Shrock, “Distinguishing between Dirac and Majorana neutrinos in neutral-current reactions,” Phys. Lett. B 112, 137 (1982).
Reinescowan1 F. Reines and C. L. Cowan Jr., “Detection of The Free
Neutrino,” Phys. Rev. 92, 830 (1953).
Reinescowan2 C. L. Cowan, F. Reines, F. B. Harrison, H. W. Kruse and
A. D. McGuire,
“Detection of the free neutrino: A Confirmation,”
Science 124, 103 (1956).
Sahin:2015ofl
I. Sahin,
“A hypothesis on neutrino helicity,”
arXiv:1601.00627 [physics.gen-ph].
Barranco:2014cda
J. Barranco, D. Delepine, V. Gonzalez-Macias, C. Lujan-Peschard and M. Napsuciale,
“Scattering processes could distinguish Majorana from Dirac neutrinos,”
Phys. Lett. B 739, 343 (2014)
[arXiv:1408.3219 [hep-ph]].
Bilenky S. Bilenky, Introduction to the Physics of Massive and Mixed
Neutrinos,
(Berlin: Springer, 2010).
Kayserbook B. Kayser, F. Gibrat-Debu and F. Perrier, The Physics of Massive
Neutrinos, (Singapore: World Scientific, 1989).
Abazov:2016tba
V. M. Abazov et al. [D0 Collaboration],
“Measurement of top quark polarization in t t lepton+jets final states,”
Phys. Rev. D 95, 011101 (2017)
[arXiv:1607.07627 [hep-ex]].
Abazov:2015fna
V. M. Abazov et al. [D0 Collaboration],
“Simultaneous measurement of forward-backward asymmetry and top polarization in dilepton final states from tt̅ production at the Tevatron,”
Phys. Rev. D 92, 052007 (2015)
[arXiv:1507.05666 [hep-ex]].
Gal Mazor, Department of Electrical Engineering, Technion - Israel Institute of Technology, Israel
Lior Weizman, Department of Electrical Engineering, Technion - Israel Institute of Technology, Israel
Assaf Tal, Department of Chemical Physics, Weizmann Institute of Science, Rehovot, Israel
Yonina C. Eldar, Department of Electrical Engineering, Technion - Israel Institute of Technology, Israel
Purpose: Magnetic Resonance Fingerprinting (MRF) is a relatively new approach that provides quantitative MRI measures using randomized acquisition.
Extraction of physical quantitative tissue parameters is performed off-line, without the need for the patient's presence, based on acquisition with varying parameters and a dictionary generated according to Bloch equation simulations.
MRF uses hundreds of radio frequency (RF) excitation pulses for acquisition, and therefore a high under-sampling ratio in the sampling domain (k-space) is required for reasonable scanning time. This under-sampling causes spatial artifacts that hamper the ability to accurately estimate the tissue's quantitative values. In this work, we introduce a new approach for quantitative MRI using MRF, called magnetic resonance Fingerprinting with LOw Rank (FLOR).
Methods: We exploit the low rank property of the concatenated temporal imaging contrasts, on top of the fact that the MRF signal is sparsely represented in the generated dictionary domain. We present an iterative scheme that consists of a gradient step followed by a low rank projection using the singular value decomposition.
Results: Experimental results consist of retrospective sampling, which allows comparison to a well-defined reference, and prospective sampling, which shows the performance of FLOR for a real-data sampling scenario.
Both experiments demonstrate improved parameter accuracy compared to other compressed-sensing and low-rank based methods for MRF at 5% and 9% sampling ratios, for the retrospective and prospective experiments, respectively.
Conclusions:
We have shown through retrospective and prospective experiments that by exploiting the low rank nature of the MRF signal, FLOR recovers the MRF temporal undersampled images and provides more accurate parameter maps compared to previous iterative methods.
Low Rank Magnetic Resonance Fingerprinting
Gal Mazor, Lior Weizman, Assaf Tal, Yonina C. Eldar
===================================================
§ INTRODUCTION
Quantitative Magnetic Resonance Imaging (QMRI) is widely used to measure tissue's intrinsic spin parameters such as the spin-lattice magnetic relaxation time (T1) and the spin-spin magnetic relaxation time (T2) <cit.>. Since tissue relaxation times vary
in disease, QMRI enables the diagnosis of different pathologies, including multiple sclerosis (MS), Alzheimer, Parkinson, epilepsy and cancer <cit.>. In addition, the knowledge of tissue relaxation times allows generation of many clinical MR imaging contrasts (such as FLAIR and STIR) off-line, and saves a significant amount of scanning time.
Despite the advantages of QMRI, clinical MRI today mainly consists of weighted images. Values in weighted MR imaging are given in arbitrary units, since the signal strength is influenced by both intrinsic parameters (such as relaxation times and concentration of hydrogen atoms) and non-intrinsic ones. Non-intrinsic parameters include transmit and receive coils sensitivities, patient position in the scanner, vendor based scanner specific parameters, and local temperature. Weighted MRI images therefore lack quantitative information and as a result, different materials may exhibit similar or identical gray level values. In addition, weighted MRI contrast values vary between different follow-up scans of the same patient. This fact may impair disease monitoring, if based solely on those images. To date, weighted MRI scans are more common than QMRI in the clinic, due to the extremely long times often associated with QMRI using conventional techniques <cit.>.
A plethora of methods have been proposed for QMRI. Earlier approaches are based on a series of spin echo (SE) or inversion recovery (IR) images with varying repetition times (TR) and echo times (TE) to evaluate each magnetic parameter (T1 or T2) separately. After acquisition, the curve of intensities for each pixel is matched to the expected magnetic signal, representing the appropriate magnetic tissue parameters <cit.>. Accelerated methods for QMRI consist of the popular driven equilibrium single pulse observation of T1 (DESPOT1) <cit.> or T2 (DESPOT2) <cit.> and the IR TrueFISP for simultaneous recovery of T1 and T2 quantitative maps <cit.>.
Both techniques do not require long waiting times between excitations to reach an equilibrium state, and therefore they are significantly faster. Later works shortened the acquisition time required by those methods by under-sampling the data in both spatial and temporal domains <cit.>.
However, obtaining accurate and high resolution QMRI in a reasonable clinical scanning time is still very challenging.
An approach for QMRI called magnetic resonance fingerprinting (MRF) has drawn increased attention in the last few years <cit.>. MRF uses pseudo-randomized acquisitions to generate many different imaging contrasts, acquired at a high under-sampling ratio.
It exploits the different acquisition parameters over time to produce a temporal signature, a “fingerprint", for each material under investigation. By matching this unique signature to a pre-generated set of simulated patterns, the quantitative parameters can be extracted off-line. This approach saves valuable scan time compared to previous methods for accelerated QMRI, demonstrating promising efficient and reliable results.
MRF utilizes the fact that each tissue responds differently to a given quasi-random pulse sequence. By varying the acquisition parameters (e.g. repetition time (TR), echo time (TE), and radio frequency flip angle (FA)),
unique signals are generated from different tissues. After acquisition, a pattern recognition algorithm is used to match the acquired signal from each voxel to an entry from a dictionary of possible tissue candidates. The dictionary entries are created by simulating the tissue's response to the sequence for a range of T1 and T2 parameter values, using the Bloch equations. The resulting dictionary contains the temporal signatures of various simulated materials, given the pseudo-random pulse sequence. The quantitative parameters, such as the tissue's T1 and T2 relaxation times, can be retrieved from the data by matching the signature acquired to the most correlated entry in the dictionary.
In MRI, data is acquired in the Fourier domain of the spatial image (a.k.a. k-space). The acquisition time of a high resolution, single contrast 3D MRI lasts a substantial amount of time. Since MRF is based on rapid acquisition of hundreds of different contrasts, severe under-sampling is performed in k-space to obtain the temporal resolution required for MRF. Figure <ref> demonstrates the effect of fully sampled versus under-sampled data, acquired with spiral trajectories and recovered using the inverse non uniform fast Fourier transform (NUFFT) <cit.>. It can be seen that the under-sampled data is blurred and introduces aliasing artifacts. Figure <ref> illustrates the noise and under-sampling artifacts of a representative brain voxel intensity as function of time, where the data is acquired with an MRF sequence based on fast imaging with steady state precession (FISP) <cit.>. Clearly, under-sampling also introduces a substantial level of noise in the time domain. In addition, MRF uses a dictionary with discrete values, while QMRI values are continuous. This leads to quantization error, depending on the values represented in the dictionary.
While in the original MRF paper <cit.> these imaging artifacts are not handled explicitly,
recent works have implemented advanced reconstruction techniques to overcome under-sampling artifacts.
Approaches based on exploiting the sparsity of the signal in some transform domain in a compressed sensing (CS) <cit.> framework are examined by Davies et al. <cit.> and Wang et al. <cit.>. Zhao et al. <cit.> formulated MRF as a maximum likelihood (ML) problem, and developed an iterative reconstruction approach for each time point. While these techniques showed improved results compared to the original MRF method, they do not exploit the temporal similarity between adjacent time-points, which is intrinsic to the dynamic acquisition used in MRF.
A common approach to exploiting redundancy in dynamic MRI is based on modeling the acquired data as low-rank. This modelling was successfully applied for various dynamic MRI applications, such as cardiac imaging <cit.> and functional MRI <cit.>. In the context of MRF, early works use low-rank MRF to compress the dictionary for faster reconstruction <cit.>. This saves reconstruction time, but does not necessarily improve the quality of the reconstructed maps or the acquisition time. The first introduction of a low-rank constraint for improved reconstruction in MRF was proposed by Zhao et al. <cit.>, followed by a sub-space constrained low-rank approach introduced by us <cit.>. Extensions of these ideas include adding a sparse term to the low-rank-based reconstruction <cit.> (a.k.a. robust PCA <cit.>) and representing the data as low-rank in the k-space domain <cit.>. Recently, a few approaches that utilize prior knowledge of the dictionary together with a low-rank constraint have been published. Zhao et al. <cit.> presented an efficient algorithm that performs a singular value decomposition (SVD) on the dictionary and embeds the right singular vectors in the solution, to obtain better estimation of the temporal signatures. A similar approach was presented by Assländer et al. <cit.>, who embed the left singular vectors in the solution. These methods show that exploiting the redundancy via a low-rank based solution improves the results compared to a sparsity approach. However, the obtained reconstructed maps still suffer from quantization error, due to the nature of a matched-filter based solution that matches a single dictionary atom to a single pixel. In addition, most of these methods are based on a fixed rank that must be set in advance and may be difficult to determine.
In this paper we extend our initial idea presented in our conference paper <cit.> and enforce a low-rank constraint in the image domain together with constraining the solution to the dictionary subspace. In particular, we exploit the low-rank property of the temporal MRF domain, via an iterative scheme that consists of a gradient step followed by a projection onto the subspace spanned by the dictionary elements in order to constrain the structure of the tissue behaviour simulated in the dictionary. The estimated images are then decomposed using SVD and the singular values are soft-thresholded to reduce the nuclear norm value in every step. Our approach, called magnetic resonance Fingerprinting with LOw Rank constraint (FLOR), incorporates three main advantages that were only partially introduced in previous work:
* FLOR formulates the problem as a convex problem. The solution is then rigorously developed based on the incremental subgradient proximal method <cit.>. This technique is known to convergence to the global minimum, regardless of the initial starting point.
* FLOR is based on a nuclear-norm solution, and does not require fixing the rank in advance. This leads to a solution that adapts the rank according to the nature of the specific dataset.
* The subspace constraint in FLOR is not limited to dictionary items, but rather allows a solution that is spanned by the dictionary. This enables better reconstruction of the temporal imaging contrasts. It also allows generation of quantitative parameters that do not necessarily exist in the simulated dictionary, thereby reducing the quantization error of the resulting maps.
While there are previous publications that introduce one or two of the advantages pointed out above (e.g., Zhao et al. <cit.> describe a subspace constraint that is not limited to the dictionary items), our work incorporates all of them together in a convenient optimization framework.
Our reconstruction results are based on sampling with variable density spiral trajectories, using 5% and 9% sampling ratios, for retrospective and prospective experiments, respectively. We compare our results to the methods developed by Davies et al. <cit.> and Zhao <cit.>, and show that FLOR provides quantitative parameter maps with higher accuracy or correspondence to literature compared to those methods.
This paper is organized as follows. Section <ref> describes the MRF problem and provides a review of common reconstruction methods, followed by our low-rank based approach. Section <ref> compares our results to previous MRF algorithms, using retrospective and prospective under-sampled MRF data of a human subject. Sections <ref> and <ref> discuss experimental results using retrospective and prospective under-sampled MRF data, followed by conclusions.
§ METHOD
§.§ Problem formulation
MRF data consists of multiple frames, acquired in the image's conjugate Fourier domain (a.k.a k-space), where each frame results from different acquisition parameters. We stack the measurements into a Q × L matrix 𝐘, where L is the number of frames and Q is the number of k-space samples in each frame. Every column in 𝐘 is an under-sampled Fourier transform of an image frame, 𝐗_:,i:
𝐘=[F_u{𝐗_:,1},...,F_u{𝐗_:,L}]+𝐇
where F_u{·} denotes an under-sampled 2D Fourier transform and 𝐇 denotes a zero mean complex Gaussian noise. The row 𝐗_j,: represents the temporal signature of a single pixel (assumed to correspond to a single tissue). The signature depends on the tissue's relaxation times, T1 and T2, and its proton density (PD), grouped as a row vector:
Θ_1^j=[T1^j,T2^j,PD^j], 1≤ j≤ N.
Each column, 𝐗_:,i represents a response image acquired at a single time point with different acquisition parameters, stacked as a column vector:
Θ_2^i=[TR^i,TE^i,FA^i]^T, 1≤ i≤ L
where TR and TE are the repetition time and time to echo and FA represents
the flip angle of the RF pulse. Therefore, 𝐗_j,:=f(Θ_1^j,Θ_2), where f{·} represents the Bloch equations.
Note that we omit the off resonance parameter (which appeared in Θ_1 in the original MRF paper <cit.>), since the sequence used in our retrospective experiments is derived from the FISP sequence, which is insensitive to off resonance effects <cit.>.
The goal in MRF is to recover, from the measurements 𝐘, the imaging contrasts 𝐗 and the underlying quantitative parameters of each pixel defined in (<ref>), under the assumptions that every pixel in the image contains a single type of tissue and that Θ_2 is known.
Recovery is performed by defining a dictionary whose entries are the signals of M simulated tissues (represented as M different combinations of T1 and T2 relaxation times), generated using the Bloch equations when the length-L acquisition sequence defined in (<ref>) is used.
As a result, we obtain a dictionary 𝐃 of dimensions M × L (M>L as the number of simulated tissues is greater than the sequence length). The PD is not simulated in the dictionary, as it is the gain used to match the Bloch simulation performed on a single spin to the signal obtained from a pixel containing multiple spins. It can be easily determined after the T1 and T2 maps are known. After successful recovery of 𝐗, each row in 𝐗 is matched to a single row in the dictionary, and T1 and T2 are estimated as those used to generate the matched dictionary row. Each dictionary signature has its own unique T1 and T2 values stored in a look up table (LUT), represented as the matrix 𝐋𝐔𝐓 of dimensions M×2.
§.§ Previous Methods
The approach suggested in the original MRF paper <cit.> is described in Algorithm <ref>, and uses matched filtering to match dictionary
items to the acquired data. In the algorithm, F^H{·} is the 2D inverse NUFFT operator.
The parameters k_j are the matching dictionary indices, j is a spatial index and i is the temporal index, representing the ith frame in the acquisition.
The parameter maps are extracted from 𝐋𝐔𝐓, which holds the values of T1 and T2 for each k_j.
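For concreteness, the matching step of Algorithm 1 can be sketched in a
few lines of numpy. This is our own illustrative implementation; the
adjoint-NUFFT operator nufft_adjoint, assumed to return a length-N
vectorized image, is a placeholder for whichever NUFFT package is used.

import numpy as np

def mrf_matched_filter(Y, D, LUT, nufft_adjoint):
    """Conventional MRF: back-project each frame, then match per-pixel signatures.
    Y: Q x L k-space data, D: M x L dictionary, LUT: M x 2 table of (T1, T2)."""
    # Back-project the L undersampled frames to the image domain (N x L).
    X = np.stack([nufft_adjoint(Y[:, i]) for i in range(Y.shape[1])], axis=1)
    Dn = D / np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm signatures
    k = np.argmax(np.abs(X @ Dn.conj().T), axis=1)      # best atom per pixel
    T1, T2 = LUT[k, 0], LUT[k, 1]
    PD = np.abs(np.sum(X * Dn[k].conj(), axis=1))       # gain of the matched atom
    return T1, T2, PD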
This approach does not incorporate sparsity-based reconstruction, which has been proven very successful in MRI applications based on under-sampled data <cit.>.
Davies et al. <cit.> suggested a method incorporating sparsity of the data in the dictionary domain (i.e. each pixel is represented by at most one dictionary item), referred to as the BLoch response recovery via Iterative Projection (BLIP) algorithm. This approach is based on the Projected Landweber Algorithm (PLA) which is an extension of the popular iterative hard thresholding method.
BLIP (described here as Algorithm 2) consists of iterating between two main steps: a gradient step that enforces consistency with the measurements, and a projection that matches each row of 𝐗 to a single dictionary atom.
BLIP and a few other works that are based on it <cit.> do not incorporate the temporal similarity across time points, which is a fundamental characteristic of the MRF sequence. In addition, there is a high degree of similarity across signatures in 𝐃. As a result, the imaging contrasts matrix 𝐗 is typically a low-rank matrix.
Low-rank based modelling for dynamic MRI in general <cit.> and MRF in particular <cit.>-<cit.> has been proposed in the past.
To demonstrate the low-rank property of 𝐗, we used T1, T2 and PD maps of size 128×128 (acquired using DESPOT <cit.>) as an input to a simulation of a FISP sequence <cit.>, using L=500 TRs. In addition, we used random TR and FA values that have been used in previous publications in the field of MRF <cit.>. Note that the general assumption of 𝐗 being a low-rank matrix holds as long as temporal similarity exists between time-frames in 𝐗, and multiple voxels in the image belong to a single tissue, regardless of the specific acquisition parameters. Figure <ref> shows the singular values of 𝐗. It can be seen that 𝐗 is indeed low-rank, as most of the data is represented by the 15 largest singular values.
This low-rank property of 𝐗 can be exploited for improved reconstruction using the following optimization problem:
\underset{\mathbf{X},\,\mathbf{R}_1}{\text{minimize}}\ \ \frac{1}{2}\sum_i\left\|\mathbf{Y}_{:,i}-F_u\{\mathbf{X}_{:,i}\}\right\|_2^2
subject to \mathrm{rank}(\mathbf{X})\le r
\mathbf{X}=\mathbf{R}_1\mathbf{D}
where 𝐑_1 is a matrix that matches each pixel (𝐗_j,:) with the dictionary signatures. In many previous implementations of low-rank for MRF, a matching of a single dictionary atom to a single pixel is enforced, which means that the rows of 𝐑_1 are one-sparse vectors. The parameter r is the rank of the matrix, and is usually defined as a fixed pre-chosen parameter. Typically r is not known in advance and determining it upfront arise difficulty and may add error to the reconstruction scheme.
Zhao et al. <cit.> suggested an approximation for problem (<ref>), using an ADMM formulation <cit.> as follows:
\mathbf{X}^{n+1},\mathbf{R}_1^{n+1},\mathbf{Z}^{n+1}=\underset{\mathbf{X},\mathbf{R}_1,\mathbf{Z}}{\arg\min}\ \frac{1}{2}\sum_i\left\|\mathbf{Y}_{:,i}-F_u\{\mathbf{X}_{:,i}\}\right\|_2^2+\lambda\psi(\mathbf{Z})+\eta_1\left\|\mathbf{Q}^{n}-\mathbf{X}+\mathbf{R}_1\mathbf{D}\right\|_F^2+\eta_2\left\|\mathbf{W}^{n}-\mathbf{X}+\mathbf{Z}\right\|_F^2
\mathbf{Q}^{n+1}=\mathbf{Q}^{n}+\eta_1(\mathbf{X}^{n+1}-\mathbf{R}_1^{n+1}\mathbf{D})
\mathbf{W}^{n+1}=\mathbf{W}^{n}+\eta_2(\mathbf{X}^{n+1}-\mathbf{Z}^{n+1})
where the low rank constraint is applied via the function ψ(𝐙), defined as the sum of the singular values of 𝐙 raised to the power p (a Schatten quasi-norm with p<1).
The matrices 𝐐 and 𝐖 are the Lagrange multipliers.
The algorithm, coined Model Based Iterative Reconstruction MRF (MBIR-MRF) <cit.>, is described in Algorithm 3.
§.§ Proposed Method
The constraint presented in previous approaches <cit.>, that 𝐑_1 has one-sparse rows containing the corresponding PD values for each row of 𝐗, is justified by the assumption that only a single dictionary item should match an acquired signature. However, in practice we found that superior results (in terms of spatial resolution and correspondence to ground truth) are obtained by relaxing this constraint and allowing 𝐗 to be comprised of multiple dictionary elements at each step of the optimization algorithm, where at the final stage each voxel is matched to a single tissue.
This allows non-simulated signatures to be described by a linear combination of simulated ones. In addition, the relaxation enables formulating the problem as a convex problem, and saves the pattern recognition search time during reconstruction. The matching between 𝐗 and the dictionary, using a matched filter (MF) to extract the parameter maps, is done only at the final stage, after 𝐗 is fully recovered. For brevity we write the constraint 𝐗=𝐑𝐃 as 𝐗∈𝔻, where 𝔻={X:𝒩(X)⊇𝒩(D)},
and we consider the following regularized form:
\underset{\mathbf{X}\in\mathbb{D}}{\text{minimize}}\ \ \frac{1}{2}\sum_i\left\|\mathbf{Y}_{:,i}-F_u\{\mathbf{X}_{:,i}\}\right\|_2^2+\lambda\,\mathrm{rank}(\mathbf{X})
for some fixed regularization parameter λ.
Problem (<ref>) is not convex due to the rank constraint. We therefore relax this constraint by replacing the rank of 𝐗 with the nuclear norm ‖𝐗‖ _*, defined as the sum of the singular values of 𝐗 <cit.>. This results in the relaxed problem:
\underset{\mathbf{X}\in\mathbb{D}}{\text{minimize}}\ \ \frac{1}{2}\sum_i\left\|\mathbf{Y}_{:,i}-F_u\{\mathbf{X}_{:,i}\}\right\|_2^2+\lambda\left\|\mathbf{X}\right\|_*.
In order to solve (<ref>) we use the incremental subgradient proximal method <cit.>, as described below.
Due to the convex modelling of the problem, we also introduce an improvement that significantly reduces convergence time. The improvement uses the acceleration approach suggested by Nesterov <cit.> for minimizing a smooth convex function, and its extension to non-smooth composite functions by Beck and Teboulle <cit.>. Our final algorithm is detailed in Algorithm 4 and referred to as magnetic resonance Fingerprinting with LOw Rank (FLOR), where the parameter λ is chosen experimentally. Note that by setting λ=0, enforcing 𝐑 to have one-sparse rows, and eliminating the acceleration step, FLOR reduces to BLIP <cit.>.
Figure <ref> shows the reconstruction error of FLOR as the number of iterations varies with and without the acceleration step. Note that the CPU time of both algorithms is similar.
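A single (non-accelerated) FLOR iteration can be sketched as follows.
This is our own minimal illustration, assuming forward/adjoint sampling
operators A and AH acting on the N x L image stack, and a matrix Vh
holding orthonormal right singular vectors spanning the dictionary
subspace (truncating their number is only a computational convenience we
assume here). The defaults mirror the μ=1, λ=5 used in the
retrospective experiment below.

import numpy as np

def svt(Z, tau):
    """Singular value soft-thresholding: the prox of tau * nuclear norm."""
    U, s, Vh = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

def flor_step(X, Y, A, AH, Vh, mu=1.0, lam=5.0):
    G = X - mu * AH(A(X) - Y)       # gradient step on the data-consistency term
    G = (G @ Vh.conj().T) @ Vh      # project rows onto the dictionary subspace
    return svt(G, mu * lam)         # shrink singular values (nuclear-norm prox)

The final parameter maps are obtained, as described above, by a single
matched-filter pass on the recovered 𝐗.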
§.§ Possible extension
Conventional MRF algorithms use MF for the magnetic parameter extraction. MF introduces a quantization error since map values are continuous, as opposed to discrete dictionary values. A possible extension of FLOR is to add values to the dictionary by linear interpolation, in regions where a few candidates from the dictionary match a single signature from the data. We then select the dictionary signatures that exhibit a high correlation value (the ones above a certain threshold) and average their matching T1 and T2 values.
This improvement expands the possible solutions to include ones that do not exist in the dictionary, and therefore exhibits improved accuracy compared to the conventional matching. The major benefit from this extension is reduced quantization errors that arise from conventional MF used in MRF. This extension, coined FLOR II, is examined in the first part of our experimental results in the next section.
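A sketch of this soft matching for one recovered signature x (with
unit-norm dictionary Dn and LUT as before) might look as follows; the
relative threshold rho and the correlation weighting are our
illustrative choices, since the text above leaves the threshold
unspecified.

import numpy as np

def soft_match(x, Dn, LUT, rho=0.99):
    corr = np.abs(Dn.conj() @ x)          # correlation with each dictionary atom
    keep = corr >= rho * corr.max()       # near-optimal candidates only
    w = corr[keep] / corr[keep].sum()     # normalized correlation weights
    return w @ LUT[keep, 0], w @ LUT[keep, 1]   # interpolated (T1, T2)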
§ EXPERIMENTAL RESULTS
This section describes two MRI experiments that were carried out using brain scans of a healthy subject. The first experiment is based on well-known quantitative maps that were used, in a purely simulated environment, to generate an MRF experiment with retrospectively sampled data. While this experiment is a simulation based on real quantitative maps, it allows accurate comparison of the results of the different algorithms using a well-defined reference.
In the second experiment, we used prospectively sampled real MRF data that was used to generate the results in Ma et al. <cit.>. While this experiment lacks a gold standard for accurate error evaluation, it allows comparison between different algorithms in a realistic multi-coil acquisition. For prospective sampling, where no ground truth is available, we examined the performance of the various algorithms as a function of the total number of excitations, where correspondence to values provided in the literature for various brain tissues is used for validation. In both experiments, variable density spiral trajectories were used for sampling.
For quantitative error analysis, we calculated the normalized MSE (NMSE) between each quantitative map estimation and the reference map, defined as:
NMSE_i = ‖θ_i-θ̂_i‖_F^2 / ‖θ_i-(1/N)∑_j θ_i^j‖_F^2
where θ_i, θ̂_i represent a reference map (such as T1,T2 or PD) and its corresponding reconstructed map (respectively), N is the number of pixels in the map and j is a spatial index.
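In code, the NMSE of a reconstructed map reduces to a few lines; the sketch below assumes the reference and estimated maps are given as NumPy arrays of equal size.

import numpy as np

def nmse(theta_ref, theta_est):
    # squared error normalised by the variance of the reference map,
    # as in the definition above
    num = np.linalg.norm(theta_ref - theta_est) ** 2
    den = np.linalg.norm(theta_ref - theta_ref.mean()) ** 2
    return num / den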
In the first experiment, forward and inverse non-uniform Fourier transforms were applied using SPURS, a fast approach published recently <cit.>. For the second experiment, we used the NUFFT package <cit.>, for consistency with the reconstruction results of the original MRF paper <cit.>.
§.§ Experiment 1: Retrospective undersampling of real data
The data for this experiment was acquired with a GE Signa 3T HDXT scanner. The
procedures involving human subjects described in this experiment were approved by the Institutional Review Board of Tel-Aviv
Sourasky Medical Center, Israel. We generated our reference data by acquiring Fast Imaging Employing Steady-state Acquisition (FIESTA) and Spoiled Gradient Recalled Acquisition in Steady State (SPGR) images at 4 different flip angles (3^∘, 5^∘, 12^∘ and 20^∘), implementing the fast and well-known DESPOT1 and DESPOT2 <cit.> algorithms, with the improvements described in Liberman et al. <cit.>, to generate T1, T2 and PD quantitative maps, each of size 128 × 128 pixels. While the gold standard method for T1 measurement is inversion recovery spin echo with varying TIs, and for T2 measurement spin echo sequences with varying TEs, in this experiment DESPOT was used as a reference thanks to its availability and its relatively fast acquisition time. A FISP pulse sequence was applied to simulate the acquisition of the reference, with a constant TE of 2 ms, random TR values in the range of 11.5-14.5 ms, and a sinusoidal variation of the FA (RF pulses) in the range of 0-70 degrees <cit.>.
To simulate noisy undersampled MRF samples, we added complex Gaussian zero-mean noise to the k-space data to obtain an SNR of 67dB in the undersampled measurement domain. Data was then under-sampled to acquire only 876 k-space samples in each TR with spiral trajectories. In particular, we used 24 variable density spirals with inner region size of 20 and FOV of 24. In every time frame, each spiral is shifted by 15 degrees. Figure <ref> demonstrates the first spiral trajectory. We define the under-sampling ratio by the number of the acquired samples in the k-space domain divided by the number of pixels in the generated image. This leads to an undersampling ratio of ∼5% in this experiment. For comparison, the under-sampling ratio of the original MRF paper <cit.> is ∼9%, since for each single spiral 1450 data points were acquired.
We generated the dictionary using Bloch equations, simulating T1 values of [100:20:2000,
2300:300:5000] ms and T2 values of [20:5:100,110:10:200,300:200:1900] ms. This range covers the relaxation time values that can be found in a healthy brain scan <cit.>. The tuning parameters were experimentally set as μ=1 and λ=5, after λ was tested in the range between 0 and 30.
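The MATLAB-style ranges above translate directly into the following grid construction (a sketch; the Bloch simulation itself is omitted, and the restriction to physically admissible pairs with T1 > T2 is our assumption).

import numpy as np

t1_values = np.concatenate([np.arange(100, 2001, 20),
                            np.arange(2300, 5001, 300)])      # ms
t2_values = np.concatenate([np.arange(20, 101, 5),
                            np.arange(110, 201, 10),
                            np.arange(300, 1901, 200)])       # ms
# one dictionary atom per (T1, T2) pair; keep only T1 > T2
pairs = [(a, b) for a in t1_values for b in t2_values if a > b]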
Data was fed as an input to BLIP, MBIR-MRF and the improved FLOR algorithm (described as Algorithms 2, 3 above and Algorithm 4 with the additional extension of interpolating the parameter maps). In addition, we performed reconstruction using 100% of the data (without the addition of noise) via conventional MRF, for comparison purposes and to evaluate the error caused by the discretized dictionary. All the iterative algorithms were run until the difference between consecutive iterations was below the same threshold.
The MATLAB code for reproducing the experiment provided in this section can be found at: http://webee.technion.ac.il/Sites/People/YoninaEldar/software_det18.php. In this code, the spiral sampling trajectory design was based on Lee et al. <cit.>.
Figure <ref> shows the resulting maps for the recovery of T1, T2 and PD obtained with the various algorithms against the reference (left). The corresponding error maps of each method versus the reference are shown in Fig. <ref>. To allow detailed view of the reconstruction results for the reader, Fig. <ref> shows a zoomed region for each map.
It can be seen that both FLOR and MBIR-MRF outperform BLIP reconstruction results, when using 5% of sampled data by utilizing the low rank property. In addition, FLOR provides a lower error compared to MBIR-MRF. The details in the FLOR maps are comparable to those obtained by the original MRF algorithm using 100% of the noise-free data.
Due to the very low sampling ratio in our experiments (measured as the number of samples divided by the number of pixels in the image), conventional MRF using 5% of the data did not provide valuable reconstruction results and is therefore omitted in this analysis.
We next implemented the MF improvement described in Section II.D. The results are shown in Fig. <ref>, with corresponding error maps in Fig. <ref>. These figures compare the recovery maps of FLOR without (FLOR I) and with (FLOR II) the proposed improvement.
It can be seen that FLOR II improves the results of FLOR I and produces a smoother solution which better fits the reference maps.
§.§ Experiment 2: In vivo prospective sampling experiment
The experiment in this section was carried out using the data of the original MRF paper <cit.>,
which consisted of 48 spiral trajectories shifted by 7.5 degrees, where 1450 samples were acquired in each trajectory, leading to an undersampling ratio of 9%. The data was acquired on a 1.5-T whole body scanner (Espree, SIEMENS Healthcare) using a 32-channel head receiver coil.
Due to the lack of gold standard maps for this data, we are unable to provide quantitative error results (e.g. NMSE). Therefore, in this experiment we compare the various algorithms by comparing their reconstruction results using 400 TRs (representing 40% of scanning time) to quantitative values of brain tissues from the literature. Since the results obtained in the original MRF experiment (using 1000 TRs) mostly correspond to quantitative values from the literature, the maps generated by the original MRF algorithm using 1000 TRs are provided in Fig. <ref> for reference.
The results of the T1, T2 and PD maps for BLIP, MBIR-MRF and FLOR appear in Fig. <ref>. Since an IR-bSSFP sequence was used in this experiment, the off-resonance frequency has also been computed and is shown. We used 109 different values in the range between -250 and 240 Hz.
It can be seen that for T1, all iterative algorithms provide similar results, and T1 values of grey matter (GM), white matter (WM) and cerebrospinal fluid (CSF) regions correspond to similar values that appear in the literature (see Table 1 in Ma et al. <cit.>) and in Fig. <ref>. While the T2 results exhibit visible differences between the various methods, WM and GM values for all methods correspond to values that appear in the literature. However, both BLIP and MBIR-MRF exhibit T2 values for CSF that are lower than those reported in the literature. This can be seen in Fig. <ref>, where the color scale for T2 is adjusted to 500-2000ms (T2 values for CSF are around 2000ms). Shortened T2 values in CSF were also reported in the original MRF experiment with 1000 TRs (and were justified as out-of-plane flow in this 2D experiment). In our case, using the same acquired data, it can be seen in Fig. <ref> that FLOR provides CSF values that better correspond to literature values, when compared to the other methods.
§ DISCUSSION
§.§ Relation to previous works
Although works that exploit the low rank structure of MRF sequences have been published in the past by others and also by us <cit.>, our solution is unique mainly in the combination of convex modelling with the ability to obtain solutions whose quantitative values do not exist in the dictionary. Our solution is based on soft-thresholding the singular values <cit.>, which is mathematically justified in .
Moreover, we compare our algorithm to both CS-based and low-rank based methods for MRF and demonstrate superior results. While BLIP treats the original MRF problem as an ℓ_0 optimization problem, FLOR first solves the relaxed problem of (<ref>) and only then uses MF to extract the magnetic parameters. It leads to some beneficial properties such as convergence guarantees, and the ability to use the acceleration step as described in , which is also novel in MRF reconstruction methods.
§.§ Computational complexity
FLOR is divided into two main components: The first recovers the imaging contrasts, and the second extracts the parameter maps from the recovered contrasts.
The computational burden of FLOR lies in the low-rank projection step, or specifically, in the SVD calculation. This step exists in neither BLIP nor the original MRF reconstruction. However, there are efficient fast techniques to calculate the SVD <cit.>, as required by FLOR. Moreover, unlike BLIP and other low-rank based algorithms such as MBIR-MRF, FLOR does not require the pattern recognition calculation at every iteration.
Another time consuming step that exists in all algorithms is the non uniform Fourier transform. By using the acceleration step, FLOR reduces significantly the number of iterations required for convergence and therefore saves computational cost.
In addition, while previous implementations of CS-based reconstruction algorithms mainly use the inverse NUFFT (iNUFFT) algorithm, in our retrospective experiments we use SPURS <cit.>. Based on our observations, SPURS provides improved image reconstruction with the same computational complexity as iNUFFT.
§ CONCLUSIONS
We presented FLOR, a method for high quality reconstruction of quantitative MRI data using MRF, by utilizing the low-rank property of MRF data. Due to the fact that we exploit low-rank on top of the well known sparsity of MRF in the dictionary matching domain, we are able to obtain high quality reconstruction from highly under-sampled data. Our method is based on a convex minimization problem, leading to a solution in the dictionary subspace that overcomes its quantization error.
We provide results that are comparable to fully sampled MRF, using only 5% of the data in a simulation environment. In addition, comparison against CS-based and low-rank based methods for MRF shows the added value of our approach in generating quantitative maps with fewer artifacts. Our results also include real-data, in-vivo experiments that demonstrate the superiority of FLOR for realistic multi-coil data acquisition. Future work will examine more sophisticated patch-wise recoveries.
§ ACKNOWLEDGEMENTS
The authors wish to thank the Tel Aviv center for brain functions at Tel Aviv Sourasky Medical Center for providing the data required for experiment 1. We also wish to thank Dan Ma and Prof. Mark Griswold for providing the real data used in their experiments. This work was supported by the
Ministry of Science, by the ISF I-CORE joint research center of the
Technion and the Weizmann Institute, and by the European Union’s
Horizon 2020 research and innovation programme under grant agreement
No. 646804-ERC-COG-BNYQ. Assaf Tal acknowledges the support of the Monroy-Marks Career Development Fund, the Carolito Stiftung Fund, the Leona M. and Harry B. Helmsley Charitable Trust and the historic generosity of the Harold Perlman Family. The authors have no relevant conflicts of interest to disclose.
§ APPENDIX
The basic implementation of FLOR, as described in the paper, aims to solve the following optimization problem:
argmin_X∈𝔻 (1/2)∑_i ‖𝐘_:,i-F_u{𝐗_:,i}‖_2^2 + λ‖X‖_*
where F_u is the partial Fourier transform operator, X has dimensions N^2 × L and 𝔻={X:𝒩(X)⊇𝒩(D)}.
FLOR solves (<ref>) using the incremental proximal method <cit.>, which treats problems of the form:
argmin_X∈𝔻 {∑_i=1^m F_i(X)}
where F_i(X)=f_i(X)+h_i(X). The function f_i(X) is convex and non-differentiable, h_i(X) is a convex function and 𝔻 is a non-empty, closed, and convex subspace. The general step in solving (<ref>) is given by [27, (4.12)-(4.13)]:
Z^k=P_𝔻(X^k-μ_k g_i_k)
X^k+1 = argmin_X∈𝔻 f_i_k(X) + (1/2μ_k)‖X-Z^k‖_F^2
where g_i_k∈∂ h_i_k(X^k), μ_k is a positive step size, and P_𝔻 is the projection operator onto 𝔻 defined as
P_𝔻(X) = argmin_Z∈𝔻 ‖Z-X‖_F^2.
The optimization problem, defined in the update step of X^k+1, is referred to as the proximal gradient calculation of the non-differentiable f_i_k, under the constraint X∈𝔻.
Our problem in (<ref>) corresponds to m=1 in (<ref>) and
h(X) = (1/2)∑_i ‖𝐘_:,i-F_u{𝐗_:,i}‖_2^2 = (1/2)‖Y-F_u{X}‖_F^2
f(X) = λ‖X‖_*.
Therefore,
∂ h(X) = F_u^H{F_u{X}-Y},
and,
P_𝔻(X) =XD^†D=XP.
The solution of (<ref>) for f(X)=λ‖X‖_* without the constraint X∈𝔻, is the singular value soft-thresholding operator (SVT) <cit.> defined as:
SVT_μ_kλ(Z^k)=U_r[Σ_r-μ_kλI]_+V_r^H.
Here Σ_r is a diagonal matrix with the non-zero singular values of Z^k on its diagonal, U_r and V_r are the r left and right singular vectors of the SVD of Z^k, associated with the r non-zero singular values, and [x]_+=max(0,x).
In our case, since Z^k∈𝔻 (as follows from (<ref>)) and the SVT calculation keeps the operand in the same subspace, the constraint X∈𝔻 can be omitted. Therefore, (<ref>) reduces to
X^k+1=U_r[Σ_r-μ_kλI]_+V_r^H.
Combining (<ref>), (<ref>) and (<ref>), the incremental subgradient-proximal method for solving (<ref>) consists of two updates in each iteration:
Z^k=(X^k+μ_kF_u^H{Y-F_u{X^k}})P
X^k+1=U_r[Σ_r-μ_kλI]_+V_r^H.
This constitutes the core of Algorithm 4. In our framework, the step sizes are set to constant, μ_k=μ, and λ is chosen experimentally.
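For completeness, a minimal Python sketch of the two updates above, augmented with a FISTA-type acceleration step, is given below. Here A and AH stand for the column-wise undersampled Fourier operator F_u and its adjoint, and P = D^†D projects rows onto the dictionary subspace; stopping criteria and multi-coil handling are omitted, so this is an illustration of the structure rather than the authors' exact code.

import numpy as np

def svt(Z, tau):
    # singular value soft-thresholding: U [S - tau]_+ V^H
    U, s, Vh = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vh

def flor(Y, A, AH, P, mu, lam, n_iter=100):
    X = AH(Y) @ P                  # rough initial guess in the subspace
    V, t = X.copy(), 1.0
    for _ in range(n_iter):
        Z = (V + mu * AH(Y - A(V))) @ P      # gradient step + projection
        X_new = svt(Z, mu * lam)             # low-rank proximal step
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        V = X_new + ((t - 1) / t_new) * (X_new - X)   # Nesterov momentum
        X, t = X_new, t_new
    return X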
|
http://arxiv.org/abs/1701.07748v2 | 20170126155240 | Packing and covering odd cycles in cubic plane graphs with small faces | [
"Diego Nicodemos",
"Matěj Stehlík"
] | math.CO | [
"math.CO"
] |
Packing and covering odd cycles in cubic plane graphs with small faces
The first author was partially supported by CAPES and CNPq.
Colégio Pedro II, COPPE/Sistemas, Universidade Federal do Rio
de Janeiro, Brazil
nicodemos@cos.ufrj.br
The second author was partially supported by ANR project Stint (ANR-13-BS02-0007), ANR
project GATO (ANR-16-CE40-0009-01), and by LabEx PERSYVAL-Lab (ANR-11-LABX-0025).
Laboratoire G-SCOP, Univ. Grenoble Alpes, France
matej.stehlik@grenoble-inp.fr
We show that any 3-connected cubic plane graph on n vertices,
with all faces of size at most 6, can be made bipartite by deleting
no more than √((p+3t)n/5) edges, where p and t are the numbers
of pentagonal and triangular faces, respectively. In particular, any such
graph can be made bipartite by deleting at most √(12n/5) edges.
This bound is tight, and we characterise the extremal graphs. We deduce
tight lower bounds on the size of a maximum cut and a maximum independent
set for this class of graphs. This extends and sharpens the results of
Faria, Klein and Stehlík [SIAM J. Discrete Math. 26 (2012) 1458–1469].
§ INTRODUCTION
A set of edges intersecting every odd cycle in a graph is known as an
odd cycle (edge) transversal, or odd cycle cover, and the
minimum size of such a set is denoted by τ_. A set of edge-disjoint
odd cycles in a graph is called a packing of odd cycles, and the maximum
size of such a family is denoted by ν_. Clearly, τ_≥ν_.
Dejter and Neumann-Lara <cit.> and independently
Reed <cit.> showed that in general, τ_odd cannot be bounded by any function of ν_odd, i.e., they do not satisfy the
Erdős–Pósa property. However, for planar graphs, Král' and
Voss <cit.> proved the (tight) bound τ_odd ≤ 2ν_odd.
In this paper we focus on packing and covering of odd cycles in 3-connected
cubic plane graphs with all faces of size at most 6. Such graphs—and their
dual triangulations—are a very natural class to consider, as they correspond
to surfaces of genus 0 of non-negative curvature (see e.g. <cit.>).
A much-studied subclass of cubic plane graphs with all faces of size at most 6
is the class of fullerene graphs, which only have faces of size 5 and 6.
Faria, Klein and Stehlík <cit.> showed that any fullerene
graph on n vertices has an odd cycle transversal with no more than
√(12n/5) edges, and characterised the extremal graphs. Our main result
is the following extension and sharpening of their result to all 3-connected cubic
plane graphs with all faces of size at most 6.
Let G be a 3-connected cubic plane graph on n vertices with all faces of size at most
6, with p pentagonal and t triangular faces. Then
τ_odd(G) ≤ √((p+3t)n/5).
In particular, τ_odd(G) ≤ √(12n/5) always holds, with equality if and only if all faces have size 5 and 6, n=60k^2 for some k ∈ ℕ, and Aut(G) ≅ I_h.
If G is a fullerene graph, then t=0 and Euler's formula implies that p=12, so
Theorem <ref> does indeed generalise the result of Faria, Klein and Stehlík <cit.>.
We also remark that the smallest 3-connected cubic plane graph with all faces of size at most 6
achieving the bound τ_odd(G) = √(12n/5) in Theorem <ref> is the ubiquitous
buckminsterfullerene graph (on 60 vertices).
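As a quick sanity check of these numbers: for a fullerene graph we have t=0 and p=12, so on n=60k^2 vertices the bound evaluates to √(12·60k^2/5)=12k. The short Python snippet below reproduces this for the buckminsterfullerene graph and the next extremal case.

import math

def tau_odd_bound(n, p, t):
    # upper bound sqrt((p + 3t) n / 5) from the theorem above
    return math.sqrt((p + 3 * t) * n / 5)

print(tau_odd_bound(60, 12, 0))    # buckminsterfullerene (k=1): 12.0
print(tau_odd_bound(240, 12, 0))   # n = 60*2^2 (k=2): 24.0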
The rest of the paper is organised as follows. In Section <ref>,
we introduce the basic notation and terminology, as well as the key concepts from
combinatorial optimisation and topology. In Section <ref>, we introduce
the notions of patches and moats, and prove bounds on the area of moats. Then, in
Section <ref>, we use these bounds to
prove an upper bound on the maximum size of a packing of T-cuts in triangulations
of the sphere with maximum degree at most 6. Using a theorem of Seymour <cit.>,
we deduce, in Section <ref>, an upper bound on the minimum
size of a T-join in triangulations of the sphere with maximum degree at most 6,
and then dualise to complete the proof of Theorem <ref>. In
Section <ref>, we deduce lower bounds on the size of a maximum cut and
a maximum independent set in 3-connected cubic plane graphs with no faces of size
more than 6. Finally, in Section <ref>, we show why the condition
on the face size cannot be relaxed, and briefly discuss the special case when the graph
contains no pentagonal faces.
§ PRELIMINARIES
Most of our graph-theoretic terminology is standard and follows <cit.>.
All graphs are finite and simple, i.e., have no loops and parallel edges.
The degree of a vertex u in a graph G is denoted by d_G(u).
If all vertices in G have degree 3, then G is a cubic graph.
The set of edges in G with exactly one end vertex in X is denoted by δ_G(X).
A set C of edges is a cut of G if C=δ_G(X), for some X ⊆ V(G).
When there is no risk of ambiguity, we may omit the subscripts in the above notation.
The set of all automorphisms of a graph G forms a group, known as the automorphism group
Aut(G). The full icosahedral group I_h ≅ A_5 × C_2 is the group of
all symmetries (including reflections) of the regular icosahedron. The full tetrahedral
group T_d ≅ S_4 is the group of all symmetries (including reflections) of the regular
tetrahedron.
A polygonal surface K is a simply connected 2-manifold, possibly with a boundary,
which is obtained from a finite collection of disjoint simple polygons in ^2 by
identifying them along edges of equal length. We denote by |K| the union of all polygons in
K, and remark that |K| is a surface.
Based on this construction, K may be viewed as a graph embedded in the
surface |K|. Accordingly, we denote its set of vertices, edges, and faces
by V(K), E(K), and F(K), respectively. If every face of K is incident to three edges,
K is a triangulated surface, or a triangulation of |K|.
In this case, K can be viewed as a simplicial complex.
If K is a simplicial complex and X ⊆ V(K),
then K[X] is the subcomplex induced by X, and K∖ X is the subcomplex
obtained by deleting X and all incident simplices. If L is a subcomplex of K,
then we simply write K ∖ L instead of K ∖ V(L).
If K is a graph embedded in a surface |K| without boundary, the dual graph
K^* is the graph with vertex set F(K), such that fg ∈ E(K^*) if and only if f and
g share an edge in K. The size of a face f ∈ F(K) is defined as the number
of edges on its boundary walk, and is denoted by d_K(f). Note that d_K(f)=d_K^*(f^*).
Any polygonal surface homeomorphic to a sphere corresponds to a plane
graph via the stereographic projection. Therefore, terms such as `plane triangulation'
and `triangulation of the sphere' can be used interchangeably.
We shall make the convention to use the term `cubic plane graphs' because it is so
widespread, but refer to the dual graphs as `triangulations of the sphere' because
it reflects better our geometric viewpoint.
Given a polygonal surface K, the boundary ∂ K is the set of all
edges in K which are not incident to two triangles; the number of edges in the
boundary is denoted by |∂ K|. With a slight abuse of notation, ∂ K will also denote
the set of vertices incident to edges in ∂ K. The set of interior vertices
is defined as int(K)=V(K)∖∂ K.
Given a triangulated surface K, we define ‖K‖ to be the number of faces in K, and the combinatorial curvature of K as ∑_u ∈ int(K)(6-d_K(u)).
Recall that the Euler characteristic
χ(K) of a polygonal surface K is equal to |V(K)|-|E(K)|+|F(K)|. It can be shown
that χ is a topological invariant: it only depends on the surface |K|, not on the
polygonal decomposition of K. If X is any contractible space, then χ(X)=1, and
if S^2 is the standard 2-dimensional sphere, then χ(S^2)=2. The following lemma
is an easy consequence of Euler's formula and double counting, and we leave its verification
to the reader.
Let K be a triangulated surface with (a possibly empty) boundary ∂ K. Then
∑_v ∈ int(K)(6-d(v))+∑_v ∈∂ K(4-d(v))=6χ(K).
We remark that, if we multiply both sides of the equation by π/3, we obtain a discrete
version of the Gauss–Bonnet theorem (see e.g. <cit.>), where the curvature is concentrated at the vertices.
In order to prove Theorem <ref>, it is more convenient to work with the dual
graphs, which are characterised by the following simple lemma. The proof is an easy
exercise, which we leave to the reader.
If G is a 3-connected simple cubic plane graph with all faces of size at most 6, then the dual
graph G^* is a simple triangulation of the sphere with all vertices of degree at
least 3 and at most 6.
We will use the following
important concept from combinatorial optimisation. Given a graph G=(V,E) with a distinguished
set T of vertices of even cardinality, a T-join of G is a subset J ⊆ E
such that T is equal to the set of odd-degree vertices in (V,J). The minimum size of a
T-join of G is denoted by τ(G,T). When T is the set of odd-degree vertices of G,
a T-join is known as a postman set. A T-cut is an edge cut δ(X) such
that |T∩ X| is odd. A packing of T-cuts is a disjoint collection
δ(ℱ)={δ(X) | X ∈ℱ} of T-cuts of G; the maximum
size of a packing of T-cuts is denoted by ν(G,T).
A family of sets ℱ is said to be laminar if, for every pair
X,Y ∈ℱ, either X ⊆ Y, Y ⊆ X, or X ∩ Y = ∅.
A T-cut δ(X) is inclusion-wise minimal if no T-cut is properly contained
in δ(X). For more information on T-joins
and T-cuts, the reader is referred to <cit.>.
§ PATCHES AND MOATS
From now on assume that K is a triangulation of the sphere with all vertices of degree at most 6.
We define a subcomplex L ⊆ K to be a patch if in the dual complex K^*, the faces
corresponding to V(L) form a subcomplex homeomorphic to a disc. (Equivalently, one could say
that L ⊆ K is a patch if L is an induced, contractible subcomplex of K.)
A patch L⊆ K such that c=∑_u ∈ V(L)(6-d_K(u)) is called a c-patch.
We remark that a c-patch L has combinatorial curvature c if and only if all vertices in the
boundary ∂ L have degree 6 in K.
If u ∈ V(K) has degree 6-c, and the set X of vertices at distance at most r from u
contains only vertices of degree 6, then the c-patch K[{u}∪ X] is denoted by D_r(c).
The subcomplex of the dual complex K^* formed by the faces corresponding to V(D_r(c)) is denoted by D^*_r(c);
see Figure <ref>.
The following isoperimetric inequality follows from the work of
Justus <cit.>[Gunnar Brinkmann <cit.>
has pointed out an error in the statement and proof of <cit.>
on which <cit.> is based, but has sketched a different way to prove
<cit.>.].
Let K^* be a polygonal surface homeomorphic to a disc, with all internal vertices of
degree 3 and with n faces, all of size at most 6. Let c=∑_f ∈ F(K^*)(6-d(f)),
and suppose that c ≤ 5. Then
|∂ K^*| ≥√(8(6-c)(n-1)+(6-c)^2).
Equality holds if K^* ≅ D^*_r(c), for some integer r ≥ 0, and only
if at most one face in K^* has size less than 6.
The minimum possible values of |∂ K^*| are given in <cit.>,
for all possible numbers of hexagonal, pentagonal, square, and triangular faces.
In each case, our bound is satisfied. Moreover, it can be checked that equality
holds only if at most one face in K^* has size less than 6. Finally, if
K^* ≅ D^*_r(c), then it can be shown that |∂ K^*|=(6-c)(2r+1) and
n-1=(6-c)r(r+1)/2. Hence, |∂ K^*| = √(8(6-c)(n-1)+(6-c)^2).
We can use Lemma <ref> to deduce the following isoperimetric inequality
for triangulations. Certain special cases of the inequality were already proved by
Justus <cit.>.
Let K be a triangulation of the sphere with all vertices of degree at most 6,
and let L ⊆ K be a patch of combinatorial curvature c ≤ 5. Then
|∂ L| ≥ √((6-c)‖L‖).
Equality holds if L ≅ D_r(c), for some integer r ≥ 0, and only
if at most one vertex in L has degree less than 6.
Put n=|V(L)|, and let L^* be the subcomplex of K^* formed by the faces corresponding to V(L). By Lemma <ref>,
|∂ L^*| ≥√(8(6-c)(n-1)+(6-c)^2).
Moreover, the following two equalities were shown by Justus <cit.>
2(n-1) = ‖L‖+|∂ L|,
|∂ L|^2 = (1/4)|∂ L^*|^2-(6-c)|∂ L|-(1/4)(6-c)^2.
So, combining (<ref>), (<ref>) and (<ref>) gives
|∂ L|^2 ≥ (6-c)‖L‖.
Equality holds in (<ref>) if and only if equality holds in (<ref>). The
latter is true only if at most one face in L^* has size less than 6, or equivalently,
only if at most one vertex in L has degree less than 6. For the final
part, it is enough to note that if L ≅ D_r(c), then L^* ≅ D^*_r(c), so
equality holds in (<ref>) and therefore in (<ref>).
Let L ⊆ K be a patch. A moat of width 1 in K surrounding L is the set M^1(L) of all the faces in F(K)∖ F(L) with at least one vertex in V(L). More generally, we can define a moat of width w in K surrounding L recursively as M^w(L)=M^w-1(L) ∪ M^1(M^w-1(L)∪ L). With a slight abuse of notation, M^w(L) will also denote the subcomplex of K formed by the faces in M^w(L). If L is a c-patch, then M^w(L) is a c-moat of width w surrounding L. See Figure <ref> for an example of a moat.
Under certain conditions, the area of a c-moat M^w(L) can be bounded in terms of c, w, and ‖L‖.
Let K be a triangulation of the sphere with maximum degree at most 6, and suppose L ⊆ K is a c-patch, for some 0<c<6.
If L∪M^i(L) is a c-patch, for every 0≤ i ≤ w-1, then
‖M^w(L)‖ ≥ (6-c)w^2+2w√((6-c)‖L‖).
Equality holds if L ≅ D_r(c), for some integer r ≥ 0, and only if at most one vertex in L has degree less than 6.
As L is contractible, its Euler characteristic is χ(L)=1.
We have
c = ∑_u ∈ V(L)(6-d_K(u))
= ∑_u ∈ int(L)(6-d_L(u))+∑_u ∈∂ L(6-d_L(u))-‖M^1(L)‖
= ∑_u ∈ int(L)(6-d_L(u))+∑_u ∈∂ L(4-d_L(u))+2|∂ L|-‖M^1(L)‖.
Hence, by Lemma <ref>,
2|∂ L|+6-c = ‖M^1(L)‖.
The dual complex K^* is homeomorphic to the sphere and the subcomplex L^*
formed by the faces corresponding to V(L) is homeomorphic to a disc, so by the
Jordan–Schoenflies theorem, the subcomplex formed by the faces corresponding
to V(K)∖ V(L) is also homeomorphic to a disc. Hence, K ∖ L is
also a patch. Moreover, K has Euler characteristic χ(K)=2, so by
Lemma <ref>, ∑_u ∈ V(K)(6-d(u))=12. Therefore,
∑_u ∈ V(K∖ L)(6-d_K(u))=12-c, i.e., K ∖ L is a
(12-c)-patch. Applying (<ref>) to L and to
K∖ L,
2|∂ L|+6-c = ‖M^1(L)‖
= ‖M^1(K∖ L)‖
= 2|∂(K∖ L)|+6-(12-c).
Hence, |∂(L ∪ M^1(L))|=|∂(K∖ L)|=|∂ L|+6-c, so by induction, and the fact that L ∪ M^i(L) is a patch for all 0 ≤ i ≤ w-1,
|∂(L ∪ M^i(L))|=|∂ L|+(6-c)i.
By (<ref>) and (<ref>),
‖M^1(L ∪ M^i(L))‖ = 2|∂(L ∪ M^i(L))|+6-c
= 2(|∂ L|+(6-c)i)+6-c
= 2|∂ L|+(6-c)(2i+1),
so the area of M^w(L) is
‖M^w(L)‖ = ∑_i=0^w-1 ‖M^1(L ∪ M^i(L))‖
= ∑_i=0^w-1(2|∂ L|+(6-c)(2i+1))
= 2w|∂ L|+(6-c)w^2.
The combinatorial curvature of L is at most c, so by Lemma <ref>,
|∂ L| ≥ √((6-c)‖L‖),
with equality if L ≅ D_r(c), for some integer r ≥ 0, and only
if at most one vertex in L has degree less than 6.
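The telescoping sum at the end of the proof is elementary but easy to get wrong; the following Python check verifies the identity ∑_i=0^w-1 (2b+(6-c)(2i+1)) = 2wb+(6-c)w^2 over a range of boundary lengths b, widths w and curvatures c.

for c in (1, 3, 5):
    for b in range(31):
        for w in range(1, 21):
            lhs = sum(2 * b + (6 - c) * (2 * i + 1) for i in range(w))
            assert lhs == 2 * w * b + (6 - c) * w * w   # moat-area identity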
§ PACKING ODD CUTS IN TRIANGULATIONS OF THE SPHERE WITH MAXIMUM DEGREE AT MOST 6
We now relate certain special types of packings of T-cuts to packings of
1-, 3- and 5-moats.
Let K be a triangulation of the sphere with all vertices of degree at most 6,
and let T be the set of odd-degree vertices in K. There exists a family
ℱ on V(K) and a vector w ∈ ℕ^|ℱ| with the following
properties.
* ℳ=⋃_X ∈ℱ M_K^w_X(X) is a packing of moats in K;
* The total width of ℳ is ∑_X ∈ℱw_X=ν(K,T);
* For every X ∈ℱ, the subcomplex K[X] is a patch;
* Every M^w_X(X) ∈ℳ is a 1-, 3-, or 5-moat in K;
* If X is an inclusion-wise minimal element in ℱ, then |X|=1;
* ℱ is laminar.
Consider a packing δ(ℱ') of inclusion-wise minimal T-cuts in K of size ν(K,T). Note that ∑_u ∈ X(6-d_K(u)) is odd, for every X ∈ℱ'. Since ∑_u ∈ V(K)(6-d_K(u))=12 and δ(X)=δ(V(K)∖ X), we can assume that ∑_u ∈ X(6-d_K(u))≤ 5; otherwise we could replace X by V(K)∖ X in δ(ℱ'). Finally, we can also assume that, subject to the above conditions, ℱ' minimises ∑_X ∈ℱ' |X|.
We remark that ℱ' is a laminar family. Indeed, suppose that X,Y ∈ℱ',
X ∩ Y ≠∅, X ⊈Y and Y ⊈X. Then
^1(X)∩^1(Y)≠∅, so there is a face {u,v,w} of K in
^1(X)∩^1(Y). Since
|δ(X)∩{uv,uw,vw}|=|δ(Y)∩{uv,uw,vw}| = 2,
it follows that δ(X)∩δ(Y)≠∅, contradicting the fact that
ℱ' is a packing of T-cuts. Hence, ℱ' is laminar.
We summarise the properties of the family ℱ' below.
* δ(ℱ') is a packing of T-cuts;
* |ℱ'|=ν(K,T);
* δ(X) is an inclusion-wise minimal cut,
for every X ∈ℱ';
* ∑_u ∈ X(6-d_K(u)) ∈{1,3,5} for all X ∈ℱ';
* Subject to <ref>–<ref>, ℱ'
minimises ∑_X ∈ℱ' |X|;
* ℱ' is laminar.
We let ℱ be the subfamily of ℱ' consisting of the
elements X ∈ℱ' such that
∑_u ∈ Y(6-d_K(u))<∑_u ∈ X(6-d_K(u)), for every Y ∈ℱ' such that Y ⊊ X.
For each X ∈ℱ, let
ℱ'_X={Y ∈ℱ' : X ⊆ Y, ∑_u ∈ Y(6-d_K(u))=∑_u ∈ X(6-d_K(u))},
and let w_X=|ℱ'_X|.
To prove <ref>, we use an argument very similar to the one we used to
prove <ref>.
Clearly, for every X ∈ℱ, M^w_X(X)=⋃_Y ∈ℱ'_X M^1(Y) is a moat around X of width w_X. Let X,Y ∈ℱ, and suppose that M^w_X(X) ∩ M^w_Y(Y) ≠∅. Then there exists a face
{u,v,w}∈ F(K) and sets X',Y' ∈ℱ' such that X ⊆ X',
Y ⊆ Y', and
|δ(X')∩{uv,uw,vw}|=|δ(Y')∩{uv,uw,vw}| = 2.
But then δ(X') ∩δ(Y') ≠∅, so by <ref>,
X'=Y'. Hence, by the construction of ℱ, X=Y. This proves <ref>.
To prove <ref>, it suffices to note that ∑_X ∈ℱw_X=|ℱ'|=ν(K,T)
by <ref>.
The property <ref> follows immediately from <ref>;
indeed, since δ(X) is
an inclusion-wise minimal cut, the dual edges form a cycle, so by the
Jordan–Schoenflies theorem, the subcomplex of K^* formed by the faces in
X is homeomorphic to a disc, so K[X] is a patch.
Since ℱ⊆ℱ', <ref> follows immediately
from <ref> and <ref> follows immediately from <ref>.
To prove <ref>, let X be an inclusion-wise minimal element
of ℱ. By the definition of ℱ, X is also an inclusion-wise
minimal element of ℱ'. Since ∑_u ∈ Xd(u) is odd, at least one
vertex in X has odd degree. If |X|>1, let u be a vertex of odd degree in X,
and let ℱ”=(ℱ'∖ X) ∪{u}. Then ℱ” satisfies
<ref>–<ref>, but ∑_X ∈ℱ” |X|<∑_X ∈ℱ' |X|,
contradicting <ref>.
Lemmas <ref> and <ref> can be used to prove
the following upper bound on the maximum size of a packing of odd cuts in
spherical triangulations with all vertices of degree at most 6,
which may be of independent interest. By taking the planar dual, we also get an upper bound on ν_odd for the class of 3-connected cubic plane
graphs with all faces of size at most 6.
Let K be a triangulation of the sphere with maximum degree
at most 6. If T is the set of odd-degree vertices of K, then
ν(K,T) ≤ √((1/5)∑_u ∈ T(6-d(u))‖K‖).
In particular, ν(K,T) ≤ √(12‖K‖/5) always holds, with equality if and only if all vertices have degree 5 and 6, ‖K‖=60k^2 for some k ∈ ℕ, and Aut(K) ≅ I_h.
Let ℳ=⋃_X ∈ℱ M_K^w_X(X) be a packing of 1-, 3- and 5-moats in K of total width ∑_X ∈ℱ w_X=ν(K,T), as guaranteed by Lemma <ref>. Let m_c be the total area of the c-moats of ℳ, where c∈{1,3,5}. Define the incidence vectors r, s, t ∈ℝ^|T| as follows: for every u ∈ T, let r_u, s_u, t_u be the width of the 1-moat, 3-moat and 5-moat surrounding u, respectively.
Define the inner product ⟨·,·⟩ on ℝ^|T| by ⟨ x,y ⟩ = ∑_u∈ T(6-d(u)) x_u y_u and the norm ‖·‖ by ‖x‖^2=⟨ x,x ⟩. With this inner product, the total width of the 1-, 3- and 5-moats in ℳ can be expressed as ⟨ r,1⟩, (1/3)⟨ s,1⟩, and (1/5)⟨ t,1⟩, respectively. Therefore,
ν(K,T)=∑_X ∈ℱ w_X=⟨ r+(1/3)s+(1/5)t,1⟩.
To prove the inequality in Theorem <ref>, we compute lower bounds on
m_1, m_3 and m_5 in terms of the vectors r, s and t, and then use
the fact that the moats are disjoint, so the sum m_1+m_3+m_5 cannot exceed ‖K‖, the number of faces of K. Simplifying the inequality gives the desired bound.
To bound m_1, recall that by property <ref> of Lemma <ref>, every 1-moat in ℳ is of the form M_K^r_u(u), where u is a 5-vertex in K. By Lemma <ref>,
‖M_K^r_u(u)‖ = (6-(6-d(u)))r_u^2 = 5r_u^2,
and summing over all 1-moats gives the equality
m_1 = 5 ∑_u∈ T(6-d(u))r_u^2 = 5‖r‖^2.
To bound m_3, let M_K^s_u(X) be a non-empty 3-moat in ℳ, for some u∈ T ∩ X. By the laminarity of ℳ, the 3-patch K[X] contains the (possibly empty) 1-moats M_K^r_u(u), for all 5-vertices u ∈ T ∩ X. All the moats are pairwise disjoint, so by (<ref>) and the Cauchy–Schwarz inequality,
‖K[X]‖ ≥ ∑_u ∈ T ∩ X ‖M_K^r_u(u)‖
≥ 5 ∑_u∈ T ∩ X(6-d(u))r_u^2
≥ 5(∑_u∈ T ∩ X(6-d(u))r_u)^2 / ∑_u ∈ T ∩ X (6-d(u))
≥ (5/3)(∑_u∈ T ∩ X(6-d(u))r_u)^2.
Hence, by Lemma <ref>,
‖M_K^s_u(X)‖ ≥ 3s_u^2+2s_u√(3‖K[X]‖)
≥ ∑_u∈ T ∩ X(6-d(u))s_u^2+2√(5)∑_u ∈ T ∩ X(6-d(u))r_u s_u.
Summing over all 3-moats gives the inequality
m_3 ≥ ‖s‖^2+2√(5)⟨ r,s ⟩.
To bound m_5, let M_K^t_u(Y) be a non-empty 5-moat in ℳ, for some u∈ T∩ Y. By the laminarity of ℳ, the 5-patch K[Y] contains at most one non-empty 3-moat M_K^s_u(X) of ℳ. All the moats are pairwise disjoint, so by (<ref>), (<ref>) and the Cauchy–Schwarz inequality,
‖K[Y]‖ ≥ ∑_u∈ T ∩ Y ‖M_K^r_u(u)‖ + ∑_u∈ T ∩ Y ‖M_K^s_u(X)‖
≥ 5 ∑_u∈ T ∩ Y(6-d(u))r_u^2+∑_u ∈ T ∩ Y(6-d(u))(2√(5)r_u s_u+s_u^2)
= 5 ∑_u∈ T ∩ Y(6-d(u))(r_u+(1/√5)s_u)^2
≥ 5(∑_u ∈ T ∩ Y(6-d(u))(r_u+(1/√5)s_u))^2 / ∑_u ∈ T ∩ Y (6-d(u))
= (∑_u ∈ T ∩ Y(6-d(u))(r_u+(1/√5)s_u))^2.
Using Lemma <ref>,
‖M_K^t_u(Y)‖ ≥ t_u^2+2t_u√(‖K[Y]‖)
≥ (1/5)∑_u ∈ T ∩ Y(6-d(u))t_u^2+2t_u∑_u ∈ T ∩ Y(6-d(u))(r_u+(1/√5)s_u)
= ∑_u ∈ T ∩ Y(6-d(u))((1/5)t_u^2+2r_u t_u+(2/√5)s_u t_u),
with equality only if t_u=0, because K is a simple triangulation of the sphere, and as such has no vertex of degree 1. Summing over all 5-moats gives the inequality
m_5 ≥ (1/5)‖t‖^2+2⟨ r,t ⟩+(2/√5)⟨ s,t ⟩,
with equality only if t=0.
The moats are disjoint, so by inequalities (<ref>), (<ref>) and (<ref>),
‖K‖ ≥ m_1+m_3+m_5
≥ 5‖r‖^2 + ‖s‖^2 + (1/5)‖t‖^2 + 2√(5)⟨ r,s ⟩ + 2⟨ r,t ⟩ + (2/√5)⟨ s,t ⟩
= ‖√(5)r + s + (1/√5)t‖^2.
Hence, by the Cauchy–Schwarz inequality and (<ref>),
√((1/5)∑_u ∈ T(6-d(u))‖K‖) ≥ √(∑_u ∈ T(6-d(u)))·‖r + (1/√5)s + (1/5)t‖
≥ ⟨ r + (1/√5)s + (1/5)t,1⟩
≥ ⟨ r + (1/3)s + (1/5)t,1⟩
= ν(K,T).
This completes the proof of the first part of Theorem <ref>.
To prove the inequality ν(K,T)≤√(12‖K‖/5), it suffices to observe that ∑_u∈ T(6-d(u))≤ 12 by Lemma <ref>. Now suppose that ν(K,T)=√(12‖K‖/5). By Lemma <ref>, there exists a packing ℳ=⋃_X ∈ℱ M_K^w_X(X) of 1-, 3- and 5-moats in K of total width √(12‖K‖/5). Then ∑_u ∈ T(6-d(u))=12, i.e., all vertices of degree less than 6 have odd degree, namely, 3 or 5. Equality holds in (<ref>) and in (<ref>), so t=s=0. Furthermore, equality holds in (<ref>), so there is a natural number k ≥ 1 such that r_u=k for every u ∈ T. Therefore, every u ∈ T has degree 5, so |T|=12. By Lemma <ref> each moat M_K^k(u) ∈ℳ has area 5k^2, so ‖K‖=12·5k^2=60k^2.
Hence, K is the union of twelve face-disjoint 1-moats M^k(u), for u ∈ T (see Figure <ref>). Each M^k(u) can be identified with a face of a regular dodecahedron, which shows that Aut(K) contains a subgroup isomorphic to I_h. On the other hand, the dual graph of K is a fullerene graph, and it can be shown (see e.g. <cit.>) that the largest possible automorphism group of a fullerene graph is isomorphic to I_h. Hence, Aut(K) ≅ I_h.
Conversely, suppose K is a triangulation of the sphere with ‖K‖=60k^2, all vertices of degree 5 and 6, and Aut(K) ≅ I_h. Then it can be
shown (see <cit.>) that K can be constructed by pasting
triangular regions of the (infinite) 6-regular triangulation of the plane
into the faces of a regular icosahedron (this is sometimes known in the literature
as the Goldberg–Coxeter construction). The construction is uniquely
determined by a 2-dimensional vector (i,j) ∈ ℕ^2, known as the Goldberg–Coxeter vector (see Figure <ref>). Since Aut(K)≅ I_h, we must have j=0 or j=i. The area of K is given by the formula ‖K‖ = 20(i^2 + ij + j^2). The condition ‖K‖=60k^2 implies that the Goldberg–Coxeter vector of K is (k,k), which means that the distance between any pair of 5-vertices in K is at least 2k. Therefore, ⋃_u ∈ T M^k(u) is a packing of 1-moats of total width 12k=12√(‖K‖/60)=√(12‖K‖/5), so ν(K,T)≥√(12‖K‖/5).
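The arithmetic behind the Goldberg–Coxeter argument can be checked directly; the Python lines below tabulate the areas 20(i^2+ij+j^2) for the two I_h-symmetric families (i,0) and (i,i), illustrating that only the (k,k) family yields ‖K‖=60k^2 (the (i,0) family would require i^2=3k^2, which has no integer solutions).

for i in range(1, 6):
    area_i0 = 20 * i * i    # family (i,0): 20 i^2
    area_ii = 60 * i * i    # family (i,i): 20 * 3 i^2 = 60 i^2
    print((i, 0), area_i0, (i, i), area_ii)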
§ PROOF OF THEOREM <REF>
Given a triangulation K of the sphere, we construct the refinement K̂
as follows. First, we subdivide each edge of K, that is, we replace it by an
internally disjoint path of length 2, and then we add three new edges inside every face,
incident to the three vertices of degree 2. (For an illustration, see Figure <ref>.)
Therefore, every face of K is divided into four faces of K̂.
Observe that all the vertices in V(K̂)∖ V(K) have degree 6 in
K̂, so if T is the set of odd-degree vertices of K, then T
is also the set of odd-degree vertices of K̂.
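The refinement is straightforward to implement; the following Python sketch refines a triangulation given as a list of triangles over hashable vertex labels, creating one midpoint vertex per edge and splitting each face into four.

def refine(triangles):
    # midpoint vertex for each edge, created on demand and shared by
    # the two faces incident to that edge
    def mid(u, v):
        return ('m', min(u, v), max(u, v))
    refined = []
    for (a, b, c) in triangles:
        ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
        refined += [(a, ab, ca), (b, bc, ab), (c, ca, bc), (ab, bc, ca)]
    return refined   # every face of K becomes four faces of the refinement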
The following lemma was proved in <cit.> using a theorem of
Seymour <cit.>.
If K is a triangulation of the sphere and T ⊆ V(K) is a subset
of even cardinality, then τ(K,T)=(1/2)ν(K̂,T).
Theorem <ref> and Lemma <ref> immediately
give the following tight upper bound on the minimum size of a postman set in a plane
triangulation with maximum degree 6.
Let K be a triangulation of the sphere with maximum degree at most 6. If T is the set of odd-degree vertices of K, then
τ(K,T) ≤ √((1/5)∑_u ∈ T(6-d(u))‖K‖).
In particular, τ(K,T) ≤ √(12‖K‖/5) always holds, with equality if and only if all vertices have degree 5 and 6, ‖K‖=60k^2 for some k ∈ ℕ, and Aut(K) ≅ I_h.
Let K be a triangulation of the sphere with maximum degree at most 6, and let K̂ be its refinement; observe that ‖K̂‖=4‖K‖. By Lemma <ref> and Theorem <ref>,
τ(K,T)=(1/2)ν(K̂,T) ≤ √((1/5)∑_u∈ T(6-d(u))‖K‖) ≤ √(12‖K‖/5),
as required.
If τ(K,T)=√(12‖K‖/5), then ν(K̂,T) = √(12·4‖K‖/5), so by the second part of Theorem <ref>, all vertices in K̂ have degree 5 and 6, and this must clearly hold in K. Furthermore, 4‖K‖=60k̂^2, for some k̂∈ ℕ, so ‖K‖=15k̂^2. Since ‖K‖ is even, k̂=2k, for some k ∈ ℕ, so ‖K‖=60k^2. We also have Aut(K)≅Aut(K̂) ≅ I_h.
Conversely, suppose K is a triangulation of the sphere with ‖K‖=60k^2, all vertices of degree 5 and 6, and Aut(K)≅ I_h. By Theorem <ref>, ν(K,T)=√(12‖K‖/5), so τ(K,T)=√(12‖K‖/5).
Theorem <ref> follows from Theorem <ref> by taking the planar
dual.
Let G be a 3-connected cubic plane graph on n vertices with all faces of
size at most 6, with p pentagonal and t triangular faces.
By Lemma <ref>, the dual graph G^* is a plane triangulation
with ‖G^*‖=n and all vertices of degree at most 6, having exactly p vertices
of degree 5 and t vertices of degree 3. Let T be the set of vertices
of odd degree, J^* a minimum T-join of G^*, and J the set of edges of G
which correspond to J^*. Since G^*∖ J^* has no odd-degree vertices,
G∖ J=(G^*∖ J^*)^* has no odd faces, so is bipartite. By Theorem <ref>,
|J|=|J^*| ≤ √((1/5)∑_u∈ T(6-d(u))n) = √((p+3t)n/5).
In particular, |J|≤√(12n/5), with equality if and only if all faces have size 5 and 6, n=60k^2 for some k ∈ ℕ, and Aut(G) ≅ I_h.
§ CONSEQUENCES FOR MAX-CUT AND INDEPENDENCE NUMBER
A classic problem in combinatorial optimisation, known as max-cut,
asks for the maximum size of an edge-cut in a given graph. This problem is
known to be NP-hard, even when restricted to triangle-free cubic graphs <cit.>.
However, for the class of planar graphs, the problem
can be solved in polynomial time using standard tools from combinatorial
optimisation (namely T-joins), as observed by Hadlock <cit.>.
Cui and Wang <cit.> proved that every planar, cubic graph on
n vertices has a cut of size at least 39n/32-9/16, improving an earlier
bound of Thomassen <cit.>. However, when the face size is bounded by
6, we get the following improved bound.
If G is a 3-connected cubic plane graph on n vertices with all faces of size at
most 6, with p pentagonal and t triangular faces, then G has a
cut of size at least
3n/2-√((p+3t)n/5).
In particular, G has a cut of size at least 3n/2-√(12n/5), with
equality if and only if all faces have size 5 and 6, n=60k^2 for some
k ∈ ℕ, and Aut(G) ≅ I_h.
Let G be a 3-connected cubic plane graph on n vertices with all faces
of size at most 6. Let J ⊆ E(G) be an odd cycle transversal, and
let X be a colour class of G∖ J. Then
|δ_G(X)| = 3n/2-|J|. By Theorem <ref>, we can always find J
such that |J|≤√(12n/5), with equality if and only if all faces have
size 5 and 6, n=60k^2 for some k ∈ ℕ, and Aut(G) ≅ I_h.
A set of vertices in a graph is independent if there is no edge between
any of its vertices, and the maximum size of an independent set in G is the
independence number α(G). Heckman and Thomas <cit.> showed
that every triangle-free, cubic, planar graph has an independent set of size at
least 3n/8, and this bound is tight. Again, forbidding faces of size greater
than 6 gives a much better bound.
If G is a 3-connected cubic plane graph on n vertices with all faces of size at
most 6, with p pentagonal and t triangular faces, then
α(G) ≥ n/2-√((p+3t)n/20).
In particular, α(G) ≥ n/2-√(3n/5), with equality
if and only if all faces have size 5 and 6, n=60k^2 for some
k ∈ ℕ, and Aut(G) ≅ I_h.
Every graph G contains an odd cycle vertex transversal U such that
|U| ≤ τ_odd(G), so
α(G) ≥ α(G∖ U) ≥ n/2-τ_odd(G)/2. Therefore, by
Theorem <ref>, α(G) ≥ n/2-√(3n/5), for every 3-connected
cubic graph G with all faces of size at most 6. When J^* is
a minimum T-join of G^*, every face of G^* is incident to at most one edge
of J^*. This means that the set J ⊂ E(G) corresponding to J^* is a
matching of G. Therefore, by Theorem <ref>, equality holds if and only
if all faces have size 5 and 6, n=60k^2 for some k ∈ ℕ, and Aut(G) ≅ I_h.
§ CONCLUDING REMARKS
Clearly, a necessary condition for τ_odd=O(√(n)) is that ν_odd=O(√(n)). In the case of planar graphs, the theorem of
Král' and Voss <cit.> mentioned
in the introduction guarantees that it is also a sufficient condition.
It can be shown that ν_odd=O(√(n)) is also a
necessary and sufficient condition for having a max-cut of size at least
3n/2-O(√(n)), and for having an independent set of size at least n/2-O(√(n)).
It is not hard to construct an infinite
family of 3-connected cubic plane graphs with all faces of size at most 7
such that τ_odd ≥ ε n, for a constant ε > 0.
This shows that the condition on the size of faces in Theorem <ref> and
Corollaries <ref> and <ref> cannot be relaxed.
To construct such a family, consider the graphs C and R in
Figure <ref>. Note that C is embedded in a disc, and R is embedded
in a cylinder. There are ten vertices on the boundary of C and also on each boundary
of R, with the degree alternating between 2 and 3. We can paste k copies of
R along their boundaries, and then paste a copy of C on each boundary of the
resulting cylinder. Assuming k>0, this gives a 3-connected cubic plane graph G on
n=15+40k vertices with all faces of size 5 and 7, such that
ν_odd(G) ≥ 4+5k > n/8.
Finally, we remark that bounding τ_odd is much simpler if the graph contains no pentagonal faces. In this case, the bound in Theorem <ref> can be improved to τ_odd(G) ≤ √(tn/3), where t is the number of triangular faces. In particular, τ_odd(G) ≤ √(4n/3), with equality if and only if all faces have size 3 and 6, n=12k^2 for some k ∈ ℕ, and Aut(G) ≅ T_d. Corollaries <ref> and <ref>
can be strengthened in the same way.
|
http://arxiv.org/abs/1701.07883v3 | 20170126214425 | Synchrotron intensity gradients as tracers of magnetic field | [
"A. Lazarian",
"Ka Ho Yuen",
"Hyeseung Lee",
"Jungyeon Cho"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.IM"
] |
1Department of Astronomy, University of Wisconsin-Madison
2Department of Physics, The Chinese University of Hong Kong
3Department of Physics, Chugnam University, Korea
On the basis of the modern understanding of MHD turbulence, we propose a new way of using synchrotron radiation, namely using synchrotron intensity gradients for tracing astrophysical magnetic fields. We successfully test the new technique using synthetic data obtained with 3D MHD simulations and demonstrate its practical utility by comparing the directions of the magnetic field obtained with PLANCK synchrotron intensity data to the directions obtained with PLANCK synchrotron polarization data. We demonstrate that the synchrotron intensity gradients (SIGs) can reliably trace magnetic field in the presence of noise and can provide detailed maps of magnetic-field directions. We also show that the SIGs are relatively robust for tracing magnetic fields when the low spatial frequencies of the synchrotron image are removed. This makes the SIGs applicable to tracing magnetic fields using interferometric data when single-dish measurements are absent. We discuss the synergy of using the SIGs together with synchrotron polarization in order to find the actual direction of the magnetic field and quantify the effects of Faraday rotation, as well as with other ways of studying astrophysical magnetic fields. We test our method in the presence of noise and of resolution effects. We stress the complementary nature of the studies using the SIG technique and those employing the recently-introduced velocity gradient techniques that trace magnetic fields using spectroscopic data.
§ INTRODUCTION
This paper provides a description of a new technique for studying magnetic fields using gradients of synchrotron intensity. Gradients of synchrotron polarization have been successfully used before (see ). However, in this Letter we explore theoretically and numerically a simpler measure, namely synchrotron intensity gradients (SIGs), and evaluate its utility for observational studies of magnetic fields and for accounting for the foreground contamination induced by the interstellar medium in CMB polarization studies.
Galactic and extragalactic synchrotron emission arises from relativistic electrons moving in astrophysical magnetic fields (see ). For CMB and high-redshift HI studies, the most important component is Galactic synchrotron emission. However, diffuse synchrotron emission is also a major emission component arising from the interstellar medium (ISM), the intracluster medium (ICM), as well as the lobes of radio galaxies (e.g. ). In fact, synchrotron emission provides the largest range of scales for studying magnetic fields.
Astrophysical magnetic fields are turbulent, as observations testify that turbulence is ubiquitous in astrophysics
<cit.>. As relativistic electrons are present in most cases, the turbulence results in synchrotron fluctuations, which may provide detailed information about magnetic fields at different scales, but, at the same time, interfere with the measurements of CMB and high redshift HI. The latter has recently become a topic of intensive discussion (see ).
The statistics of synchrotron intensity were studied recently in (, hereafter LP12), where it was shown how fluctuations of synchrotron intensity can be related to the fluctuations of magnetic field for an arbitrary index of the cosmic ray spectrum. There it was also shown that turbulence imprints its anisotropy on synchrotron intensity, and this provides a way of determining the direction of the mean magnetic field using synchrotron intensities only. The current paper explores whether, on the basis of our present-day understanding of the nature of MHD turbulence, synchrotron intensities can provide more detailed information about magnetic fields.
In what follows, we discuss in <ref> the theoretical motivation of this work, rooted in the modern theory of MHD turbulence. The properties of synchrotron intensity gradients (SIGs), their calculation, as well as the influence of noise and of the sonic Mach number are discussed in <ref>. We illustrate our method on PLANCK synchrotron data in <ref>. The comparison of the SIG technique with the technique based on the anisotropy of the correlation functions of intensity is presented in <ref>, and the synergy with other techniques of magnetic field studies is outlined in <ref>. We present our summary in <ref>.
§ THEORETICAL CONSIDERATIONS
§.§ MHD turbulence and magnetic field gradients
Dealing with synchrotron emitting media, we deal with non-relativistic thermal magnetized plasma and relativistic electrons. Turbulence in magnetized relativistic and non-relativistic fluids is different (see ). However, following the accepted approach, we consider turbulence in the magnetized fluid separately from the fluid of cosmic rays. In other words, we consider that relativistic electrons just illuminate the structure of the magnetic field that is created by non-relativistic MHD turbulence. This approach has its limitations (see ) but for our further discussion this is not critically important.
While the original studies of Alfvenic turbulence done by <cit.> and <cit.> were based on a hypothetical model of isotropic MHD turbulence, the later studies (see ) uncovered the anisotropic nature of the MHD cascade. The modern theory of MHD turbulence arises from the prophetic work by (, henceforth GS95). Further theoretical and numerical studies (, henceforth LV99, , see for a review) extended the theory and augmented it with new concepts. Our theoretical motivation for the present work is based on the modern understanding of the nature of MHD turbulence that we briefly summarize below.
The GS95 theory treats Alfvenic incompressible turbulence. The numerical simulations in <cit.> testify that for non-relativistic MHD turbulence the energy exchange between different types of fundamental modes is an effect that can frequently be neglected.[This is in contrast with relativistic MHD turbulence, where the coupling between fast and Alfvenic fundamental modes was shown to be significant <cit.>.] Therefore, in non-relativistic compressible astrophysical media one can consider three distinct cascades, namely, the cascades of Alfven, slow and fast modes.[We use the word "modes" rather than "waves", as the properties of the magnetic perturbations can be very different from those of waves. For instance, Alfven modes in GS95 turbulence are not oscillatory and after one period undergo cascading.] Therefore the GS95 treatment is applicable to describing Alfvenic modes also in compressible fluids.
Alfven modes initially evolve by increasing the perpendicular wavenumber in the subAlfvenic regime, i.e. for the injection velocity v_L being less than the Alfven velocity V_A, while the parallel wavenumber stays the same. This is the regime of weak turbulence with the spectrum E(k)∼ k^-2 (see LV99, ). This is not yet the regime of GS95 turbulence, but, nevertheless, the increase of the perpendicular wavenumber means the modes get more and more perpendicular to the magnetic field. In Alfvenic turbulence, the magnetic field and velocity are symmetric, and therefore the aforementioned situation means that the gradients of both magnetic field and velocity are getting aligned perpendicular to the direction of the magnetic field.
Weak Alfvenic turbulence can be viewed as the interaction of wave packets, with a fraction of the energy cascading as a result of each such interaction. As the perpendicular wavenumber increases, this fraction gets larger and eventually becomes ∼ 1 (see ). This is the maximal fraction of energy that can be transferred during the wavepacket interaction. However, the equations dictate the necessity of a further increase of the perpendicular wavenumber as the result of the interaction of oppositely moving wavepackets. This can only be accomplished through the simultaneous increase of the parallel wavenumber. This happens at the transition scale l_trans≈ L M_A^2, where L is the turbulence injection scale and M_A=v_L/V_A is the Alfven Mach number (see LV99, ). This marks the transition to the strong, or GS95, regime of turbulence. At this stage, the so-called critical balance condition should be satisfied, which states that the time of the interaction of the oppositely moving wavepackets l_∥/V_A, where l_∥ is the parallel to the magnetic field scale of the wavepacket, is equal to the perpendicular shearing time of the wavepacket l_⊥/v_l, where l_⊥ is the perpendicular scale of the wavepacket and v_l is the turbulent velocity associated with this scale. For subAlfvenic turbulence this is how the cascade proceeds in the strong regime, with the wavepackets getting more and more elongated according to (see LV99):
l_∥ ≈ L (l_⊥/L)^2/3 M_A^-4/3,
which testifies that for l_⊥ ≪ L the parallel scale of the wavepackets gets much larger than the perpendicular scale. This means that the wavepackets/eddies get more and more elongated as the perpendicular scale l_⊥ decreases. As a result, both the velocity and magnetic field gradients get more and more aligned perpendicular to the magnetic field direction as l_⊥ decreases. In fact, the increase of the disparity of the parallel and perpendicular scales continues until the energy reaches the dissipation scale.
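The scalings above can be made concrete with a few lines of Python: by Eq. (<ref>) the eddy aspect ratio l_∥/l_⊥ = (l_⊥/L)^-1/3 M_A^-4/3 grows as l_⊥ decreases, and so does the perpendicular gradient v_l/l_⊥ ∼ l_⊥^-2/3 discussed below, so the smallest resolved eddies both dominate the gradient signal and are the best aligned. The value M_A=0.5 is an illustrative choice.

import numpy as np

L, M_A = 1.0, 0.5
l_perp = np.logspace(-4, 0, 5)                         # in units of L
aspect = (l_perp / L) ** (-1 / 3) * M_A ** (-4 / 3)    # l_par / l_perp
grad = l_perp ** (-2 / 3)                              # ~ v_l / l_perp
for lp, a, g in zip(l_perp, aspect, grad):
    print(f"l_perp={lp:.0e}  l_par/l_perp={a:9.1f}  gradient~{g:9.1f}")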
The magnetic field direction is changing in the turbulent flow. Therefore an important question that arises is which direction of the magnetic field should be used in the arguments above. In other words, it is important to understand how the parallel and perpendicular directions are measured and defined.
Most of the earlier MHD turbulence work assumed the perturbative approach, and thus the mean field direction was used. This was also an implicit assumption in the GS95 study. However, in the studies that followed the groundbreaking GS95 paper (namely, LV99, ) it was shown that it is not correct to measure the directions with respect to the mean magnetic field. Instead, one should use the local magnetic field that surrounds the wavepacket [l_∥, l_⊥].
The importance of using the local system of reference is most evident if arguments related to fast turbulent reconnection are employed (LV99). Indeed, it was shown in LV99 that magnetic reconnection happens within a turnover time of an eddy, and therefore the motions of the fluid perpendicular to the magnetic field lines are similar to hydrodynamic eddies. As magnetic field lines reconnect fast, the mixing motion perpendicular to the local direction of the magnetic field does not create magnetic tension. As a result, the formation of such eddies provides the path of least resistance compared to any other motions involving magnetic field line bending. Naturally, the turbulent energy is channeled along this path of least resistance. The hydrodynamic-type cascade of energy associated with these eddies is ∼ v_l^2/t_cas=const, with the cascading time given by the eddy turnover time l_⊥/v_l. From what we said above, it is evident that l_⊥ has to be measured perpendicular to the magnetic field direction at the location of the eddy, rather than to the direction of the mean magnetic field. Incidentally, this hydrodynamic-type cascading provides the Kolmogorov scaling for the perpendicular turbulent motions, i.e. v_l ∼ l_⊥^1/3, which is the GS95 prediction. As the eddies rotate around the local magnetic field direction they flop, sending Alfven waves along the magnetic field. The corresponding Alfven wave period is equal to the eddy turnover time, i.e. l_∥/V_A ≈ l_⊥/v_l. The latter coincides with the "critical balance" condition in GS95. Combining this with the Kolmogorov scaling of the perpendicular motions, one can get l_∥ ∼ l_⊥^2/3 (see Eq. (<ref>)). We would like to stress that this derivation dictates that l_∥ is aligned with the magnetic field of the eddy and l_⊥ is the size of the eddy perpendicular to the local magnetic field. The corresponding numerical studies () show that the aforementioned relations between l_∥ and l_⊥ are correct only in the local system of reference given by the local magnetic field that the eddy interacts with.[The GS95 relation between l_∥ and l_⊥ is not valid in the system of reference related to the mean magnetic field. Observations in most cases sample turbulence along the line of sight over distances much larger than the scale of the sampled eddy. In this situation the measurements take place with respect to the mean magnetic field (see ) and this prevents the observational testing of the GS95 anisotropy.]
The anisotropy of turbulence, given by the aspect ratio of the eddies (see Eq. (<ref>)), increases as the scale decreases. Therefore, the smaller the eddy, the better it traces the local direction of the magnetic field. At the same time, one can estimate the velocity gradients, which scale as v_l/l_⊥ ∼ l_⊥^-2/3. This relation means that the largest gradients correspond to the smallest eddies. Thus 3D gradients in Alfvenic turbulence are dominated by the gradients of the smallest eddies, and the measured gradients should be perpendicular to the local direction of the magnetic field as traced by the smallest resolved eddies. Magnetic field and velocity enter in a symmetric way in Alfvenic turbulence, and therefore the gradients of the turbulent magnetic field have the same property as the gradients of velocity. Since the gradient is a linear operation, for a quantity that is an integral of magnetic fluctuations along the line of sight, as is the case for synchrotron fluctuations, the gradient and integration operations can be interchanged, and the observed 2D measure can be presented as an integral of magnetic field gradients. The same argument applies to observed quantities that are line-of-sight-integrated components of fluctuating velocities, as in our papers on tracing magnetic fields with velocity gradients <cit.>. In the present paper, which suggests a way of tracing magnetic fields with synchrotron emission, the gradients are dominated by the smallest eddies, which are the eddies best aligned with the magnetic field.
Naturally, Eq. (<ref>) provides only the most probable relation between the parallel and perpendicular scales. The distribution function relating l_∥ and l_⊥ was obtained in <cit.> on the basis of numerical simulations. Its analysis shows that the uncertainty in the gradient direction is of the order of l_⊥/l_∥ ∼ (l_⊥/L)^1/3, indicating that the smaller the resolved eddies, the better the gradients trace magnetic fields. In other words, the most probable directions of the Alfven wavevectors are limited by the GS95 cone given by Eq. (<ref>), while the probability of finding wavevectors beyond this cone is exponentially suppressed.[<cit.> argued that the actual distribution is best represented by the Castaing function <cit.>, which is smooth near zero but looks like an exponential over a broad range.]
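The scale dependence of the eddy anisotropy and of the gradient amplitude can be made concrete with a few lines of code. The sketch below is our own illustration (the injection scale L and the scale grid are arbitrary choices); it evaluates the scalings discussed above, showing that the smallest eddies are both the most elongated and the ones that dominate the gradient signal.

```python
import numpy as np

# Our illustration (not from the cited papers): GS95 critical balance gives
# l_par ~ L^(1/3) l_perp^(2/3), so the eddy aspect ratio grows toward small
# scales, while the velocity gradient v_l / l_perp ~ l_perp^(-2/3) grows too.
L = 1.0                                  # injection scale (arbitrary units)
l_perp = np.logspace(-4, 0, 50) * L      # perpendicular eddy scales

l_par = L**(1.0 / 3.0) * l_perp**(2.0 / 3.0)
aspect = l_par / l_perp                  # elongation ~ (L / l_perp)^(1/3)
grad = l_perp**(1.0 / 3.0) / l_perp      # |grad v| ~ v_l / l_perp

print(aspect[0], aspect[-1])             # smallest eddies: most elongated
print(grad[0] / grad[-1])                # ... and they dominate the gradients
```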
We note that all the arguments above are relevant to subAlfvenic turbulence. They suggest that the gradients of the velocities and magnetic fields are aligned with respect to the local magnetic field, and therefore sample the local direction of the magnetic flux at the larger of two scales: the turbulence dissipation scale and the telescope resolution scale.
This point is very important for the technique that we propose. In this paper we claim that by measuring magnetic field gradients one can trace the turbulent magnetic field in the volume under study.
The discussion above was focused on Alfvenic turbulence. In an incompressible conducting fluid in 3D, apart from the Alfven modes, pseudo-Alfven modes exist (see GS95). The latter are the limiting case of slow modes in the incompressible limit. Pseudo-Alfven modes and, in general, slow modes are slaved to the Alfvenic modes, which shear them both in magnetically dominated (low-β) and gas-pressure dominated (high-β) plasmas (GS95, ). Thus we expect the slow modes to show gradient properties similar to those of the Alfven waves.
The third fundamental MHD turbulent mode is the fast mode. Fast modes are different from both Alfven and slow modes: they create an acoustic-type cascade <cit.> that is only marginally sensitive to the magnetic-field direction. In terms of our attempts to use gradients to trace magnetic fields, fast modes distort the alignment. It is therefore important that numerical simulations indicate that the fast modes are subdominant even for supersonic driving <cit.>. Therefore, having a natural admixture of Alfven and slow modes, augmented by fast modes and shocks, we expect to see alignment of the magnetic gradients perpendicular to the local magnetic-field direction. This is the theoretical conclusion that motivates our study below.
We may add that for weakly compressible flows the density associated with slow waves mimics the GS95 scalings <cit.>. However, for supersonic flows the production of shocks significantly disturbs the density statistics. As a result, for subsonic flows density gradients are also expected to be aligned perpendicular to the magnetic field, which explains the empirical results in <cit.> as well as our numerical experiments with density gradients in (, hereafter GL17) and (, hereafter YL17). In terms of magnetic field tracing, density gradients are expected to be inferior to velocity and magnetic field gradients, as clearly shown in <cit.>, where the GALFA HI intensity gradients are compared to both velocity centroid gradients and the magnetic fields revealed by Planck data. At the same time, density gradients can also be very useful. For instance, the misalignment of density gradients and magnetic field directions can be informative about shocked gas and supersonic flows.
A note about the observational availability of magnetic field/velocity gradients is due. Observations probe the properties of diffuse media along the line of sight, rather than at a point in 3D space. Therefore one can wonder about the contributions of different scales to the observationally available gradients. The magnetic gradients can be estimated as the ratio |b(r_1)-b(r_2)|/|r_1-r_2|, where b(r) is the magnetic field at the point r. The structure function d(r)=⟨ (b(r_1)-b(r_2))^2⟩ can give us some indirect insight into the process of summation along the line of sight. As we discuss in the next section (see Eq. (<ref>)), the observed intensities are proportional to ∫_0^D dz H_⊥^2, where the integration is performed along the line of sight through the diffuse volume of thickness D. Structure functions of synchrotron intensities for the general case of anisotropic turbulence were studied in detail in <cit.>. This function, i.e. S(l)=⟨ (I(e_1)-I(e_2))^2 ⟩, where l is the distance between the lines of sight, is roughly proportional to
∫ [d((l^2+z^2)^1/2)-d(z)] dz,
where the integration is performed along the line of sight. For Kolmogorov-type turbulence it is possible to show that the major contribution to the integral comes from scales close to l.
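This dominance can be checked numerically. Below is a rough sketch of ours (the scale-free Kolmogorov form d(r) ∝ r^(2/3) and the integration limits are assumptions) that evaluates the integral above and prints the fraction of S(l) accumulated at line-of-sight separations z of at most a few l.

```python
import numpy as np

# Structure function of the 3D field, Kolmogorov-type: d(r) ~ r^(2/3).
def integrand(z, l):
    d = lambda r: r**(2.0 / 3.0)
    return d(np.sqrt(l**2 + z**2)) - d(z)

l = 1.0
z = np.linspace(1e-3, 100.0 * l, 200000)   # truncated line of sight
f = integrand(z, l)
near = np.trapz(f[z <= 5 * l], z[z <= 5 * l])
print(near / np.trapz(f, z))   # the bulk of S(l) builds up at z of order l
```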
Magnetic and velocity perturbations are symmetric within Alfvenic turbulence. However, differences become obvious in compressible flows. For instance, shocks distort the velocity structure, creating velocity gradients parallel to the local direction of the magnetic field. This effect is not present for the magnetic field, whose gradients are more robust to compressible turbulence. Similarly, the velocity gradients in flows affected by gravitational collapse tend to be parallel to the ambient magnetic field <cit.>. This is not expected for magnetic field gradients, making them a more robust measure of the line-of-sight-integrated magnetic field.
§.§ Synchrotron gradient properties
Synchrotron emission arises from relativistic electrons spiraling about magnetic fields (see , and references therein). A quantitative study of the synchrotron emission (see ) revealed that the emission is non-linear in the magnetic field H, with the nonlinearity arising from relativistic effects. For a power-law distribution of electrons N(E) dE ∼ E^α dE, the synchrotron emissivity is
I_synch(x, y) ∝ ∫ dz H_⊥^γ (x, y, z),
where H_⊥ = √(H_x^2+H_y^2) corresponds to the magnetic field component perpendicular to the line of sight, the latter given by the z-axis. The fractional power γ = (α+1)/2 was an impediment for quantitative synchrotron statistical studies. However, the problem of the fractional-power dependence on the magnetic field was dealt with in <cit.>, where it was shown that the correlation functions and spectra of H_⊥^γ can be expressed as a product of a known function of γ and the statistics of H_⊥^2, i.e. the synchrotron intensity obtained for α=3. Although we do not use the correlation function approach explicitly, our approach is based on the statistical properties of turbulence, and we expect that, similar to the case considered in <cit.>, the gradients calculated with α=3 will correctly represent the results for other γ. Thus the fractional power of the magnetic field in Eq. (<ref>) will not be an issue within the present study aimed at determining magnetic field gradients.
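For reference, a synthetic synchrotron intensity map of this kind is straightforward to generate. The sketch below is our illustration (the random field with a mean component along y and the array shapes are assumptions); it integrates H_⊥^γ along the last axis of a 3D magnetic field cube, with γ = 2 as justified above.

```python
import numpy as np

def synchrotron_intensity(b, gamma=2.0):
    # b has shape (3, N, N, N); the last axis is the line of sight (z).
    h_perp2 = b[0]**2 + b[1]**2                  # plane-of-sky field squared
    return np.sum(h_perp2**(gamma / 2.0), axis=-1)

rng = np.random.default_rng(0)
b = rng.standard_normal((3, 64, 64, 64))         # turbulent fluctuations
b += np.array([0.0, 2.0, 0.0]).reshape(3, 1, 1, 1)  # mean field along y
I = synchrotron_intensity(b)                     # (64, 64) intensity map
```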
In Eq. (<ref>) we disregard the dependence on the density of relativistic cosmic-ray electrons. This is justified because we are interested in the gradients at small scales, at which the distribution of relativistic electrons is smooth in most parts of the diffuse media. In other words, on the basis of what we know about cosmic ray propagation (see e.g. Lazarian & Yan 2014 and references therein), we expect the gradients arising from inhomogeneities of the relativistic electron distribution to be subdominant to the gradients arising from turbulent magnetic fields.
Alfven modes do not change the strength of the magnetic field. Therefore, if the line of sight is directed along the z-axis while the Alfven mode is polarized in the x-y plane, the magnetic field fluctuations are perpendicular to the line of sight and the observed synchrotron intensity does not change with the amplitude of the Alfven mode. However, if all three components of the Alfven mode are present, the fluctuations in the z-direction result in a decrease of the observed synchrotron intensities (see LP12). Therefore, while the magnetic field gradients are still perpendicular to the local magnetic field, the synchrotron intensity fluctuations are parallel to the local magnetic field direction. The statistical similarity of the Alfven and slow modes (see ) suggests that the slow modes behave in the same way. At the same time, fast modes are expected to play a disruptive role for magnetic field tracing. For instance, fast modes in magnetically dominated plasma correspond to compressions of the magnetic field. These compressions happen perpendicular to the magnetic field, and therefore the fluctuations of synchrotron intensity induced by fast modes are expected to be perpendicular to the magnetic field direction. For pressure-dominated media, i.e. high-β media, where β is the ratio of the gas to magnetic pressure, the fast modes are essentially sound waves and only marginally compress the magnetic field (see GS95, Cho & Lazarian 2003). This mitigates the effect of fast modes on the gradient technique that we introduce in this paper.
§ NUMERICAL DATA
We test our theory using numerical simulations obtained with three different codes: the 3D compressible MHD code described in , the incompressible code described in <cit.>, as well as the data sets used in YL17 and its subsequent studies, obtained with another 3D compressible MHD code family, ZEUS-MP, which supports self-gravity. The use of numerical results obtained with different codes is advantageous, as it allows us to better test the gradient technique in various physical settings. The 3D compressible MHD code from <cit.> is a third-order-accurate hybrid essentially non-oscillatory (ENO) scheme that solves the ideal isothermal MHD equations in a periodic box. For this code we choose M_s=0.5, 3.0, 10.0 and M_A=0.1. The code from <cit.>, on the other hand, solves the periodic incompressible MHD equations using a pseudo-spectral method; the resulting data correspond to the extreme case of plasma β = 2M_A^2/M_s^2 = ∞. In this case, we used an incompressible cube with M_A=0.80. The respective parameters are listed in Table <ref>. We then follow <cit.> to produce the maps of both synchrotron polarization and synchrotron intensity.
§ PROPERTIES OF SYNCHROTRON INTENSITY GRADIENTS (SIGS)
§.§ Calculation of SIGs
To calculate the Synchrotron Intensity Gradients (SIGs), we use the gradient calculation procedure that we introduced in YL17. The procedure consists of three steps. First, we pre-process our synchrotron intensity maps with an appropriate noise-removal Gaussian filter. We then interpolate the map to ten times its original resolution and determine the gradient field by computing the direction of maximum gradient in the interpolated synchrotron intensity maps. Finally, by finding the peak of the gradient orientation distribution within sub-blocks of the gradient map, we obtain sub-block-averaged gradient vectors, as in YL17. This allows us to compare our magnetic field predictions to those revealed by the generally accepted technique of tracing polarization. As discussed in YL17, the sub-block averaging approach provides a way of estimating how well the vector is predicted within a block: the peak of the gradient angle distribution gives the predicted direction, while the shape of the distribution tells how good the prediction is. The deviation from a Gaussian function provides an error estimate for judging whether our method is accurate within a block. A condensed sketch of this recipe is given below.
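The following is a minimal Python sketch of the three-step procedure. It is our simplification: it omits the tenfold interpolation and the Gaussian fit to the angle histogram, and the filter width, block size and bin count are illustrative.

```python
import numpy as np
from scipy import ndimage

def sig_block_vectors(I, sigma=2.0, block=32, nbins=36):
    I = ndimage.gaussian_filter(I, sigma)      # step 1: noise suppression
    gy, gx = np.gradient(I)                    # step 2: per-pixel gradients
    ang = np.arctan2(gy, gx) % np.pi           # gradient orientation, [0, pi)
    ny, nx = I.shape
    out = np.zeros((ny // block, nx // block))
    for j in range(ny // block):               # step 3: sub-block averaging,
        for i in range(nx // block):           # taking the histogram peak
            sub = ang[j*block:(j+1)*block, i*block:(i+1)*block]
            hist, edges = np.histogram(sub, bins=nbins, range=(0.0, np.pi))
            k = np.argmax(hist)
            out[j, i] = 0.5 * (edges[k] + edges[k + 1])
    return out                                 # block-averaged SIG angles
```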
§.§ SIGs from Alfven, Slow and Fast modes
To illustrate the applicability of our theoretical considerations to the synchrotron intensity gradients, we perform a mode decomposition similar to that in <cit.>. The corresponding equations determining the basis for the decomposition into modes are:
ζ̂_f ∝ (1+β+√(D)) (k_⊥k̂_⊥) + (-1+β+√(D)) k_∥k̂_∥,
ζ̂_s ∝ (1+β-√(D)) (k_⊥k̂_⊥) + (-1+β-√(D)) k_∥k̂_∥,
ζ̂_A ∝ -k̂_⊥×k̂_∥,
where D=(1+β/2)^2-2βcos^2θ, β=P̅_g/P̅_B=2M_A^2/M_s^2, and cosθ = k̂·B̂.
We use only the LOS component of the decomposed magnetic field for the calculations. That is to say, the three magnetic field modes can be acquired as
b_(f,s,a),z= [ℱ^-1(ℱ(b)·ζ̂_f,s,a)](ζ̂_f,s,a·ζ̂_LOS)
where ℱ is the Fourier transform operator.
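A schematic single-wavevector version of this basis is given below. This is our sketch: the mean field is taken along z, an oblique k is assumed so that the Alfven direction is well defined, and the overall signs of the unit vectors are arbitrary.

```python
import numpy as np

def mode_basis(k, beta):
    # Mean magnetic field B along z; k must not be parallel to B.
    k = np.asarray(k, dtype=float)
    b_hat = np.array([0.0, 0.0, 1.0])
    k_par = np.dot(k, b_hat) * b_hat             # component along B
    k_perp = k - k_par                           # component perpendicular to B
    cos2th = (np.dot(k, b_hat) / np.linalg.norm(k))**2
    D = (1.0 + beta / 2.0)**2 - 2.0 * beta * cos2th
    zeta_f = (1 + beta + np.sqrt(D)) * k_perp + (-1 + beta + np.sqrt(D)) * k_par
    zeta_s = (1 + beta - np.sqrt(D)) * k_perp + (-1 + beta - np.sqrt(D)) * k_par
    zeta_a = np.cross(k_perp, k_par)             # Alfven direction, up to sign
    unit = lambda v: v / np.linalg.norm(v)
    return unit(zeta_f), unit(zeta_s), unit(zeta_a)

zf, zs, za = mode_basis([1.0, 0.0, 0.5], beta=0.2)
```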
Figure <ref> shows synthetic observations produced with the separated MHD modes from one of the cubes in the ZEUS family of simulations. It shows, as we expected, that the SIGs are aligned parallel to the magnetic field for the Alfven and slow modes. At the same time, also as expected, for fast modes in magnetically dominated media the SIGs are perpendicular to the plane-of-sky projected component of the magnetic field. In our simulations the ratio of the Alfven, slow and fast modes is 1:0.7:0.3. Therefore, the effect of the fast modes is indeed subdominant, and we expect that for the actual simulations without any decomposition the SIGs will trace the magnetic field. This is what we test below.
§.§ SIGs: effect of block size
Figure <ref> demonstrates that our approach can deliver the SIGs in a robust way, with the magnetic-field directions obtained with the SIGs providing an adequate representation of the projected magnetic field. To demonstrate the latter point, in Figure <ref> we also show the magnetic field directions as traced by the synchrotron polarization in the synthetic observations.
To quantify how well the SIGs trace the synchrotron polarization that represents the projected magnetic field, we introduce the alignment measure:
AM = 2⟨cos^2θ⟩ - 1,
where θ is the angle between the SIG direction and the magnetic field direction derived from synchrotron polarization measurements, so that AM ∈ [-1,1]. AM=1 indicates a perfect alignment between the SIGs and the magnetic field; AM=0 means that there is no relation between the SIGs and the magnetic field; AM=-1 means that the SIGs tend to be perpendicular to the magnetic field.
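As a direct transcription of this definition (with the per-block angle differences θ supplied in radians):

```python
import numpy as np

def alignment_measure(theta):
    # AM = 2 <cos^2(theta)> - 1, bounded to [-1, 1]
    return 2.0 * np.mean(np.cos(theta)**2) - 1.0

print(alignment_measure(np.zeros(10)))           # -> 1, perfect alignment
print(alignment_measure(np.full(10, np.pi / 2))) # -> -1, perpendicular
```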
The alignment between the SIGs and the magnetic field increases with the block size, but the increase becomes relatively small beyond a particular block size. This effect is demonstrated in the left panel of Figure <ref>, which shows that starting with a block size of ∼64 pixels the increase of the AM with block size becomes very slow. Observationally, this provides the optimal block size for the analysis, maximizing the informational output of the SIG technique. Note that the rapid decrease of the alignment measure with decreasing block size is strongly affected by the numerical resolution. Our analysis of the turbulence spectral slope in the right panel of Figure <ref> indicates that starting at scales k ∼ 40, i.e. about r = 512/40 ≈ 12.8 pixels, the structures that we see are dominated by numerical effects. For the part of the turbulent cascade dominated by numerical effects we do not expect to observe the GS95 or weak-turbulence scalings of the turbulent magnetic field; it is therefore not surprising that the SIG technique fails there. In fact, this agrees well with the fact that for block sizes less than ∼16 we do not see good alignment, as shown in the left panel of Figure <ref>. At the same time, this suggests that for actual low-noise astronomical observations the optimal block size may be smaller than the 64 pixels that we find in our numerical simulations. Indeed, unlike numerical simulations, astrophysical turbulence exhibits an extensive inertial range, with the dissipation scale too small to be resolved by observations (see Chepurnov & Lazarian 2009).
§.§ SIGs: effect of the sonic and Alfven Mach numbers
For most environments in spiral galaxies, the regions dominating the synchrotron emission correspond to hot gas with low sonic Mach number M_s. However, it is interesting to explore to what extent compressibility effects can affect the SIG technique. We therefore test how the SIGs trace the magnetic field in systems with different M_s. The upper panels of Figure <ref> show the relative alignment of polarization and SIGs. We observe that the alignment decreases with increasing M_s, which is also supported by the lower panels of Figure <ref>, where the distribution of the SIGs about the magnetic field direction is shown. We note that there exist different ways of measuring the turbulence sonic Mach number, and studies like those illustrated in Figure <ref> allow one to evaluate the accuracy of magnetic field tracing with the SIGs.
We also test the effect of the Alfvenic Mach number on the alignment of the SIGs with the magnetic field in Figure <ref>. Even at high Alfvenic Mach number (M_A=3.2) our method still traces the magnetic field well, with AM∼0.71, while the sub-Alfvenic case shows extremely good alignment. Notice that the alignment of the SIGs in the incompressible cases is significantly higher than in the compressible cases. Possible reasons for this are, e.g., the creation of misaligned fast modes (see Figure <ref>) or shocks formed due to the compression of the fluid.
§.§ SIGs: effects of Gaussian noise
Real observational data are affected by noise. To gauge this effect, we test to what extent the alignment persists in the presence of noise, calculating the SIGs after adding white noise to our synthetic maps.
The noise is included in the data in the following way. We generate white noise whose amplitude is Gaussian with zero mean; the noise level is defined as the standard deviation of the noise distribution. The resulting noise is added to the original map. The noise level is selected in multiples of 0.1 of the mean synchrotron intensity, up to a maximum equal to the mean synchrotron intensity.
We treat the synthetic data as if they were real observational data. For this purpose, we analyze our noisy data using pre-processing Gaussian filters <cit.>, a procedure frequently used for noise reduction in observations. The smoothing effect of the Gaussian filter enables us to compute the per-pixel gradient information more accurately. The strength of the filter is controlled by its width σ, which characterizes how many pixels are averaged to obtain the information for one pixel in the filtered map. A larger σ suppresses the noise and produces a smoother map at the expense of losing magnetic field structure at small scales. To see the effect of the filter on the alignment, we perform tests with several values of σ on maps with different noise levels, and measure the AM of the resulting maps.
The alignment measures for various noise levels and several Gaussian filters are shown in Figure <ref>. Without the Gaussian pre-filtering, the alignment is strongly reduced, in agreement with our expectations. However, by applying Gaussian filters we can significantly improve the alignment. While for small σ the alignment decreases rapidly with increasing noise, a filter of larger width preserves the alignment even in a strong-noise environment. This experiment demonstrates that the SIGs present a robust tool that can trace magnetic fields using observational data in the presence of noise.
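The experiment can be scripted as follows. This is a sketch that reuses the synthetic map I and sig_block_vectors() from the sketches above; the noise levels and filter widths are illustrative, and pol_angles stands in for a hypothetical per-block polarization reference map.

```python
import numpy as np

rng = np.random.default_rng(1)
for noise_frac in np.arange(0.1, 1.01, 0.1):   # 0.1 ... 1.0 of the mean I
    noise = rng.standard_normal(I.shape) * noise_frac * I.mean()
    for sigma in (1, 2, 4, 8):                 # pre-filter widths, in pixels
        sig = sig_block_vectors(I + noise, sigma=sigma)
        # am = alignment_measure(sig - pol_angles)  # compare block by block
```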
§.§ SIGs: effect of missing spatial frequencies
To increase the resolution of the available data, interferometers can be used. In fact, detailed maps of galactic synchrotron radiation, as well as of the synchrotron emission of nearby galaxies, can be obtained with interferometers. Interferometers measure the spatial Fourier components of the image; by changing the baseline of the interferometer one samples different spatial frequencies. For interferometric observations, single-dish measurements deliver the low spatial frequencies, but single-dish data are not always available. It is therefore important to understand how this can affect the accuracy of our SIG technique.
We note that synchrotron polarization gradients were used in <cit.>, and one of the motivations for their use was the possibility of applying gradients to interferometric data obtained without single-dish observations. Below we test how the accuracy of the SIGs in tracing magnetic fields depends on the missing spatial frequencies.
In Figure <ref> we show the alignment measure given by Eq. (<ref>) using the same data as in Figure <ref>, but gradually removing spatial frequencies, starting with the lowest spatial frequencies of the inertial range of our data. We observe a gradual decrease of the AM. We show that by increasing the block size we can mitigate the effect of the absence of the lower spatial frequencies in our synthetic data.
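The missing single-dish information can be mimicked by zeroing the lowest spatial frequencies of the map in Fourier space, as in the sketch below. This is our illustration; the cutoff k_min, in units of the box wavenumber, is an assumed value.

```python
import numpy as np

def remove_low_k(image, k_min=5):
    ny, nx = image.shape
    ky = np.fft.fftfreq(ny) * ny
    kx = np.fft.fftfreq(nx) * nx
    k = np.sqrt(ky[:, None]**2 + kx[None, :]**2)
    ft = np.fft.fft2(image)
    ft[k < k_min] = 0.0              # the scales an interferometer misses
    return np.fft.ifft2(ft).real

I_high = remove_low_k(I, k_min=5)    # then recompute SIGs vs. block size
```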
§.§ SIGs: effect of the amplitudes of the gradients
In the present paper, similar to our earlier papers dealing with gradients (GL17, YL17, LY17), we have used the gradient directions to trace the magnetic field and did not account for the gradient amplitudes.
The gradient amplitude reflects the spatial rate of change. For example, in shock-dominated regions we expect to see a sharp change of the amplitude across the shock boundary. In self-gravitating regions, on the other hand, the gradient amplitude should increase significantly due to the rapid infall motions of the gas. Such events are not related to MHD turbulence, and therefore one may expect the magnetic gradients arising from them not to be aligned perpendicular to the local magnetic field. At the same time, our current numerical study is limited to diffuse media, for which such sharp changes of the gradient amplitude are not common. Figure <ref> shows how the AM changes with the gradient amplitude. With the parameters we use, we do not see a clear tendency. The corresponding study will be done elsewhere.
§ COMPARISON WITH THE MAGNETIC FIELD TRACING IN LP12
The SIGs are not the only way to trace the magnetic field with synchrotron intensity maps. For instance, anisotropic MHD turbulence also results in synchrotron anisotropies, which were quantified in LP12. There, the quadrupole moment of the synchrotron intensity correlation function was shown to be aligned with the magnetic field. Therefore, by measuring the longer direction of the iso-correlation contours of the synchrotron intensity (see LP12), one can approximate the magnetic-field direction over the sky.
The calculation of correlation functions requires averaging, which in astrophysical situations means volume averaging. Therefore one may expect that, compared to the SIGs, the LP12-type quadrupole and higher-multipole anisotropies are significantly more coarse-grained measures.
To test this statement, in Figure <ref> we provide the AM for the SIGs and the similarly-defined alignment measure for the correlation function anisotropies (CFAs), which capture the dominant quadrupole anisotropy induced by the magnetic field. The directions of the longer axes of the CFA anisotropy are rotated by 90 degrees to be compared with the directions of the SIGs and of the magnetic field traced by synchrotron polarization.
We compare the sub-block-averaged SIGs with the CFAs obtained in the same blocks. Figure <ref> clearly shows that the SIGs have a great advantage over the CFAs in tracing the detailed structure of the magnetic field. In fact, in terms of the alignment measure, the CFAs can trace the magnetic field only for a sufficiently coarse block size, whereas the SIGs can work on smaller scales without losing much of the alignment. The limitation of the CFAs compared to the SIGs is expected, as the SIGs are defined for an individual eddy, while the CFAs are defined only after correlation/structure functions are calculated, which requires averaging over many eddies.
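For comparison, a CFA estimate within a single block can be sketched as follows. This is our illustration: the FFT-based autocorrelation, the weighting radius rmax, and the use of second moments to define the quadrupole-dominated long axis are our choices. The resulting position angle would be rotated by 90 degrees before comparison with the SIGs, as described above.

```python
import numpy as np

def cfa_orientation(block, rmax=8):
    f = block - block.mean()
    corr = np.fft.ifft2(np.abs(np.fft.fft2(f))**2).real  # autocorrelation
    corr = np.fft.fftshift(corr)                         # peak at the center
    ny, nx = corr.shape
    y, x = np.mgrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
    w = corr * ((x**2 + y**2) <= rmax**2)                # weight near origin
    ixx, iyy, ixy = (w * x * x).sum(), (w * y * y).sum(), (w * x * y).sum()
    return 0.5 * np.arctan2(2.0 * ixy, ixx - iyy)        # long-axis angle
```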
We believe, however, that the SIGs and the CFAs are complementary measures in a number of ways. The correspondence between the coarse-grained magnetic field directions measured by the two techniques makes the tracing of the magnetic field more trustworthy. Their correspondence also indicates that the performed averaging may be sufficient to use studies of the CFA anisotropies for separating the contributions from the fundamental MHD modes, i.e. Alfven, fast and slow, as described in LP12.
§ ILLUSTRATION OF THE SIGS TECHNIQUE USING PLANCK SYNCHROTRON DATA
The encouraging results above stimulated us to apply the SIG technique to the Planck synchrotron data. For our test we picked the Planck foreground synchrotron intensity map <cit.> and compared the magnetic-field directions that we obtained with the SIGs with the magnetic-field directions determined from the Planck synchrotron polarization.
We use the full-sky map to illustrate how the SIGs can trace the magnetic field: the synchrotron intensity is used to compute the gradients, and the synchrotron polarization to infer the magnetic field direction. We projected the data onto a Cartesian frame and followed the procedures described in the earlier sections, tested with the synthetic data, to produce the sub-block-averaged gradient map. As shown in section <ref>, σ=4 already preserves the alignment in a strong-noise environment, so we reduced the noise using a σ=4 Gaussian pre-filter.
Figure <ref> shows the full-sky SIGs overplotted on the Planck synchrotron data. The panels in the figure show the patches of the sky with AM>0.5. It can be readily seen that the high latitudes have the best alignment, while near the galactic plane the alignment is reduced. We associate this reduction with the existence of poorly resolved synchrotron structures that are not directly associated with turbulence; to such structures our technique is not applicable. However, we expect that at higher resolution, when these structures are well resolved, the underlying small-scale turbulence should again induce the alignment of the SIGs and the projected magnetic field.[Another potential complication related to the use of the SIGs for studying the magnetic field in the galactic disk plane is that distant regular structures, e.g. supernova shells, can interfere with the calculation of the SIGs, which makes use of the resolved turbulence associated with nearby objects. We do not discuss this effect in this paper.] At the same time, the fact that the SIGs are not influenced by the complex pattern of Faraday rotation within the galactic disk should motivate further studies of the SIG ability to reveal magnetic fields in the galactic disk.
Our test calculations show that the SIGs are applicable to synchrotron intensity observations and can reveal the direction of the galactic magnetic field, at least at high galactic latitudes. Note that the turbulence at such latitudes corresponds to low M_A. Naturally, more tests of the SIGs in the presence of complex magnetic-field morphology are needed. Moving from our demonstration here to studies of magnetic fields in the galactic disk therefore requires a more detailed study, which will be performed elsewhere.
§ SYNERGY WITH OTHER TECHNIQUES OF MAGNETIC-FIELD STUDY
The present paper introduces a new way to trace magnetic fields. It is always valuable to have yet another way of studying astrophysical magnetic fields; however, the advantages of the SIGs are not limited to this.
Synchrotron polarization is a generally accepted way of studying magnetic fields in our galaxy, external galaxies and galaxy clusters. One of the difficulties of using synchrotron polarization is that the polarized radiation is subject to Faraday rotation. To account for this effect, multifrequency observations are performed and the Faraday rotation is compensated, which is a significant complication. Going to very high frequencies potentially makes the Faraday effect negligible; however, at high frequencies the energy losses of relativistic electrons are not negligible, and their spatial distribution may differ from that of the low-energy relativistic electrons. This complicates the analysis and may be a source of error. Moreover, recent analytical studies in <cit.> have demonstrated that separating the effects of Faraday rotation in the presence of turbulent magnetic fields is far from trivial (see also ). In this situation, the possibility of obtaining the magnetic field direction using the SIGs is very advantageous.
Combining the SIGs and polarization measurements can be very synergetic. By measuring the actual direction of the magnetic field using the SIGs and comparing it with the direction of polarization, one can obtain a measure of the Faraday rotation of the medium between the observer and the synchrotron-emitting region. In the presence of Faraday depolarization, the combination of the SIGs and polarized radiation presents additional advantages. For instance, the SIGs of the unpolarized synchrotron emission can trace magnetic fields in distant regions subject to Faraday depolarization, while the polarization and the SIGs measured for the polarized intensities can trace the magnetic field in regions close to the observer. As the Faraday depolarization is a function of frequency, by changing the frequency one can perform 3D tomographic studies of the magnetic field structure. The corresponding procedures will be elaborated elsewhere; in particular, we are preparing a study showing how polarized intensity gradients trace magnetic fields.
The SIG technique is similar to the Velocity Centroid Gradient (VCG) technique that was introduced in GL17 and the Velocity Channel Gradient (VChG) technique introduced in LY17. Within the VCG technique, the gradients are calculated from 2D maps of velocity centroids, while for the VChG technique the gradients are calculated from the intensities within channel maps. Both measures are aimed at extracting velocity information. Indeed, velocity centroids (see ) are known to be good measures of velocity, in particular for subsonic turbulence. Intensities in thin velocity channels are mostly dominated by velocity caustics and are therefore also most sensitive to velocities <cit.>.[Naturally, this is the case when the turbulent broadening of the spectral lines is larger than the thermal broadening. Otherwise, the intensity variations arising from the velocity crowding induced by turbulence are exponentially suppressed <cit.> and the channel maps reflect the total intensities.] Supersonic velocity turbulence can be studied using velocity channel maps for both HI and heavier species, while for heavier species both subsonic and supersonic velocity turbulence can be studied. Both VCGs and VChGs are readily available from Doppler-shifted spectroscopic data.
Compared to the VCGs and VChGs, the calculation of the SIGs is simpler, as it requires only synchrotron intensities rather than full spectroscopic data. In this sense, the SIG technique is similar to tracing magnetic fields using intensity gradients (IGs), which are discussed as ways of tracing magnetic fields and ISM processes in GL17, YL17ab and LY17. The alignment of intensity gradients and density structures was first discussed by <cit.> on the basis of empirical numerical studies, while the relation of this alignment to the properties of MHD turbulence was revealed in the aforementioned papers. We note, however, that from both theoretical considerations and numerical simulations of turbulence we expect the properties of MHD turbulence to be better represented by fluctuations of magnetic fields and velocities than by fluctuations of density (see Brandenburg & Lazarian 2013 for a review). In particular, we expect the IGs to be more affected by shocks and to be less reliable tracers of the magnetic field for supersonic turbulence. At the same time, the disadvantages of the IGs in tracing the magnetic field become advantages in tracing other ISM processes, e.g. shocks. Thus it is advantageous to seek synergy between the different techniques, including the IGs. An apparent advantage of the IGs is that they can be used for data sets where no spectral information is available, only intensities; therefore the IGs can be used, e.g., with dust emission intensities. The VChGs smoothly transition to the IG technique when the Doppler shifts of the lines become subsonic or when, for the sake of reducing noise, the channels are made thicker.
It is clear that, in general, the SIGs, VCGs, VChGs and IGs are complementary techniques that trace magnetic fields in different interstellar environments. For instance, cold and warm diffuse HI, as well as line emission from molecular clouds, e.g. CO emission, present natural environments for studies with the VCG and VChG techniques. Combining these with the IGs, one can study shocks and self-gravitating regions (YL17, ), and by measuring the relative alignment of the directions defined by the VCGs and VChGs with those defined by the IGs one can characterize the sonic Mach number of the medium (LY17). At the same time, the synchrotron radiation of the Milky Way mostly originates in the large expanses of the galactic halo, and it is to data from these regions that the SIG technique is intended to be applied. Obtaining magnetic field properties in different parts of the interstellar medium is important not only for understanding the importance of the magnetic field in each phase, but also for understanding whether the same magnetic field connects the different interstellar phases.
In fact, while the VCGs and VChGs are useful for studying magnetic fields in the Cold and Warm phases, the SIGs can probe magnetic fields in the Warm and Hot ISM phases (see Draine 2011 for a list of the ISM phases). An advantage of the VCGs, VChGs and IGs is that, by combining different molecular species that are present at different densities, it is possible to study the magnetic fields and the gravitational collapse within molecular clouds. In addition, using the galactic rotation curve, it is possible to approximately map the 3D distribution of the magnetic field. 3D mapping of the magnetic field is also possible by combining the SIGs with gradients of polarized intensities measured at different frequencies; the latter technique, however, requires further research.
The alignment of interstellar dust grains is a well-accepted way of tracing magnetic fields. Both theoretical considerations and observational testing (see and references therein) indicate that the alignment of dust is very efficient in the diffuse media, where radiative torques <cit.> are strong. The alignment can trace magnetic fields in self-gravitating regions, but it may fail in starless molecular cloud cores. The polarization arising from grain alignment is complementary to the VCGs, as discussed in YL17, and to the VChGs, as discussed in LY17. Dust polarimetry provides the direction of the magnetic field which, through comparison with the velocity gradients, reveals the regions of gravitational collapse. Therefore, by combining polarimetry and velocity gradients it is possible to identify the regions of molecular clouds that are subject to gravitational infall, i.e. to reveal the initial stages of star formation. At the same time, compared to dust submillimeter polarimetry, velocity gradients can provide better information on where the measured magnetic fields are spatially located along the line of sight. The latter is especially valuable for studying the magnetic field in the plane of the galaxy. For instance, while the polarized dust emission in the disk plane represents the cumulative result of emission from many clouds along the line of sight, studies with the VChGs allow mapping the magnetic fields of individual clouds. In addition, the dust disk is significantly thinner than the synchrotron halo. Therefore, comparing the results of FIR polarimetry with the magnetic fields revealed by synchrotron, e.g. by the SIGs, can help in understanding the distribution of the magnetic field with height. Note that in YL17 the VCGs were shown to reveal well the structure of the magnetic field in the HI disk. Combining the SIGs, VCGs, VChGs and dust polarimetry, one can study how magnetic fields connect the Hot, Warm and Cold ISM phases with molecular clouds.
In addition, we should mention the empirical technique of tracing magnetic fields using filaments observed in HI velocity channel maps <cit.>. The relation of the observed filaments to MHD turbulence theory requires further study, but we can provide some preliminary considerations here. As we discussed earlier, the structures in channel maps are mostly induced by velocities, and we therefore believe that the filament technique is related to velocity gradients, in particular to the VChGs. The filaments are expected to be created perpendicular to the velocity gradients, i.e. they are expected to be aligned along the magnetic field, which corresponds to observations. A comparison of magnetic field tracing using the VCGs, VChGs and filaments will be provided elsewhere.
As is clear from the discussion above, by combining the different velocity and synchrotron gradients one can investigate the relative distribution of magnetic fields in the different ISM phases along the line of sight. Such studies are essential for understanding the complex dynamics of the magnetized multiphase ISM. For some regions, e.g. supernova shocks, it seems possible and very advantageous to apply all these techniques at once.
One should also note that the tracing of magnetic field directions using velocity and magnetic field gradients differs in regions dominated by self-gravity. It was noted in <cit.> that the velocity gradients in such regions become parallel to the magnetic field as the matter falls into the gravitational wells. On the contrary, we expect the magnetic field gradients to stay perpendicular to the magnetic field even in those situations.
In addition, it is clear from the discussion in <ref> that the most reliable magnetic-field tracing is expected in nearly incompressible turbulence in the absence of self-gravity. These are the conditions of the Warm and Hot phases of the ISM (see for a list of the idealized ISM phases), which are exactly the media responsible for the bulk of the synchrotron radiation <cit.>.
In fact, earlier studies (e.g. ) indicated that the sonic Mach number of the synchrotron-emitting Warm medium is around unity, and it is expected to be much less than unity for the hot coronal gas of the galactic halo. Therefore we expect that the SIGs can trace magnetic fields well and be little affected by the distortions arising from compressibility effects. Compared to the VCGs, the SIGs can also be more robust, as the VCGs are influenced by the density distribution of the emitting gas (see ) and density is not a robust tracer of the MHD turbulence statistics. The VChGs, when thin slices are used, are only marginally influenced by the turbulent densities for subsonic turbulence,[For subsonic turbulence one should use heavier species, e.g. metals or complex molecules, that are carried by the main hydrogen-dominated subsonic flow. E.g., CO molecules can be used as such a tracer.] but are still affected by density for supersonic turbulence (LP00). At the same time, the synchrotron intensity fluctuations are produced by uniformly distributed electrons and are thus expected to better reflect the magnetic-field statistics.
We may add that the ways of studying the VCGs, VChGs and SIGs are similar. For instance, within our present study we successfully used the way of calculating gradients first suggested in YL17. In addition, our present study also shows that the SIGs, similar to the VCGs and VChGs, can be obtained using interferometric data with missing low spatial frequencies, e.g. interferometric data obtained without the corresponding single-dish observations. This opens prospects for using these techniques to study extragalactic magnetic fields.
Faraday rotation is an important way of studying the magnetic field component parallel to the line of sight (see ). The observationally attainable rotation measure (RM) is proportional to the integral of the product of the line-of-sight component of the magnetic field and the thermal electron density, provided the original magnetic-field direction at the source is known. The SIGs can be used to define this direction, which has advantages over the currently-used Faraday-rotation measurements that employ multifrequency polarization observations. Moreover, the SIGs can help to distinguish the Faraday rotation that arises at the source of polarized radiation from that of the medium intervening between the source and the observer. Indeed, at the source the SIGs measure the actual magnetic-field direction.
A promising possibility is presented by the tracing of magnetic fields using aligned atoms or ions (see and references therein). This alignment occurs for atoms/ions with fine or hyperfine structure and is induced by radiation. Larmor precession realigns the atoms/ions, and thus the resulting polarization becomes dependent on the magnetic field direction. This type of alignment can potentially trace extremely weak fields in diffuse rarefied media, and we expect it to be complementary to the SIG technique. For instance, the polarization of HI arising from atomic alignment can reveal magnetic fields. The domain of atomic alignment is the regions of low matter density but high radiation intensity, where the anisotropic radiation pumps and aligns the atoms/ions while collisions randomize the spin directions.
While our discussion above has focused on the astrophysical prospects of the SIGs, the possibility of magnetic field tracing has important consequences for CMB studies. Indeed, separating the polarized foregrounds from the CMB polarization is absolutely essential for detecting and studying the enigmatic cosmological B-modes. Obtaining the actual direction of the magnetic field using the SIG technique is very advantageous in this context, as is combining the SIG, VCG and VChG measurements to remove the polarized foreground contributions from the different interstellar medium components.
§ SUMMARY
Using the theory of MHD turbulence, we predicted that in magnetized flows the synchrotron intensity gradients (SIGs) should reveal the magnetic field. We successfully tested this prediction using synthetic synchrotron maps obtained with 3D compressible and incompressible MHD simulations, as well as Planck synchrotron intensity and polarization data. The new technique is complementary to the other ways of tracing magnetic fields, which include the traditional techniques using synchrotron and dust polarization as well as the new techniques employing velocity centroid gradients (VCGs) and velocity channel gradients (VChGs). The SIGs give the true direction of the magnetic field in the synchrotron-emitting volume, undistorted by the Faraday rotation effect. Therefore, by combining the SIGs with synchrotron polarimetry measurements one can determine the Faraday rotation measure, which is useful for studying the line-of-sight component of the magnetic field. We have demonstrated that the SIGs are a robust measure in the presence of Gaussian noise and can be obtained from interferometric data taken without accompanying single-dish telescope observations.
Acknowledgements. AL acknowledges the support of the NSF grant AST 1212096, NASA grant NNX14AJ53G, as well as a distinguished visitor PVE/CAPES appointment at the Physics Graduate Program of the Federal University of Rio Grande do Norte, the INCT INEspaço, and the Physics Graduate Program/UFRN. The stay of KHY at UW-Madison is supported by the Fulbright-Lee Fellowship. HL is supported by a research fellowship at the Department of Physics, Chungnam University, Korea.
[Andersson et al.(2015)] Andersson, B.-G., Lazarian, A., & Vaillancourt, J. E. 2015, ARA&A, 53, 501
[Armstrong et al.(1995)] Armstrong, J. W., Rickett, B. J., & Spangler, S. R. 1995, ApJ, 443, 209
[Beck(2015)] Beck, R. 2015, Magnetic Fields in Diffuse Media, 407, 507
[Beresnyak et al.(2005)] Beresnyak, A., Lazarian, A., & Cho, J. 2005, ApJ, 624, L93
[Brandenburg & Lazarian(2013)] Brandenburg, A., & Lazarian, A. 2013, Space Sci. Rev., 178, 163
[Burkhart et al.(2012)] Burkhart, B., Lazarian, A., & Gaensler, B. M. 2012, ApJ, 749, 145
[Castaing et al.(1990)] Castaing, B., Gagne, Y., & Hopfinger, E. J. 1990, Physica D, 46, 177
[Chepurnov & Lazarian(2010)] Chepurnov, A., & Lazarian, A. 2010, ApJ, 710, 853
[Cho & Lazarian(2002)] Cho, J., & Lazarian, A. 2002, Phys. Rev. Lett., 88, 245001
[Cho & Lazarian(2003)] Cho, J., & Lazarian, A. 2003, MNRAS, 345, 325
[Cho et al.(2002)] Cho, J., Lazarian, A., & Vishniac, E. 2002, ApJ, 564, 291
[Cho & Vishniac(2000)] Cho, J., & Vishniac, E. T. 2000, ApJ, 539, 273
[Clark et al.(2015)] Clark, S. E., Hill, J. C., Peek, J. E. G., Putman, M. E., & Babler, B. L. 2015, Phys. Rev. Lett., 115, 241302
[Clarke & Ensslin(2006)] Clarke, T. E., & Ensslin, T. A. 2006, AJ, 131, 2900
[Dolginov & Mitrofanov(1976)] Dolginov, A. Z., & Mitrofanov, I. G. 1976, Ap&SS, 43, 291
[Draine(2011)] Draine, B. T. 2011, Physics of the Interstellar and Intergalactic Medium (Princeton: Princeton University Press)
[Draine & Weingartner(1996)] Draine, B. T., & Weingartner, J. C. 1996, ApJ, 470, 551
[Esquivel & Lazarian(2005)] Esquivel, A., & Lazarian, A. 2005, ApJ, 631, 320
[Fernandez et al.(2014)] Fernandez, E. R., Zaroubi, S., Iliev, I. T., Mellema, G., & Jelić, V. 2014, MNRAS, 440, 298
[Gaensler et al.(2011)] Gaensler, B. M., Haverkorn, M., Burkhart, B., et al. 2011, Nature, 478, 214
[Galtier et al.(2005)] Galtier, S., Pouquet, A., & Mangeney, A. 2005, Phys. Plasmas, 12, 092310
[Ginzburg(1981)] Ginzburg, V. L. 1981, Moscow Izdatel Nauka
[Goldreich & Sridhar(1995)] Goldreich, P., & Sridhar, S. 1995, ApJ, 438, 763
[González-Casanova & Lazarian(2017)] González-Casanova, D. F., & Lazarian, A. 2017, ApJ, 835, 41
[Haverkorn et al.(2006)] Haverkorn, M., Gaensler, B. M., McClure-Griffiths, N. M., Dickey, J. M., & Green, A. J. 2006, ApJS, 167, 230
[Herron et al.(2016)] Herron, C. A., Burkhart, B., Lazarian, A., Gaensler, B. M., & McClure-Griffiths, N. M. 2016, ApJ, 822, 13
[Higdon(1984)] Higdon, J. C. 1984, ApJ, 285, 109
[Hill et al.(2008)] Hill, A. S., Benjamin, R. A., Kowal, G., et al. 2008, ApJ, 686, 363
[Iroshnikov(1964)] Iroshnikov, P. S. 1964, Soviet Ast., 7, 566
[Kandel et al.(2017)] Kandel, D., Lazarian, A., & Pogosyan, D. 2017, MNRAS, 464, 3617
[Kowal & Lazarian(2010)] Kowal, G., & Lazarian, A. 2010, ApJ, 720, 742
[Kraichnan(1965)] Kraichnan, R. H. 1965, Phys. Fluids, 8, 1385
[Laing et al.(2008)] Laing, R. A., Bridle, A. H., Parma, P., & Murgia, M. 2008, MNRAS, 391, 521
[Lazarian(2006)] Lazarian, A. 2006, ApJ, doi:10.1086/505796
[Lazarian(2007)] Lazarian, A. 2007, JQSRT, 106, 225
[Lazarian(2009)] Lazarian, A. 2009, Space Sci. Rev., 143, 357
[Lazarian(2016)] Lazarian, A. 2016, ApJ, 833, 131
[Lazarian & Beresnyak(2006)] Lazarian, A., & Beresnyak, A. 2006, MNRAS, 373, 1195
[Lazarian & Esquivel(2003)] Lazarian, A., & Esquivel, A. 2003, ApJ, 592, L37
[Lazarian & Pogosyan(2000)] Lazarian, A., & Pogosyan, D. 2000, ApJ, 537, 720
[Lazarian & Pogosyan(2004)] Lazarian, A., & Pogosyan, D. 2004, ApJ, 616, 943
[Lazarian & Pogosyan(2012)] Lazarian, A., & Pogosyan, D. 2012, ApJ, 747, 5
[Lazarian & Pogosyan(2016)] Lazarian, A., & Pogosyan, D. 2016, ApJ, 818, 178
[Lazarian & Vishniac(1999)] Lazarian, A., & Vishniac, E. T. 1999, ApJ, 517, 700
[Lazarian & Yuen(2017)] Lazarian, A., & Yuen, K. H. 2017, arXiv:1703.03119
[Lee et al.(2016)] Lee, H., Lazarian, A., & Cho, J. 2016, ApJ, 831, 77
[Lithwick & Goldreich(2001)] Lithwick, Y., & Goldreich, P. 2001, ApJ, 562, 279
[Liu et al.(2009)] Liu, X., Zakamska, N. L., Greene, J. E., et al. 2009, ApJ, 702, 1098
[Loeb & Wyithe(2008)] Loeb, A., & Wyithe, J. S. B. 2008, Phys. Rev. Lett., 100, 161301
[Maron & Goldreich(2001)] Maron, J., & Goldreich, P. 2001, ApJ, 554, 1175
[Matthaeus et al.(1983)] Matthaeus, W. H., Montgomery, D. C., & Goldstein, M. L. 1983, Phys. Rev. Lett., 51, 1484
[Montgomery & Turner(1981)] Montgomery, D., & Turner, L. 1981, Phys. Fluids, 24, 825
[Nixon & Aguado(2008)] Nixon, M. S., & Aguado, A. S. 2008, Feature Extraction and Image Processing (Academic Press)
[Pacholczyk(1970)] Pacholczyk, A. G. 1970, Radio Astrophysics: Nonthermal Processes in Galactic and Extragalactic Sources
[Planck Collaboration et al.(2016)] Planck Collaboration, Adam, R., Ade, P. A. R., et al. 2016, A&A, 594, A10
[Schnitzeler et al.(2007)] Schnitzeler, D. H. F. M., Katgert, P., & de Bruyn, A. G. 2007, A&A, 471, L21
[Shebalin et al.(1983)] Shebalin, J. V., Matthaeus, W. H., & Montgomery, D. 1983, J. Plasma Phys., 29, 525
[Soler et al.(2013)] Soler, J. D., Hennebelle, P., Martin, P. G., et al. 2013, ApJ, 774, 128
[Takamoto & Lazarian(2016)] Takamoto, M., & Lazarian, A. 2016, ApJ, 831, L11
[Westfold(1959)] Westfold, K. C. 1959, ApJ, 130, 241
[Yan & Lazarian(2011)] Yan, H., & Lazarian, A. 2011, ApJ, 731, 35
[Yan & Lazarian(2012)] Yan, H., & Lazarian, A. 2012, Numerical Modeling of Space Plasma Flows (ASTRONUM 2011), 459, 40
[Yuen & Lazarian(2017a)] Yuen, K. H., & Lazarian, A. 2017a, ApJ, 837, L24
[Yuen & Lazarian(2017b)] Yuen, K. H., & Lazarian, A. 2017b, arXiv:1703.03026
[Zhang et al.(2016)] Zhang, J.-F., Lazarian, A., Lee, H., & Cho, J. 2016, ApJ, 825, 154
|
http://arxiv.org/abs/1701.07830v1 | 20170126190003 | Molecular Gas Kinematics and Star Formation Properties of the Strongly-Lensed Quasar Host Galaxy RXS J1131-1231 | [
"T. K. Daisy Leung",
"Dominik A. Riechers",
"Riccardo Pavesi"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.CO"
] |
Department of Astronomy, Space Sciences Building, Cornell University,
Ithaca, NY 14853, USA;
We report observations of CO(J = 2→1) and CO(J = 3→2) line emission towards
the quadruply-lensed quasar
RXS J1131-1231 at z = 0.654, obtained using the
Plateau de Bure Interferometer (PdBI) and the Combined Array for Research in Millimeter-wave Astronomy (CARMA).
Our lens modeling shows that the asymmetry in the double-horned CO line profile
is mainly a result of differential lensing, where
the magnification factor varies from ∼3 to ∼9 across
different kinematic components.
The intrinsically symmetric line profile and a smooth source-plane velocity
gradient suggest that
the host galaxy is an extended rotating disk, with a CO size of R_CO ∼ 6 kpc and
a dynamical mass of M_dyn ∼ 8×10^10 M_⊙.
We also find a secondary CO-emitting source near RXS J1131-1231 whose location is
consistent with the optically-faint companion reported in previous studies.
The lensing-corrected molecular gas masses are M_gas = (1.4±0.3)×10^10 M_⊙ and (2.0±0.1)×10^9 M_⊙ for RXS J1131-1231 and the companion, respectively.
We find a lensing-corrected stellar mass of
M_* = (3±1)×10^10 M_⊙ and a star formation rate of SFR_FIR = (120±63) M_⊙ yr^-1,
corresponding to a specific SFR and star formation efficiency comparable to those of
z ∼ 1 disk galaxies not hosting quasars.
The implied gas mass fraction of ∼(18±4)% is consistent with the previously-observed cosmic decline since z ∼ 2. We thus find no evidence for quenching of star formation in RXS J1131-1231. This agrees with our finding of an elevated M_BH-to-M_* ratio compared to the
local value, suggesting that the bulk of its black hole
mass is largely in place while its stellar bulge is still assembling.
§ INTRODUCTION
Many recent studies of galaxy evolution have been focused on investigating the interplay between star formation and active galactic nucleus (AGN) activity across cosmic epochs <cit.>.
It is currently not well-understood when and how the supermassive black holes (SMBHs) and
stellar populations of present-day massive galaxies were assembled,
but it is clear that the co-moving star formation rate density and the black hole accretion rate density both
increased substantially since z > 3 and reached their climax at z ∼ 2, followed by
a rapid decline toward z ∼ 0 <cit.>.
A leading explanation for this decline is the decrease in molecular gas content and
star formation efficiency <cit.>,
but direct molecular gas measurements at intermediate redshift
(0.2 < z < 1) that could confirm this explanation remain largely limited to
spatially unresolved CO observations of
a modest sample of ∼30 ultra-luminous infrared galaxies <cit.>.
Meanwhile, empirical scaling relations such as the
M_BH–M_bulge relation <cit.>
have been established locally, suggesting co-eval growth between local SMBHs and their host galaxies.
Attempts to extend this relation out to higher redshifts, beyond the peak epoch
of star formation and AGN activity, have been made in recent years.
These studies find that
high-z AGN host galaxies do not appear to follow the same M_BH–M_bulge relation as nearby spheroidal galaxies <cit.>.
Yet, the M_ BH-relation remains poorly-constrained
at intermediate redshifts due to the difficulty
in separating the stellar component contributing to the optical emission from that of the bright AGN.
This stems from the limited resolving power, which restricts the dynamic range that can be achieved at positions near the AGN.
Strong gravitational lensing provides the magnification necessary to spatially separate the AGN emission from the host galaxy stellar emission, significantly improving the dynamic range that can be achieved in studies of their host galaxies with current instruments.
The quasar RXS J113151.62-123158 (hereafter RXJ1131)
at z_s,QSO ≈ 0.658 <cit.> is a particularly well-suited source for
studying the evolution of molecular gas properties in quasar host galaxies and the
connection between SMBHs and their host galaxies at intermediate redshift,
given its unique lensing configuration.
The stellar emission in the host galaxy of RXJ1131 is lensed into
an Einstein ring of 1″.83 in radius
that is clearly separated from the quadruply imaged quasar emission <cit.>.
The foreground lens is an elliptical galaxy at z_L ≈ 0.295.
<cit.> report a high spin parameter of a ∼ 0.9 for the moderate-mass black hole in RXJ1131 <cit.>,
with an intrinsic bolometric luminosity of L_bol,X ≈ 1.3×10^45 erg s^-1 <cit.>.
In this paper, we present CO(2-1) and CO(3-2) line observations and
broadband photometry spanning rest-frame UV to radio wavelengths towards RXJ1131 to
study the properties of its molecular gas, dust, and stellar populations.
In obs, we outline details of the observations and of our data reduction procedures.
In results, we report results for the CO(2-1) and CO(3-2) emission and the broadband
photometry of RXJ1131.
In anal, we present lens modeling and dynamical modeling of the data and
spectral energy distribution (SED) modeling of the photometric data.
In diss, we discuss
the ISM properties of the host galaxy of RXJ1131 and compare
them to other galaxy populations at low and high redshift.
Finally, we summarize the main results and present our conclusions in sum.
We use a concordance ΛCDM cosmology throughout this paper, with
parameters from the WMAP9 results:
H_0 = 69.32 Mpc, Ω_ M = 0.29, and
Ω_Λ = 0.71 <cit.>.
§ OBSERVATIONS
§.§ CARMA
Observations of the CO(J=3→2) rotational line
(ν_rest = 345.79599 GHz) towards RXJ1131 at z_s,QSO ≈ 0.658
were carried out with the Combined Array for Research in Millimeter-wave Astronomy (CARMA;
Program ID: cf0098; PI: D. Riechers) in the D array configuration.
The line frequency is redshifted to ν_obs = 209.10443 GHz at the quasar redshift.
Observations were carried out
on 2014 February 02 under poor 1.5 mm
weather conditions and on 2014 February 17 under good 1.5 mm
weather conditions.
This resulted in a
total on-source time of 2.94 hours after flagging poor
visibility data.
The correlator setup provides a bandwidth of 3.75 GHz in
each sideband and a
spectral resolution of 12.5 MHz (∼17.9 km s^-1). The
CO(3-2) line was placed in the lower sideband with the local oscillator tuned to ν_LO ∼ 216 GHz. The radio quasars J1127-189 (first track) and 3C273
(second track) were observed
every 15 minutes for pointing, amplitude, and phase calibration. Mars was
observed as the primary absolute flux calibrator and 3C279 was observed as
the bandpass calibrator for both tracks.
Given that the phase calibrator used for the first track was faint and was
observed under poor weather conditions and that the phase calibrator used for
the second track was far from our target source, the phase calibration is
subpar, with an rms phase scatter of ∼50° over a baseline length of ∼135 m.
We thus conservatively estimate
a calibration accuracy of ∼40% based on the flux scale uncertainties,
the gain variations over time, and the phase scatter on the calibrated data. We
therefore treat the derived line intensity with caution and ensure that our physical interpretation
of this system and the conclusion of this paper do not rely on this quantity.
The miriad package was used to calibrate the visibility data.
The calibrated visibility data were
imaged and deconvolved using the CLEAN algorithm with “natural” weighting. This yields a synthesized clean
beam size of 3″.2 × 1″.9 at a position angle (PA) of 8° for the lower sideband
image cube. The final rms noise is σ = 13.3 mJy beam^-1
over a channel width of 25 MHz. An rms noise of
σ = 0.83 mJy beam^-1 is reached by averaging over the
line-free channels in both sidebands.
§.§ PdBI
Observations of the CO(J=2→1) rotational line
(ν_rest = 230.53800 GHz)
towards RXJ1131
were carried out using the IRAM Plateau de Bure Interferometer (PdBI; Program ID: S14BX; PI: D.
Riechers).
Based on the CARMA CO(3-2) line redshift of z = 0.655,
the CO(2-1) line is redshifted to ν_obs = 139.256 GHz.
Two observing runs were carried out on 2014 December 06 and 2015
February 05 under good weather conditions in the C and D array configurations,
respectively.
This resulted in 3.75 hours of cumulative six antenna-equivalent on-source
time after discarding unusable visibility data.
The 2 mm receivers were used to cover the redshifted CO(2-1) line
and the underlying continuum emission, employing a correlator setup that provides
an effective bandwidth of 3.6 GHz (dual polarization) and a native spectral resolution of 1.95 MHz
(∼4.2 km s^-1).
The nearby quasars B1127-145 and B1124-186 were observed every 22 minutes
for pointing, secondary amplitude, and phase calibration, and B1055+018 was
observed as the bandpass calibrator for both tracks.
MWC349 and 3C279 were observed as primary flux calibrators for the C and D
array observations, respectively, yielding calibration accuracy better than 15%.
The gildas package was used to calibrate and analyze the visibility data.
The calibrated visibility data were imaged and deconvolved using the CLEAN algorithm with “natural”
weighting. This yields a synthesized clean beam size of 4″.44 × 1″.95 (PA = 13°).
The final rms noise is σ = 1.45 mJy beam^-1 over 10 MHz (21.5 km s^-1). The continuum image at ν_cont ∼ 139 GHz
is produced by averaging over 3.16 GHz of line-free bandwidth. This
yields an rms noise of 0.082 mJy beam^-1.
§.§ Karl G. Jansky Very Large Array (Archival)
Our analysis also uses archival data of the 4.885 GHz
radio continuum obtained with the
Karl G. Jansky Very Large Array (VLA; Program ID: AW741; PI: Wucknitz).
Observations were carried out on 2008 December 29 under excellent weather
conditions in the A array configuration for a total of ∼7 hours on-source time. The C-band receivers were used with a continuum mode setup,
providing a bandwidth of 50 MHz for the two IF bands with full polarization.
The nearby radio quasar J1130-149 was observed every 10 minutes for
pointing, amplitude, and phase calibration. J1331+305 was observed as the
primary flux calibrator, and J0319+415 was observed as the bandpass
calibrator, yielding ∼10% calibration accuracy.
We used aips to calibrate the visibility data.
The calibrated visibility data were imaged and deconvolved using
the CLEAN algorithm with robust = 0, which
was chosen to obtain a higher resolution image given high SNR.
This yields a synthesized clean
beam size of 0″.49 × 0″.35 (PA = 0.18°) and a final
rms noise of σ = 13 μJy beam^-1.
§.§ HST (Archival)
We obtained an HST image taken with
the ACS
using the F555W filter (V-band)
from the
Hubble Legacy Archive.
The details of the observations can be found
in .
We apply an astrometric correction to the optical image by adopting the VLA 5 GHz map as the
reference coordinate frame.
We shift the latter to the east by 0″.5963 in R.A. and +0″.8372 in
Dec., which is consistent with the typical astrometric precision (1″-2″) of
images from the Hubble Legacy Archive[http://hla.stsci.edu/hla_faq.html]. This astrometric correction is critical to avoid artificial spatial
offsets between different emitting regions and to carry out our lens modeling,
in which the absolute position of the foreground lensing galaxy is based on
its coordinates in the high-resolution optical image.
The VLA image is calibrated using a well-monitored phase
calibrator, with absolute positional accuracy of ∼2 mas.
For this reason, the absolute alignment between the VLA image and the other
interferometric images reported in this paper is expected to have an astrometric
precision better than 0″.1, modulo uncertainties related to the SNR and phase
instability.
§ RESULTS
§.§ CO(2-1) Emission
We detect CO(2-1) line emission towards the background source RXJ1131 in the PdBI data
at ≳27σ significance. Based on this measurement, we refine the redshift of RXJ1131 to
z_CO = 0.6541 ± 0.0002[This redshift is derived by fitting a double Gaussian to
the de-lensed spectrum (delensed)
instead of the observed spectrum (CO21spec) to avoid biases in our
redshift determination
due to differential lensing (see differential).
]. The emission is spatially and kinematically resolved,
with a highly asymmetric double-horned line profile
as shown in CO21spec. Fitting a double Gaussian results in peak
flux densities of 75.3±2.6 mJy and 24.0±2.0 mJy, and FWHMs of
179±9 km s^-1 and 255±28 km s^-1 for the two components, respectively. The peaks are separated by
Δv_sep = 400±12 km s^-1. The total integrated line flux is 20.3 ± 0.6 Jy km s^-1.
Table: Observed properties of RXJ1131 and its companion.

  z_CO(2-1)                                0.6541±0.0002
  I_CO(2-1) (Jy km s^-1)                   20.3±0.6
  S_CO(2-1)^peak (Jy km s^-1 beam^-1)^a    8.12±0.30
  FWHM_CO(2-1) (km s^-1)^b                 179±9, 255±28
  FWHM_CO(2-1) (km s^-1)^c                 220±72
  I_CO(3-2) (Jy km s^-1)                   35.7±6.9

^a Peak flux density in the CO(2-1) intensity map.
^b From fitting a double Gaussian to the observed spectrum (CO21spec).
^c From fitting a double Gaussian with a common FWHM to the de-lensed spectrum (delensed).
We construct a zeroth order moment map, red/blue channel maps, and
first and second moment maps, as shown in CO21mom,
using the uv-continuum subtracted data cube over a velocity range of
Δv ∼ 750 km s^-1.
The higher-order moment maps are produced using
unbinned channel maps with 3σ clipping.
The peak flux density is 8.12±0.30 Jy km s^-1 beam^-1 in the intensity map.
Observed properties of the CO(2-1) emission line are summarized in obsprop.
The deconvolved source size (FWHM) obtained by fitting a single two-dimensional Gaussian
to the integrated line emission in the image plane is 5″.1±0″.7 × 3″.7±0″.7,
consistent with that obtained by visibility-plane fitting within the uncertainties.
Since the spatial distribution of the observed CO emission is unlikely to be fully described by a simple Gaussian
and appears to be a superposition of at least two components (top left panel of CO21mom),
we also fit two Gaussians to the intensity map.
This yields deconvolved source sizes of
3″.8±0″.4 × 1″.9±0″.4 and 3″.6±0″.3 × 1″.5±0″.3, separated by ∼2″.2 in R.A. and ∼1″.7 in Dec.
The deconvolved source sizes of both models suggest that the gravitationally lensed CO emission is more extended
than the optical
“Einstein ring”, which has a diameter of ∼3″.6
(i.e., the “Einstein ring” formed by the CO emission is likely to have a larger diameter than the optical one).
This is consistent with the centroid position of the redshifted emission, which lies along the quasar arc seen in the optical image,
and of the blueshifted emission, which is offset further to the SE (top right of CO21mom).
Therefore, the CO-emitting region in RXJ1131 is likely to be more extended than its stellar and quasar emission.
We also place an upper limit on HNC(2-1) line emission
in the foreground galaxy at z ∼ 0.295.
Assuming a typical line width of 300 km s^-1, this corresponds to a 3σ
limit of 0.35 Jy km s^-1 beam^-1.
§.§ CO(3-2) Emission
We detect CO(3-2) line emission towards RXJ1131 in the CARMA data at ≳5σ significance.
The spectrum appears to be consistent with a double-peaked profile, as shown in co32spec, where
we over-plot the spectra of the CO(3-2) and CO(2-1) lines.
We extract the peak fluxes and their corresponding uncertainties for the blue and red wings independently.
We find a peak line flux of 5.13±1.43 Jy km s^-1 beam^-1 for the blue wing, indicating a ≳3σ detection for this component alone, and a peak line flux of 11.45±1.99 Jy km s^-1 beam^-1 for the red wing,
indicating a ∼6σ detection.
We measure a line intensity of 35.7 ± 6.9 Jy km s^-1 (obsprop) by summing up fluxes over the FWZI linewidth.
Assuming that the spatial extents of the CO(3-2) and CO(2-1) emission are similar, and that the emission is therefore
magnified by the same amount, the measured line intensities
correspond to a brightness temperature ratio of
r_32 = T_b,CO(3-2)/T_b,CO(2-1) = 0.78 ± 0.37.
The quoted error bar is derived by adding the uncertainties associated with the CO line intensities
and those from absolute flux calibrations in quadrature.
This brightness temperature ratio is consistent with
thermalized excitation within the uncertainties, as commonly observed in nuclear regions of
nearby ULIRGs and high-z quasars <cit.>, but also
with the lower excitation seen in normal star-forming disks <cit.>.
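As a sanity check, this ratio can be reproduced directly from the integrated line fluxes, since both lines share the same source redshift and luminosity distance; a minimal sketch (line fluxes from the text):

# Brightness temperature ratio r_32 = L'_CO(3-2)/L'_CO(2-1). Since
# L' ∝ I_line * nu_obs^-2 * D_L^2 * (1+z)^-3 and both lines share the same
# z and D_L, the ratio reduces to the flux ratio times (nu_21/nu_32)^2 = (2/3)^2.
I_co21 = 20.3            # Jy km/s (this work)
I_co32 = 35.7            # Jy km/s (this work)
print((I_co32 / I_co21) * (2.0 / 3.0)**2)   # -> 0.78, as quoted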
§.§ Continuum Emission
No 1.4 mm continuum emission is detected at the position of RXJ1131 down to a 3σ limit of 2.49 mJy beam^-1.
This is consistent with the CO(3-2) spectrum shown in co32spec.
We detect 2.2 mm continuum emission at an
integrated flux density of
1.2±0.2 mJy, with a peak flux of
S_ν = 799±88 μJy beam^-1 centered on the lensing galaxy (cont).
Slightly extended emission
along the lensing arc is also detected.
This suggests that we detect emission in both
the foreground and the background galaxy and that the
emission is marginally resolved along its major axis.
We subtract a point source model in the visibility-plane to remove the unresolved part of the
emission, which we here assume to be dominated by the foreground galaxy. The emission
in the residual map coincides spatially with the lensing arc. We measure a flux density
of S_ν = 0.39 ± 0.08 mJy for this residual component.
This flux density is consistent with
the difference between the integrated and the peak flux density measured in the
original continuum map (∼0.4 mJy).
We therefore adopt S_ν = 0.39±0.12 mJy as the best estimate for the 2 mm continuum flux of
the background galaxy (RXJ1131).
We here quote a conservative error bar, which is derived by adding the uncertainty
associated with the flux density of the
point-source model (δS_ν = 0.088 mJy) and
that of the peak flux in the residual map (0.08 mJy)
in quadrature. We caution that this does not account for the systematic uncertainties of the
de-blending procedure, where we have assigned 100% of the point source flux to the foreground galaxy.
We report the peak flux in the original map
(S_ν = 799±88 μJy beam^-1) for the foreground galaxy, which
is the best estimate possible at the resolution of our observations, but we acknowledge that a non-negligible contribution from the background source to the peak flux cannot be ruled out.
The VLA C-band continuum image in cont shows resolved emission from the
jets and the core of the foreground elliptical galaxy
as well as emission toward the background quasar.
Multiple peaks are seen along the arc with their centroids
coincident with the optical emission from the quasar.
We extract the flux densities for the lensing arc and the radio core in photometry.
We find a spectral index of α^2mm_6cm = -0.02±0.07
for the foreground galaxy and α^2mm_6cm = -0.35±0.21
for the background galaxy by fitting
power laws (S_ν ∝ ν^α) to their continuum fluxes at
5 GHz and 2 mm.
The spectral slope derived for the background source is flatter than the typical slope of pure synchrotron emission <cit.>. This likely suggests
that at least a fraction of the observed 2 mm emission arises from thermal dust emission.
This spectral slope would be even shallower
if the background source contributes to the unresolved fraction of the
2 mm flux.
In this case, the 2 mm flux of the foreground galaxy would be lower than the value reported here,
leading to a slope steeper than α^2mm_6cm = -0.02, which is flatter than
that typical of elliptical galaxies.
Assuming a spectral slope of α = -0.7 to account for synchrotron radiation in RXJ1131, we expect
a flux density of S_2mm = 0.122±0.004 mJy at 2 mm.
The flux excess of S_2mm = 0.27±0.08 mJy therefore likely arises from thermal dust emission.
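For reference, the two-point spectral indices and the synchrotron extrapolation above can be reproduced with a few lines (fluxes from photometry; the α = -0.7 synchrotron slope is the assumption stated in the text):

import numpy as np

nu1, nu2 = 4.8815, 139.256                  # GHz: VLA C-band, PdBI 2 mm

def alpha(s1, s2):
    # two-point spectral index for S_nu ∝ nu^alpha
    return np.log(s2 / s1) / np.log(nu2 / nu1)

print(alpha(0.866, 0.799))                  # foreground lens: ~ -0.02
print(alpha(1.273, 0.390))                  # RXJ1131:         ~ -0.35

# pure-synchrotron extrapolation to 2 mm for RXJ1131, assuming alpha = -0.7
print(1.273 * (nu2 / nu1)**(-0.7))          # ~0.12 mJy; the ~0.27 mJy excess is dust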
§.§ Photometry
We compile mid-infrared (MIR) to far-infrared broadband photometry from various
catalogs available on the NASA/IPAC Infrared Science
Archive (IRSA) in photometry, with aperture corrections applied
when warranted. These data were obtained using
the Cerro Tololo Inter-American Observatory (CTIO) for the Two Micron All Sky Survey <cit.>,
the Wide-field Infrared Survey Explorer <cit.>,
the Infrared Astronomical Satellite
<cit.>, and
the Multiband Imaging Photometer (MIPS) <cit.> and
Infrared Array Camera (IRAC) <cit.> on board
Spitzer.
We retrieve PBCD (level 2) Spitzer/IRAC images from the
Spitzer Heritage Archive and perform aperture photometry on
the channel 1 image to extract the flux density at 3.6 μm
since it is not available from the IRSA archive.
The emission in the IRAC images is slightly extended. We thus use an
HST image (∼0″.07 resolution) to determine the
origin of their centroids, all of which are found to be
centered at the position corresponding to the lensed emission from the
background galaxy. To recover the diffuse background emission, we subtract a
point source model centered on the lensing galaxy, using the average
FWHM found by fitting a Gaussian profile to several field stars
with the imexam routine of IRAF.
We perform aperture photometry on the residual image
to obtain decomposed flux measurements of the background galaxy.
The photometry for the foreground galaxy is then obtained
by subtracting the background emission from the
observed total flux. The resulting photometry in
photometry is obtained after performing an aperture correction
described in the IRAC Instrument Handbook[http://irsa.ipac.caltech.edu/data/SPITZER/docs/irac/iracinstrumenthandbook/] to
correct for the fact that the imaging was calibrated
using a 12″ aperture, which is larger than the aperture (5″.8) we used to
perform aperture photometry.
We fit a power-law spectrum to the
decomposed IRAC photometry to disentangle the background and foreground
emission from the total flux observed in the MIPS 24 μm band.
The spectral indices corresponding to the best-fitting curves are α = -1.8 and
α = -0.85 for the lensing galaxy and RXJ1131, respectively.
The latter
is consistent with the mean 3.6–8 μm
spectral slope of
α = -1.07 ± 0.53 found for unobscured AGN
<cit.>. An extrapolation of the fit to 24 μm
yields 33.96 ± 0.01 mJy and 25.19 ± 0.03 mJy
for the foreground galaxy and RXJ1131, respectively.
The uncertainties are the standard deviations of
the extrapolated fluxes obtained from two independent Monte Carlo
simulations, each of 500 iterations.
We incorporate the decomposed 24 μm data in our
SED fitting to provide some constraint on
the Wien tail beyond the dust peak
of the SED of RXJ1131.
Details of the SED modeling are presented in SED.
Extraction of the Herschel/SPIRE photometry at 250, 350, and 500 μm was
carried out using sussextractor within the Herschel Interactive
Processing Environment <cit.>
on Level 2 maps obtained from the Herschel Science Archive.
These maps were processed by the SPIRE pipeline
version 13.0 within HIPE. The sussextractor task estimates
the flux density from an image convolved with a kernel
derived from the SPIRE beam. The flux densities
measured by sussextractor were confirmed by
using the Timeline Fitter, which performs photometry
by fitting a 2D elliptical Gaussian to the Level 1 data at the
source position given by the output of sussextractor. The fluxes
obtained from both methods are consistent within the uncertainties.
Table: Photometry data.

Wavelength (μm)   Frequency (GHz)   Flux Density (mJy)   Instrument

Combined/Unresolved:
1.25 239834 1.009 ± 0.090 CTIO/J-Band
1.65 181692 1.448 ± 0.121 CTIO/H-Band
2.17 138153 2.064 ± 0.160 CTIO/Ks-Band
3.4 88174.2 7.027 ± 0.142 WISE/W1
3.6 83275.7 5.618 ± 0.002 Spitzer/IRAC
4.5 66620.5 7.803 ± 0.002 Spitzer/IRAC
4.6 65172.3 8.872 ± 0.163 WISE/W2
5.8 51688.4 10.720 ± 0.005 Spitzer/IRAC
8.0 37474.1 14.470 ± 0.004 Spitzer/IRAC
12 24982.7 21.960 ± 0.425 WISE/W3
12 24982.7 <400 IRAS
22 13626.9 55.110 ± 1.878 WISE/W4
24 12491.4 70.204 ± 0.026 Spitzer/MIPS
25 11991.7 < 500 IRAS
60 4996.54 < 600 IRAS
100 2997.92 < 1000 IRAS
250 1199.17 289.4 ± 9.6 Herschel/SPIRE
350 856.55 168.2 ± 8.6 Herschel/SPIRE
500 599.585 56.8 ± 8.8 Herschel/SPIRE
1387.93 216 <2.49 CARMA
2152.82 139.256 1.23 ± 0.22 PdBI
Foreground Lensing Galaxy (deblended bands):
0.555 540167 0.056 ± 0.006 HST-ACS/V-Band
0.814 368295 0.238 ± 0.013 HST-ACS/I-Band
1.6 187370 0.539 ± 0.041 HST-NICMOS(NIC2)/H-Band
3.6 83275.7 0.585 ± 0.003 Spitzer/IRAC
4.5 66620.5 1.794 ± 0.003 Spitzer/IRAC
5.8 51688.4 3.163 ± 0.006 Spitzer/IRAC
8.0 37474.1 4.589 ± 0.006 Spitzer/IRAC
2152.82 139.256 0.799 ± 0.082 PdBI
61414 4.8815 0.866 ± 0.027 VLA
Background Galaxy RXJ1131 (deblended bands):
0.555 540167 0.009 ± 0.004 HST-ACS/V-Band
0.814 368295 0.041 ± 0.005 HST-ACS/I-Band
1.6 187370 0.133 ± 0.004 HST-NICMOS(NIC2)/H-Band
3.6 83275.7 5.034 ± 0.002 Spitzer/IRAC
4.5 66620.5 6.009 ± 0.002 Spitzer/IRAC
5.8 51688.4 7.557 ± 0.003 Spitzer/IRAC
8.0 37474.1 9.881 ± 0.004 Spitzer/IRAC
2152.82 139.256 0.39 ± 0.12 PdBI
61414 4.8815 1.273 ± 0.042 VLA
The IRAC photometry for channel 1 (3.6 μm) is extracted directly from the image, and
from the Spitzer Heritage Archive for channels 2-4 (4.5, 5.8, and 8.0 μm). The flux uncertainties quoted for the radio and mm observations (PdBI, CARMA, and VLA) do not include those from absolute flux calibration.
All upper limits are 3σ.
^a Flux obtained using aperture photometry after subtracting the emission of RXJ1131 from the total emission.
^b A contribution from the quasar has been removed (see C06), and thus the flux density corresponds to the host galaxy only.
^c Flux extracted from the residual map after subtracting a point-source model. For SED modeling, we use S_2mm = 0.27±0.08 mJy to exclude synchrotron emission (see deblend).
The HST photometry is adopted from C06.
§ ANALYSIS
§.§ Lens Modeling
At the angular resolution of the CO(2-1) data, the source is resolved
over ≳2 resolution elements.
Given the extent of the lensed emission (see CO21mom),
this implies that we do not resolve
structures (e.g. knots and arcs) of the lensed emission
in our data.
Nevertheless, the high spectral
resolution of these data provides kinematic information on
spatial scales smaller than the beam (see CO21mom).
Hence, we reconstruct the intrinsic line profile and source-plane velocity structure
by carrying out a parametric lens modeling over different
channel slices of the interferometric data using our lensing code
(; see for details of the code).
Our approach follows a similar strategy as <cit.>, who reconstruct a source-plane
velocity gradient and constrain the gas dynamics in the z > 4 quasar host galaxy of
PSS J2322+1944,
which is also lensed into an Einstein ring configuration.
To ensure adequate SNR for lens modeling, we bin the frequency channels by a factor of five
to produce seven independent Δv ∼ 105 km s^-1 channels (dashed lines in delensed)
that cover the full linewidth of ∼750 km s^-1.
Table: Lens parameters constrained by models of seven velocity channels.

Parameter                  Median value
Offset in R.A. (″)         0.004±0.027
Offset in Dec. (″)         0.003±0.027
Axial Ratio                0.56±0.16
Position Angle (deg)       103±22
Einstein Radius (″)^a      1.833±0.002

^a This corresponds to a mass of M(θ < θ_E) = (7.42±0.02)×10^11 M_⊙ within the Einstein radius.
Parameters describing the foreground lens are obtained from the medians of the preliminary models (see text for details).
All angular offsets are with respect to α = 11^h31^m51^s.44, δ = -12°31′58″.3 (J2000).
We model the lens mass distribution using a singular isothermal
ellipsoid (SIE) profile, which is described by five free parameters: the
positional offset in R.A. and Dec. relative to an arbitrary chosen
fixed coordinate in the image, the Einstein radius, the axial ratio, and the
position angle.
Positional offset between the foreground galaxy and the pre-defined coordinate is initialized
using the VLA radio continuum map.
We impose a
uniform prior of ±0″.05 in both ΔR.A. and ΔDec.,
motivated by the astrometric uncertainties in the VLA image as well as
the uncertainties provided by a previous SIE lens model <cit.>.
We initialize the Einstein radius based on the model parameters reported by
and impose a uniform prior using ±3σ of their uncertainties.
The sources are modeled using elliptical Gaussian profiles, which are
parameterized by six free parameters: the positional offset in R.A.
and Dec. relative to the lens, the intrinsic flux density, the effective
radius, the axial ratio, and the position angle. The position of each source
is allowed to vary within ±1″.5 (i.e., within the Einstein radius),
and the effective radius is allowed to vary from 0″.01 to 2″.
Our code uses a Markov Chain Monte Carlo (MCMC) approach to sample the
posterior probability distribution function (PDF) of the model parameters.
In each model, we require a target acceptance rate of ∼0.25-0.5
and check for chain convergence by inspecting trace plots
and by requiring that the samples are obtained beyond at least an autocorrelation time.
We thus employ ∼50,000 samples as the initial “burn-in” phase
to stabilize the Markov chains (which we then discard) and
use the final ∼5,000 steps, sampled by 128 walkers, to identify
the posterior. Here, we
identify the best-fit model and the quoted uncertainties using the
median and the 68% confidence intervals in the marginal PDFs.
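A minimal sketch of this sampling setup, using the publicly available emcee package, is shown below; the likelihood is a stand-in toy function (the actual code evaluates lensed model images against the data), and the prior bounds simply mirror those quoted above:

import numpy as np
import emcee

# SIE lens parameters: (dRA["], dDec["], Einstein radius ["], axial ratio, PA[deg]).
lo = np.array([-0.05, -0.05, 1.80, 0.10,   0.0])
hi = np.array([ 0.05,  0.05, 1.87, 1.00, 180.0])

def chi2_of_lens_model(theta):
    # Stand-in for the real image-plane chi^2 of the lensed CO model;
    # a quadratic toy surface keeps this sketch self-contained and runnable.
    return np.sum(((theta - 0.5 * (lo + hi)) / (0.1 * (hi - lo)))**2)

def log_prob(theta):
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf                      # uniform prior
    return -0.5 * chi2_of_lens_model(theta)

nwalkers, ndim = 128, 5
p0 = lo + (hi - lo) * np.random.rand(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
state = sampler.run_mcmc(p0, 5000)          # burn-in (50,000 steps in the text)
sampler.reset()
sampler.run_mcmc(state, 1000)               # production (5,000 steps in the text)
best = np.median(sampler.flatchain, axis=0) # median = quoted best-fit values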
We first obtain a preliminary lens model for each channel slice independently,
where their lens parameters are allowed to vary and are initialized according
to the aforementioned way. We obtain the final model
by repeating the modeling over each slice but fixing their lens parameters
to the overall median in the preliminary models,
as listed in lens.
This ensures that all models share the same lens profile.
The magnification factors in model are determined by taking the ratio
between the image plane flux and the source plane flux of each model.
Our model parameters in lens, describing
the mass distribution of the lensing galaxy, are consistent (within the uncertainties)
with those of the SIE model presented by <cit.>. We find a mass of
M(θ < θ_E) = (7.47 ± 0.02) × 10^11 M_⊙ within the Einstein radius.
§.§.§ Interpretation of the Source-plane Morphology
The reconstructed source locations, as represented by magenta ellipses in model, demonstrate
an intrinsic velocity gradient across the source plane, which is
consistent with a kinematically-ordered disk-like galaxy.
Additional support for the disk conjecture
comes from the double-horned line profile (CO21spec)
and the observed (image-plane) velocity field (CO21mom). Furthermore,
<cit.> also find that the reconstructed source-plane emission in the optical-NIR
is best reproduced using an n = 1 Sersic profile.
We thus interpret RXJ1131 as a disk galaxy.
A better fit is found for the lens model of
the reddest channel if we add a second source component (see
top left panel in model). This is consistent with previous results
reported by <cit.>, who find an optically faint companion
(component F in their paper) ∼2.4 kpc in projection from the AGN host galaxy in V-band,
and with <cit.>, who find evidence for an interacting galaxy near RXJ1131.
Spatially, the red velocity component of the CO emission
is also consistent with this component F. It is therefore likely that we
detect CO emission in a companion galaxy.
We decompose the total line flux into two components:
one from RXJ1131 and the other from its companion.
Since the companion is only detected in the red-most channel, we
derive its intrinsic gas mass using the best-fit flux
densities and magnification factors obtained from the models of this channel.
Assuming a brightness temperature ratio
of r_21 = 1 between the CO(2-1) and CO(1-0) lines and
a CO luminosity-to-H_2 mass conversion factor of
α_CO = 0.8 M_⊙ (K km s^-1 pc²)^-1, we find
a molecular gas mass of M_gas = (1.92±0.09)×10^9 M_⊙.
For the molecular gas mass in RXJ1131, we derive
its intrinsic line flux over the FWZI linewidth
using the respective magnification
factors listed in model, which to
first order takes into account the effect of differential lensing.
This yields I_CO(2-1) = 2.93±0.70 Jy km s^-1,
where the uncertainty includes those on
the magnification factors.
Adopting the same brightness temperature ratio and α_CO as
used for the companion, this corresponds to a gas mass of
M_gas = (1.38±0.33)×10^10 M_⊙, which
implies a gas mass ratio of ∼7:1 between RXJ1131 and its companion.
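For concreteness, the conversion from a de-lensed line flux to a gas mass can be sketched as follows, using the standard CO line luminosity relation of Solomon & Vanden Bout (2005) and the cosmology adopted in this paper; the companion flux shown is the value implied by its quoted gas mass, not an independently tabulated number:

from astropy.cosmology import FlatLambdaCDM

# CO line luminosity (Solomon & Vanden Bout 2005):
#   L'_CO = 3.25e7 * I_CO * nu_obs^-2 * D_L^2 * (1+z)^-3  [K km/s pc^2]
# with I_CO in Jy km/s, nu_obs in GHz, and D_L in Mpc.
cosmo = FlatLambdaCDM(H0=69.32, Om0=0.29)      # WMAP9 values used in this paper

z, nu_obs = 0.6541, 139.256                    # CO(2-1), observed frame
D_L = cosmo.luminosity_distance(z).value       # Mpc

def gas_mass(I_co, alpha_co=0.8, r21=1.0):
    # M_gas in Msun from a de-lensed CO(2-1) flux in Jy km/s
    Lp = 3.25e7 * I_co * nu_obs**-2 * D_L**2 * (1.0 + z)**-3
    return alpha_co * Lp / r21

print(gas_mass(2.93))   # ~1.4e10 Msun for RXJ1131
print(gas_mass(0.41))   # ~1.9e9 Msun; flux implied by the companion's quoted mass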
The spatial resolution of the data in hand
is a few arcsec, which implies that despite the high SNR and spectral
resolution, constraints on the intrinsic sizes of the lensed galaxies are modest, and thus the magnification
factors may be under-predicted <cit.>.
§.§.§ Spatial Extent and Differential Lensing
In the image-plane integrated line map shown in CO21mom, the redshifted component is
cospatial with the Einstein ring that is seen in the
optical image, with most of its apparent flux originating along the lensing arc,
whereas the centroid of the blueshifted emission is offset to the SE of
the lensing arc. This suggests that the CO-emitting region in RXJ1131 is extended.
To further illustrate this, we show
channel maps of 21.5 km s^-1 width and a map of spatially resolved spectra at 1″.5 resolution in
chanmap and spatialSpec, respectively. These figures
show that redshifted emission
is present to the west, peaking toward the lensing arc (black crosses in
chanmap), and shifts to the east with decreasing velocity
(blue wing).
This is consistent with the source plane positions in our models and
is suggestive of an extended CO emitting region.
Table: Magnification factors of various kinematic components in CO(2-1).

Velocity Range (km s^-1)   Source 1 μ_L   Source 2 μ_L
 346 to  260               6.7 ± 2.5      7.2 ± 5.6
 238 to  153               7.6 ± 1.6
 131 to   45               8.7 ± 2.0
  24 to  -62               4.1 ± 0.9
 -84 to -170               4.2 ± 0.6
-191 to -277               4.3 ± 2.4
-300 to -385               3.1 ± 0.9
weighted average           4.4
median                     5.5

The first column gives the rest-frame velocity ranges, taken from the centers of the unbinned channels
(see delensed). Each row corresponds to a (binned) channel slice used for
lens modeling. Source 1 is RXJ1131 and source 2 is its companion.
Previous studies of RXJ1131 find evidence for differential lensing across
the HST V-, I-, and H-bands, where the
magnification factor varies from 10.9 to 7.8 .
This indicates that the emission from different stellar populations
within the host galaxy have various spatial extents and positions with respect to the caustic.
The best-fit lens models obtained here for different CO channels show that differential lensing also plays
an important role in the observed emission, with a magnification factor (μ_ L) that varies
from 8.7 to 3.1 across different kinematic components (model).
The asymmetry in the line profile (CO21spec and delensed) is therefore predominantly a result of
the redshifted CO-emitting gas being more strongly-magnified
than the
blueshifted component.
A secondary reason is likely due to
the inclusion of the emission of the companion in the most redshifted velocity channels.
The variation in μ_ L found across channels is consistent with the source plane
positions relative to the caustics in model, where the red wing
emission mainly originates near the cusp
of the caustic and the blue wing emission is located beyond the caustics.
In fact, the intrinsic line fluxes of the redshifted and
blueshifted emission in RXJ1131 (after subtracting the contribution from the companion)
are I_CO(2-1) = 1.26±0.23 and 1.25±0.23 Jy km s^-1, respectively,
implying an intrinsically symmetric line profile (delensed). This is consistent with the symmetric source-plane
velocity gradient in our lens model (model and PV).
§.§ Kinematics
Fitting two Gaussians with a common FWHM
to the “intrinsic” line profile of RXJ1131 (after correcting for lensing using
the magnification factors for the various channels and separating the emission of RXJ1131 from that of its companion),
we find a roughly symmetric double-horned profile with a flux ratio of 1.2±0.4 between the peaks, which
are separated by
Δv_sep = 387±45 km s^-1, and each with a
FWHM of 220±72 km s^-1.
The peak separation obtained from this “intrinsic” line profile is
slightly lower than that obtained from the observed spectrum (without lensing corrections).
This discrepancy is likely a result of differential lensing, which causes the line peak of the red wing
to shift towards higher velocity channels, and thereby biasing the centroid of
one Gaussian to higher velocity than otherwise.
To facilitate a comparison (sizes) with previous works, which were observed at lower spectral resolution,
we also fit a single Gaussian to the intrinsic line profiles.
This yields FWHMs of 600±160 km s^-1 for RXJ1131
and 73±43 km s^-1 for the companion galaxy.
A clear velocity gradient and a high
velocity dispersion (≳400 km s^-1) near the central region
are seen in CO21mom. While beam smearing is inevitably the
dominant factor in the observed velocity dispersion
at the spatial resolution of these data, the exceedingly
high velocity dispersion may hint
at potential perturbations from the AGN, or internal turbulence due to
interactions with the companion, and/or instability due to the large gas
content.
Therefore, in this scenario, RXJ1131 is
consistent with a disrupted disk galaxy hosting an optically
bright quasar and in the process of merging.
§.§ Dynamical Modeling
As discussed in caveat, we interpret RXJ1131 as a disk galaxy as it displays
a kinematically-ordered velocity gradient in the source-plane velocity map of the CO emission,
a symmetric double-horned line profile (delensed, model and, left panel of PV),
and a disk-like morphology in the source-plane reconstruction of the optical-NIR emission
.
We extract a one dimensional position-velocity (PV) profile
by assuming that the source-plane centroids of different velocity components
obtained from dynamical lens modeling
are dominated by the tangential component of the
true velocity vector of a rotating disk, i.e., each velocity component would
be seen as lying along the major axis of a rotating disk if observed with sufficiently high angular resolution
(see right panel of PV).
In this process, the positions for each velocity component (plotted as data points in the right panel of PV)
are extracted along the best-fit major axis, which lies along a PA of 121°.
We then attempt to characterize the molecular gas kinematics using an
empirically-motivated disk model <cit.>:
V(R) = V_0 + (2/π) V_a arctan(R/R_t),
where V is the observed velocity, V_0 is the velocity at dynamical center,
V_a is the asymptotic velocity, and R_t is the “turnover”
radius at which the rising part of the curve begins to flatten.
We perform non-linear least squares fitting using an orthogonal distance
regression to find the best-fit parameters,
taking into account the uncertainties in both velocity (channel width) and
distance offset. We also place an upper limit on R_t <15 kpc
to keep this parameter physical <cit.>.
The parameter uncertainties are inferred based on a Monte Carlo simulation
of 500 iterations, where the input parameters are perturbed
according to random Gaussian distributions with standard deviations
corresponding to their uncertainties.
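A minimal sketch of this fitting procedure with scipy's orthogonal distance regression is given below; the (R, V) data points are hypothetical stand-ins for the source-plane PV extractions described above:

import numpy as np
from scipy.odr import ODR, Model, RealData

# Arctangent rotation-curve model: V(R) = V0 + (2/pi) Va arctan(R/Rt)
def vrot(beta, R):
    V0, Va, Rt = beta
    return V0 + (2.0 / np.pi) * Va * np.arctan(R / Rt)

# Hypothetical stand-ins for the source-plane PV extractions:
# major-axis offsets (kpc) and velocities (km/s), with x/y uncertainties.
R = np.array([-6.0, -3.5, -1.0, 1.0, 3.5, 6.0])
V = np.array([-300.0, -230.0, -90.0, 95.0, 240.0, 303.0])
data = RealData(R, V, sx=np.full_like(R, 1.5), sy=np.full_like(V, 52.5))

fit = ODR(data, Model(vrot), beta0=[0.0, 300.0, 2.0]).run()
print(fit.beta, fit.sd_beta)   # (V0, Va, Rt) and their standard errors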
Using this model, we find V_a = 988 ± 618 km s^-1,
R_t = 10.9 ± 7.8 kpc, and V_0 = 0 ± 9 km s^-1.
However, since the emission is not resolved along the flat regime
of the rotation curve, the asymptotic velocity
and the “turnover” radius are poorly constrained.
In particular, V_a and R_t are highly correlated with a
Pearson coefficient R = 0.998, and -0.400 between V_a and V_0.
The asymptotic velocity (V_a) — an extrapolation of the model
out to radius beyond the disk scale-length and half-light radius —
is not equivalent to the maximum observed velocity (V_ max),
which is commonly used in literature to parameterize disk rotation.
The arctangent model is most commonly used in studies of the
Tully-Fisher relation, where an extrapolation to V_2.2 (velocity
at 2.2 disk scale-length or ∼1.375 half-light radius,
or ∼0.7R_ opt[Radius enclosing 83% of the light
distribution.]) is typically adopted
as the rotation velocity (V_ max in their
terminology), since this corresponds to the radius at which the velocity
of a pure exponential disk peaks <cit.>.
Here, we adopt the maximum observed velocity
V_rot = 303 ± 55 km s^-1 at 6 ± 3 kpc
from the
dynamical center as a proxy for the rotation velocity.
This radius corresponds to ∼0.6 R_e, where R_e is the half-light
radius of ∼10.3 kpc inferred from the HST I-band
lens model (<cit.>; converted to
our cosmology).
We note that the source-plane half-light radius varies substantially with
wavelength. In particular, the half-light radius is found to be
∼4 kpc and ∼7 kpc in the V-band
and H-band <cit.>, respectively.
The CO gas is thus of similar spatial
extent as the emission in the H- and I-bands.
In the rest-frame,
emission in the observed-frame H-band corresponds to NIR emission (∼1 μm),
tracing radiation from the accretion disk surrounding
the central AGN and also from old and evolved stellar populations;
I-band corresponds to roughly the optical V-band, tracing stellar radiation from
existing, less massive (longer-lasting) stars;
V-band corresponds to roughly U-band, tracing radiation from massive young stars
in the host galaxy. Hence,
the relative compactness observed in the V-band may be explained in part
due to the fact that the emission in this band is
more susceptible to dust extinction than in other bands and/or dominated by
a central starburst caused by higher
concentrations of star-forming gas towards the central regions — owing to
gravitational perturbations induced
from interactions with the companion
<cit.>.
This would be consistent with the picture that old stars form first and constitute the bulge component
of a spiral galaxy, and that nuclear starbursts (in the inner few kpc) can be triggered
at a later time as the progenitor disk galaxy interacts with other galaxies to form a larger bulge.
§.§ Dynamical Mass
Assuming the gas to be virialized,
the dynamical mass can be approximated by
M_dyn ≈ σ² R / G,
where σ is the velocity dispersion, or the rotational velocity in the case of a rotating disk model
(σ = V_rot sin i).
Using a rotational velocity of V_rot sin i = 303 km s^-1 (see dynamics),
we find a dynamical mass of
M_dyn sin² i (< 6 kpc) = 1.3×10^11 M_⊙ enclosed
within the CO-emitting region in RXJ1131.
If we instead take half the
line peak separation (Δv_sep/2 ∼ 200 km s^-1) as the rotation velocity, we find
M_dyn sin² i (< 6 kpc) = 5.8×10^10 M_⊙.
We derive an inclination angle of 56.4° from the
morphological axial ratio of b/a ∼ 1″.8/3″.25, which we estimate
from the source-plane image reconstructed by <cit.> (Figure 3 in their paper).
This corresponds to an inclination-corrected dynamical mass of
8.3×10^10 M_⊙ < M_dyn < 2.5×10^11 M_⊙.
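These numbers follow from a short calculation (a sketch; the upper bound quoted above additionally folds in the +1σ uncertainty on V_rot):

import numpy as np

G = 4.301e-6                                # kpc Msun^-1 (km/s)^2

def m_dyn(v_sini, R_kpc, inc_deg):
    # M_dyn = V^2 R / G, de-projected with V = (V sin i)/sin(i)
    return (v_sini / np.sin(np.radians(inc_deg)))**2 * R_kpc / G

i = np.degrees(np.arccos(1.8 / 3.25))       # ~56.4 deg from cos(i) = b/a
print(m_dyn(200.0, 6.0, i))                 # ~8e10 Msun (half the peak separation)
print(m_dyn(303.0, 6.0, i))                 # ~1.8e11; +1 sigma on V gives ~2.5e11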
Our estimate should be considered at best an upper limit, since
the gas in RXJ1131 is unlikely to be fully virialized.
In the following sections, we use the
lower limit of (8.3±1.9)×10^10 M_⊙ as the dynamical mass, as it is
derived in a manner similar to what is commonly used in the literature
(<cit.>).
Using the velocity dispersion (σ = 30 km s^-1) obtained by fitting a single Gaussian to the
de-lensed line profile of the companion and a
half-light radius of R_CO = 4.2±2.8 kpc from the best-fit lens model,
we find a dynamical mass of
M_dyn = (3.5±2.3)×10^9 M_⊙ for the companion,
assuming an inclination angle of i ≈ 30°.
The uncertainty here only includes that of the CO source size.
On the other hand,
we find M_dyn sin² 30° = 5.8×10^8 M_⊙ if we adopt the better-constrained V-band source size of ∼700 pc <cit.>.
Since the V-band based dynamical mass measurement is substantially lower than the gas mass,
the V-band emitting region may appear to be much smaller than its true extent due
to dust obscuration.
The CO-based dynamical mass estimates correspond to a mass ratio of ∼24:1
between RXJ1131 and the companion, with a gas mass ratio of ∼7:1 derived in caveat.
We thus classify the system as a gas-rich, “wet” minor merger.
§.§ SED Modeling
We fit dust SED models to the 24 μm–2.2 mm photometry
using a modified blackbody (MBB)
function with a power law attached on the Wien side to account for the MIR excess due to
emission from warm and small dust grains.
The IRAS 60 and 100 μm upper limits are included to constrain the dust peak.
Here, we use a flux density of S_2mm = 0.27±0.08 mJy, derived in deblend,
instead of the deblended flux listed in photometry,
to exclude a potential contribution from synchrotron emission (see deblend) in the dust SED modeling.
An absolute flux calibration uncertainty of ∼15% is added in quadrature
to the PdBI 2 mm continuum photometry in our fitting procedure.
The fit is performed using the code
mbb_emcee <cit.>, which samples the posterior
distributions using an MCMC approach and uses instrumental
response curves to perform color correction.
The model is described by five free parameters: the rest-frame characteristic dust
temperature (T_d), the emissivity index (β), the power-law index
(α), the flux normalization at 500 μm (f_norm), and
the observed-frame wavelength at which the emission
becomes optically thick (λ_0). We impose
a uniform prior with an upper limit of 100 K on T_d <cit.>,
a Gaussian prior centered at
1.9 with a standard deviation of 0.3 on β, and a uniform prior with an upper limit of
1000 μm on λ_0.
We check for chain convergence by requiring that the autocorrelation
length of each parameter is less than the number of steps
taken for the burn-in phase (which are then discarded).
Here we report the statistical means
and the 68% confidence intervals in the marginal PDFs
as the best-fit parameters, as listed in SED.
The best-fit models are shown in SED along with the broadband photometry that is listed in photometry.
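A hedged sketch of this functional form (not the actual mbb_emcee implementation, which also handles instrumental response curves and color corrections) is:

import numpy as np

h_over_k = 0.047992                      # h/k_B in K GHz^-1

def planck(nu, T):
    # B_nu up to a constant; nu in GHz, T in K
    return nu**3 / np.expm1(h_over_k * nu / T)

def mbb_powerlaw(nu, T_rest, z, beta, alpha, lam0_um, fnorm, lam_norm_um=500.0):
    # Observed-frame SED: optically-thick greybody (1 - e^-tau) B_nu(T) with
    # tau = (nu/nu0)^beta, normalized to fnorm at lam_norm_um, and a nu^-alpha
    # power law spliced onto the Wien side where the logarithmic slopes match.
    T = T_rest / (1.0 + z)               # observed-frame temperature
    nu0 = 299792.458 / lam0_um           # GHz; frequency where tau = 1
    gb = (1.0 - np.exp(-(nu / nu0)**beta)) * planck(nu, T)
    nu_n = 299792.458 / lam_norm_um
    gb *= fnorm / ((1.0 - np.exp(-(nu_n / nu0)**beta)) * planck(nu_n, T))
    slope = np.gradient(np.log(gb), np.log(nu))
    j = np.argmin(np.abs(slope + alpha)) # splice point (slope crosses -alpha once)
    return np.where(nu > nu[j], gb[j] * (nu / nu[j])**(-alpha), gb)

nu = np.geomspace(50.0, 2.0e4, 4000)     # observed-frame GHz
sed = mbb_powerlaw(nu, T_rest=54.0, z=0.654, beta=1.6, alpha=1.6,
                   lam0_um=559.0, fnorm=55.0)   # best-fit values from the table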
Table: SED fitting results.

Parameter               With 24 μm             Without 24 μm
T_d (K)                 54^{+8}_{-10}          55^{+21}_{-20}
β                       1.6^{+0.5}_{-0.4}      2.2^{+0.3}_{-0.3}
α                       1.6^{+0.5}_{-0.6}      8.5^{+7.0}_{-6.2}
λ_0 (μm)^a              559^{+278}_{-324}      365^{+111}_{-120}
λ_peak (μm)^b           159^{+19}_{-40}        155^{+38}_{-43}
f_norm,500 (mJy)^c      55^{+13}_{-13}         59^{+6}_{-6}
L_FIR (10^12 L_⊙)^d     3.81^{+1.92}_{-1.97}   4.24^{+2.17}_{-2.00}
M_d (10^8 M_⊙)^e        16^{+5}_{-12}          14^{+5}_{-7}

^a Observed-frame wavelength where τ_ν = 1.
^b Observed-frame wavelength of the SED peak.
^c Observed-frame flux density at 500 μm.
^d Rest-frame 42.5–122.5 μm luminosity.
^e Derived assuming an absorption mass coefficient of κ = 2.64 m² kg^-1 at λ = 125.0 μm <cit.>.
Errors reported here are ±1σ. L_FIR and M_d are not corrected for lensing.
In the first model, we attempt to constrain the power-law index by including
the 24 μm data. Based on the resulting posterior PDFs, we find an apparent
IR luminosity (rest-frame 8–1000 μm) of 8.22^{+2.75}_{-2.98}×10^12 L_⊙, a
FIR luminosity (rest-frame 42.5–122.5 μm) of
3.81^{+1.92}_{-1.97}×10^12 L_⊙, and a
dust mass of 16^{+5}_{-12}×10^8 M_⊙, none of which are corrected for lensing magnification.
For the mass absorption coefficient, we adopt
κ = 2.64 m² kg^-1 at rest-frame 125.0 μm
<cit.>.
The dust mass uncertainty does not
include that of the absorption coefficient.
A fit including the MIR 24 μm photometry
likely yields an upper limit on the luminosity due solely to star formation in the AGN host galaxy.
If we instead fit a model excluding this constraint,
two major consequences are immediately apparent.
First, the power-law index is poorly constrained (see SED).
Second, the steep power law implies only a small contribution
from the power-law regime
to the total IR luminosity as compared to the graybody component.
Thus, the luminosity in
this model should, in principle, correspond to a
lower limit on the cold dust emission.
Using the best-fit parameters
for this model, we find a total IR luminosity
(rest-frame 8–1000 μm) of 8.67^{+5.27}_{-5.27}×10^12 L_⊙,
a FIR luminosity of 4.24^{+2.17}_{-2.00}×10^12 L_⊙, and a
dust mass M_dust of 14^{+5}_{-7}×10^8 M_⊙, none of which are lensing-corrected.
Taken at face value, this implies an FIR-to-IR luminosity ratio
of ∼49±38%.
The dust temperature from both models is similar to that of
ULIRGs at 0.6 < z < 1.0 (54 ± 5 K; <cit.>).
not surprising given the lack of constraints in the MIR.
For the subsequent analysis, we adopt the physical quantities
from the first model (with constraints at 24 ).
The choice of SED model does not affect
the derived star formation rate (SFR) given the similar luminosity, and
their dust masses are consistent within the uncertainties.
We correct for lensing using the median magnification
factor (μ_L ≈ 5.5)
from the CO lens models. This yields a
FIR luminosity of (6.9±3.6)×10^11 L_⊙ and
an intrinsic total IR luminosity of ∼1.5×10^12 (5.5/μ_L) L_⊙, implying that RXJ1131 can be classified as a ULIRG.
Assuming a <cit.> initial
mass function (IMF), we find an
SFR_FIR of 120±63 M_⊙ yr^-1 using a
standard conversion <cit.>.
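This conversion can be sketched as follows; the exact calibration constant is our assumption (the widely used Kennicutt 1998 L_FIR scaling), not a value quoted in the text:

# FIR-based SFR, assuming SFR[Msun/yr] = 4.5e-44 * L_FIR[erg/s]
L_sun = 3.846e33                    # erg/s
L_FIR = 6.9e11 * L_sun              # lensing-corrected (this work)
print(4.5e-44 * L_FIR)              # ~120 Msun/yr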
We derive the stellar mass of RXJ1131 by
fitting SED models to the rest-frame UV-to-mm photometry
using the high-z version of the magphys code <cit.>.
Two sets of stellar templates modeled using either the <cit.> or the unpublished
Charlot & Bruzual 2007 stellar population synthesis code
are provided in the magphys package.
We adopt the former set.
To minimize contamination from the quasar, we only fit to the HST, Herschel, and PdBI data,
where both the HST and the PdBI 2 mm photometry are de-blended from the AGN
(see bottom section of photometry).
The input photometry are corrected for lensing using their respective magnification factors
to account for differential lensing (light blue circles in SED).
We thus find a stellar mass of M_* = 2.95^{+1.32}_{-0.86}×10^10 M_⊙, where the quoted value
is the median of the posterior probability distribution and
the uncertainties are derived from the 16th and 84th percentiles.
We note that the models are over-fitted, with a best-fit χ² ≈ 0.41,
which is unsurprising given the sparse sampling of the SED
compared to the number of free parameters.
The resulting dust mass and IR luminosity are consistent with those obtained
from the MBB+power-law models within the uncertainties, albeit with some differences
in the assumptions behind the two methods.
The consistency may be attributed to the large uncertainties arising from the lack of photometric constraints on the models and the fact that the best-fit parameters from the MBB method are similar to those of the magphys method.
§ DISCUSSION
Table: Physical properties of RXJ1131 and its companion.

Parameter                   Unit            Value
r_32                                        0.78±0.37
FWHM_CO(2-1), RXJ1131^a     km s^-1         220±72
FWHM_CO(2-1), RXJ1131^b     km s^-1         600±160
FWHM_CO(2-1), companion^b   km s^-1         73±43
M_gas, RXJ1131              10^10 M_⊙       1.38±0.33
M_gas, companion            10^9 M_⊙        1.92±0.09
R_CO, RXJ1131               kpc             6.2±3.0
R_CO, companion             kpc             4.2±2.8
M_dyn, RXJ1131              10^10 M_⊙       8.3±1.9
M_dyn, companion            10^9 M_⊙        3.5±2.3
f_gas^c                     %               18±4
f_mol^d                     %               34±16
L_IR                        10^12 L_⊙       ∼1.5
L_FIR                       10^11 L_⊙       6.9±3.6
SFR_FIR                     M_⊙ yr^-1       120±63
M_dust                      10^8 M_⊙        ∼3
GDR                                         54±13
τ_depl                      Myr             102±25
M_*                         10^10 M_⊙       3.0±1.0
M_BH^e                      10^7 M_⊙        ∼8
M_BH/M_bulge                %               >0.27^{+0.11}_{-0.08}

All parameters have been corrected for lensing magnification. The physical parameters are derived for RXJ1131 and the companion as a single system unless otherwise stated.
^a From fitting a double Gaussian with a common FWHM to the de-lensed spectrum.
^b From fitting a single Gaussian to the de-lensed spectrum.
^c Excluding systematic uncertainties.
^d Excluding uncertainties in the dynamical masses.
^e <cit.>.
§.§ ISM Properties
In this section, we derive the gas properties of the merging system RXJ1131
and compare them with those reported by <cit.>[The FIR
luminosity in <cit.> is derived from the 60 and 100 μm IRAS fluxes,
using a different definition of
L_FIR: rest-frame 40–500 μm. Following this convention,
we find a FIR luminosity of
L_FIR = (8.8±0.4)×10^11 (5.5/μ_L) L_⊙ and
an SFR of 150±70 M_⊙ yr^-1 for RXJ1131.], which is the largest sample of CO-detected ULIRGs at similar redshift
(0.6 < z < 1.0).
Their results are based on spatially unresolved CO(2-1) and CO(4-3) line observations with the
IRAM 30-m single-dish telescope.
§.§.§ Linewidths and Sizes
The FWHM linewidth of Δv_CO(2-1) ∼ 600±160 km s^-1 found
for RXJ1131 by fitting a single Gaussian
is considerably larger than the statistical average of the sample
(370 km s^-1) and
of local ULIRGs.
Linewidths exceeding 500 km s^-1 are also commonly observed in
high-z starburst galaxies
and high-z quasar host galaxies <cit.>,
which are believed to originate from mergers.
The wide CO linewidth observed in RXJ1131 thus also supports a merger picture.
The CO gas in RXJ1131 extends to ∼6±3 kpc in radius (in the source plane),
which is more
extended than the average of 3.5±2.3 kpc in a sample of disk-like U/LIRGs studied by
<cit.>,
but consistent with their range of 1.1–9.3 kpc.
Our CO size is also consistent with those of high-z
(z > 1) galaxies <cit.> and of
local U/LIRGs in the <cit.> sample (R ≲ 10 kpc).
§.§.§ Gas Mass Fractions and Gas-to-dust Ratio
We find a dynamical gas mass fraction of f_gas = M_gas/M_dyn ≈ 18±4%
and a baryonic gas mass fraction of f_mol = M_gas/(M_gas + M_*) ≈ 34±16% for the merger system (i.e., RXJ1131 and companion).
Recent studies find that the baryonic gas fraction of starburst galaxies has decreased from f_mol ≈ 40% to ≲10% between z ∼ 2 and z ∼ 0 <cit.>,
and from f_mol ≈ 50% to ∼5% over the same redshift range for
“normal star-forming” galaxies <cit.>[These authors use the “Galactic” value of
α_CO ≈ 4.6 M_⊙ (K km s^-1 pc²)^-1 to compute the molecular gas mass.].
Both the dynamical and baryonic gas mass fractions of RXJ1131+companion are thus consistent with the trend of decreasing molecular gas content since z ∼ 2,
which has been suggested as the cause of the decline in sSFR and cosmic star formation history towards z ∼ 0 <cit.>.
Using the lensing-corrected dust mass, we find a galactic-scale
gas-to-dust ratio (GDR) of
54±13.
This would be higher by a factor of two if we were to adopt the dust mass from the other SED fit that is unconstrained at 24 μm.
This GDR is lower than the statistical average of 206
in the <cit.> sample but is well within the broad
range of values measured over their entire sample (∼1-770).
Our ratio is also consistent with those of high-z SMGs
<cit.> and
local ULIRGs <cit.>, but lower than that of the Milky Way by
∼7σ <cit.>.
There are a number of systematic uncertainties associated with the derived gas-to-dust ratio, in particular
the mass opacity coefficient κ,
the α_CO conversion factor, and the brightness temperature ratio r_21.
If we instead use the “Galactic” α_CO value, which may be more appropriate for some ULIRGs <cit.> and minor mergers <cit.>,
the gas mass (and thus the gas-to-dust ratio) would be ∼6 times higher.
We note that this gas mass is physically possible based on the dynamical mass constraints derived in dyn.
On the other hand, we would also obtain a higher gas mass if
we were to assume sub-thermal excitation between the CO(2-1) and CO(1-0) lines.
We also note that the gas-to-dust ratio derived for RXJ1131 may be biased low as the gas is likely to
be more extended than the optically thick dust. Consequently, the overall magnification factor
for the CO gas may be lower than the optically thick dust, which dominates the luminosity.
This would lead to an overestimation of the dust mass
by adopting the CO magnification factor for the dust.
§.§.§ Star Formation Efficiency and specific SFR
To first order, the star formation efficiency
(SFE = L_FIR/M_gas) indicates the star formation rate per unit mass of molecular gas available in a galaxy.
Using the 40–500 μm wavelength range defined
in <cit.> for the far-IR luminosity,
we find an SFE of 58±10 L_⊙ M_⊙^-1,
which is at the low end among other U/LIRGs at z < 0.6
<cit.> but consistent with those of
low-z spiral galaxies <cit.> and of high-z disk-like
galaxies, which are also IR-luminous galaxies with L_IR ∼ 10^12 L_⊙ <cit.>.
This suggests that the merger system is converting gas into stars at an efficiency
similar to those of “normal” star-forming
disk-like galaxies rather than starburst galaxies
<cit.>.
This is in agreement with its disk-like kinematic signatures and its extended molecular gas distribution.
Assuming that star formation continues at the current rate without gas replenishment,
this SFE corresponds to a
gas depletion time of τ_depl = 102±25 Myr.
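These two numbers follow directly from the quantities above (a sketch using the 40–500 μm convention):

# SFE and depletion time for RXJ1131 + companion; values from the text.
L_FIR = 8.8e11                      # Lsun (40-500 um convention)
M_gas = 1.38e10 + 0.19e10           # Msun
SFR = 150.0                         # Msun/yr (40-500 um based)

print(L_FIR / M_gas)                # ~56 Lsun/Msun   (quoted: 58 +/- 10)
print(M_gas / SFR / 1e6)            # ~105 Myr        (quoted: 102 +/- 25)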
The specific star formation rate (sSFR = SFR/M_*) of 4^{+2.6}_{-2.4} Gyr^-1 derived for RXJ1131
is ≲1.5σ above the main sequence according to
the redshift-dependent “main sequence” relation of <cit.>.
Given that RXJ1131 shares a similar star formation rate, efficiency, and CO disk size with other “main sequence” disk galaxies,
the small elevation of its sSFR above the main sequence at z ∼ 0.7 suggests
that the star formation activity in RXJ1131 may be enhanced by interactions with the companion.
The host galaxy of RXJ1131 is an extended disk with low star formation efficiency in a minor merger
system. Removal of angular momentum from the gas via gravitational torques is therefore likely too inefficient to convert the
entire gas disk into a massive stellar bulge.
In this case, the disk component may be retained upon merging with the companion.
This scenario is consistent with results from recent simulations, which suggest that bulge formation may be
suppressed in gas-rich mergers, thereby allowing the formation of large disk galaxies with low bulge-to-disk ratios
<cit.>. This also supports the idea that not all mergers transform into
elliptical galaxies, as in the classical picture <cit.>.
§.§ Systemic Redshift and Velocity Offset
<cit.> report two sets of AGN lines observed in RXJ1131.
The first set of lines is at z ≈ 0.654, including the narrow component of the Balmer lines, the [OIII] lines, and an absorption line; the second set is at
z_s,QSO ≈ 0.658, including the broad component of the Balmer lines and the MgII emission line.
Using the CO line center redshift as the systemic redshift,
we find that the redshift of the first set is fully consistent with the systemic redshift. This
supports previous claims that [OIII] lines, tracing the narrow-line region (NLR),
are good proxies
for the true systemic redshift <cit.>.
On the other hand, the second set of lines is redshifted by ∼715 km s^-1.
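This offset follows directly from the two redshifts (a one-line check):

c = 299792.458                            # km/s
z_sys, z_blr = 0.6541, 0.658
print(c * (z_blr - z_sys) / (1 + z_sys))  # ~707 km/s, i.e. ~715 within rounding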
Velocity offsets between broad line region (BLR) and NLR lines have been reported in literature.
<cit.> find a median offset of
∼100±270 km s^-1 between the MgII and [OIII] lines in a
sample of >3800 quasars,
and <cit.> report
a mean offset of
∼100±210 km s^-1 between the broad component of
Hβ and the [OIII] lines in a sample of ∼2600 quasars at 0.1 < z < 0.8,
where only ≲20 of them (i.e., <1%) are found to have offsets >800 km s^-1 and ∼1%
are found to have offsets >500 km s^-1[<cit.> report the fractions of objects with offset velocities greater
than 500, 800, 1000, 1500, 2000, and 2500 km s^-1. We therefore quote the two fractions corresponding to
the offset velocities closest
to that of RXJ1131 (∼715 km s^-1) in this discussion.].
Thus, large velocity offsets between BLR and NLR lines
comparable to that of RXJ1131 are uncommon but have been observed in some cases.
The observed velocity offset between the BLR and NLR lines may be explained by
a recoiling black hole (BH),
where the BLR is moving at high velocity relative to the bulk of its host galaxy
<cit.>.
Depending on the initial conditions of the black hole pair (e.g., black hole mass ratio, spin-orbit orientation, spin magnitude),
numerical relativity simulations have shown that recoil velocities can reach up to
v_kick ∼ 4000 km s^-1 for spinning BHs,
with typical recoil velocities of v_kick ∼ 100-500 km s^-1 <cit.>.
Several sources have been proposed as recoiling BH candidates <cit.>.
However, <cit.> have recently refuted
such a scenario for one of the candidates, SDSS J0927+2943, by finding that the redshift of its
BLR lines is indeed consistent with its CO systemic redshift.
This is in contrast with RXJ1131, where our CO observations confirm
the redshifted BLR lines compared to the CO systemic redshift.
Since this scenario requires a coalesced BH, it would imply that RXJ1131
is the product of a previous merger, which is not implausible and
might also explain the rapidly spinning BH in RXJ1131 <cit.>.
Alternative scenarios,
e.g., outflow/inflow of gas in the BLR, the viewing angle towards the accretion disk, and
obscuration in a clumpy accretion disk,
are more commonly invoked to explain velocity offsets between the BLR and the systemic redshift.
Since the BLR lines of RXJ1131 show positive velocity offsets with respect to its systemic redshift, this may imply that
the observed
BLR line emission is dominated by gas that is flowing into the central BH, or by the receding component of
the accretion disk, owing to the viewing angle or to obscuration in the accretion disk.
<cit.> report a covering factor of 20% for the accretion disk in RXJ1131
based on its broad absorption line at z = 0.654.
Additionally, the centroids of the BLR lines in RXJ1131 may be biased towards longer wavelengths due to
microlensing <cit.>, which may have
magnified the redshifted component of the compact BLR more strongly than its blueshifted component.
§.§ The M_BH-M_bulge Relation
We find an M_BH/M_bulge ratio of >0.27^+0.11_-0.08%
using the black hole mass of M_BH ∼ 8×10^7 M_⊙ <cit.>
and the stellar mass derived from the SED fitting as an upper limit to the bulge mass.
This ratio is consistent with those of other intermediate-z radio-loud AGNs <cit.>
but is higher than those of nearby AGNs <cit.>.
Our results therefore support
the emerging picture that quasars
grow faster and/or earlier than their host galaxies at higher redshifts <cit.>.
The elevated M_ BH/M_ bulge ratio of RXJ1131 compared to local AGNs
suggests that the bulk of the black hole mass of RXJ1131 is largely in place while its stellar bulge is still assembling.
§ SUMMARY AND CONCLUSIONS
We present PdBI and CARMA observations towards the
quadruply-imaged quasar RXJ1131 at z_CO = 0.654, making this the first
resolved CO study at intermediate redshift.
Using the CO line intensities, we find a brightness temperature ratio of r_32 = 0.78 ± 0.37
between the CO(3-2) and CO(2-1) lines,
consistent with thermalized excitation but also with the lower excitation seen in normal star-forming disks.
We also detect marginally resolved
2 mm continuum emission underlying the line
and resolved radio continuum emission at 5 GHz in archival VLA data
in both the foreground lensing galaxy and RXJ1131.
Based on our lens modeling analysis of different velocity channels,
we find a secondary CO-emitting source near RXJ1131 whose spatial position
is consistent with that of an optically faint companion reported in previous optical studies.
The magnification factor inferred for the CO emission in RXJ1131 is found to
vary from μ_L ∼ 3 to ∼9 across channels. This is indicative of an extended molecular gas
distribution in the host galaxy of RXJ1131, where the different kinematic components
of the gas are magnified inhomogeneously, similar to what was found for the z>4 quasar
PSS J2322+1944 <cit.>.
Upon correcting for lensing magnification and subtracting a contribution from the companion,
we find an intrinsically symmetric double-horned line profile for RXJ1131.
This, together with a symmetric source-plane velocity gradient, argues for a rotating disk in RXJ1131, in good agreement with previous findings. Physical quantities derived for RXJ1131 and the companion throughout this paper are summarized in the table of derived physical properties.
Based on the lensing-corrected line intensities,
we find an intrinsic gas mass of M_gas = (1.38±0.33)×10^10 M_⊙ for RXJ1131
and (1.92±0.09)×10^9 M_⊙ for the companion,
corresponding to a gas mass ratio of ∼7:1.
Using the source-plane size of R ∼ 6 kpc, we find a dynamical mass of M_dyn ∼ 8×10^10 M_⊙ for RXJ1131.
The dynamical gas mass fraction of f_gas = M_gas/M_dyn ∼ 18% and the baryonic gas mass fraction of
f_mol = M_gas/(M_gas + M_*) ∼ 34% are consistent with the trend of decreasing molecular gas
content since z ∼ 2 <cit.>,
which has been suggested as the cause of the decline in the sSFR and the cosmic star formation history towards z ∼ 0 <cit.>.
The CO-based dynamical mass ratio of ∼24:1
between RXJ1131 and the companion, and a gas mass ratio of ∼7:1
suggest that the system is a gas-rich, “wet” minor merger.
Fitting dust SED models to the IR-to-mm photometry, we derive
a lensing-corrected dust mass of M_dust ∼ 3×10^8 M_⊙,
an infrared luminosity of ∼1.5×10^12 (5.5/μ_L) L_⊙,
and a far-IR luminosity that corresponds to SFR_FIR ∼ 120 M_⊙ yr^-1.
These physical properties suggest that the merger system is dusty in nature, with on-going star formation occurring
at a rate comparable to local ULIRGs/mergers and high-z massive disk galaxies <cit.>.
We also derive a stellar mass of M_* ∼ 3×10^10 M_⊙ by fitting SED models to the
rest-frame UV-to-mm photometry, which has been corrected for the respective magnification factors before performing the fit to account for differential lensing effects.
The source-plane distribution of the gas and stellar populations of different ages
indicates that the CO gas is of similar spatial extent as the old and long-lasting stellar populations,
whereas regions of recent star formation may be embedded within the molecular gas reservoir as a result of
gas accumulation driven by interactions with the companion.
Based on dynamical mass constraints, we cannot rule out the possibility that the
compact star formation in the host galaxy
may be heavily dust-obscured.
Hence, the true extent of recent star formation may be as extended as the molecular gas
reservoir.
While properties such as the CO linewidth, SFR, and gas mass found in RXJ1131
are consistent with those of local ULIRGs and high-z starburst galaxies,
its SFE is comparable to those of nearby and high-z disk galaxies rather than
starburst systems. This is in good agreement with its disk-like kinematic signatures and its extended molecular gas distribution.
We find a specific star formation rate (sSFR ∼ 4 Gyr^-1) that is ≲1.5σ higher than those of “main sequence” galaxies.
The slight elevation in sSFR over the main sequence suggests that
the on-going star formation activity in RXJ1131 could be enhanced by interactions with the companion.
Recent simulations have illustrated that the disk component of a gas-rich
progenitor galaxy with low SFE can be
retained upon merging, since the efficiency of removing angular momentum from the gas via
gravitational torques provided by stellar components is reduced in such a system <cit.>.
As such, the extended gas disk of RXJ1131, together with its low SFE, may indicate
that RXJ1131 could form a
larger stellar bulge in the remnant disk galaxy upon coalescing.
This picture is in agreement with the one based on the M_BH-M_bulge relation,
where we find an elevated ratio of >0.27^+0.11_-0.08% for
RXJ1131 compared to the local value.
This suggests that the stellar bulge of RXJ1131 is still assembling in order to evolve onto the local M_BH-M_bulge relation.
We find that the redshift inferred from the NLR lines reported in previous studies is consistent with the systemic redshift as measured
from the CO line,
but that the BLR lines are redshifted by ∼715 km s^-1.
We raise several plausible scenarios that may explain the observed velocity offset: outflow/inflow of gas in the BLR, kinematics of the accretion disk, geometric effects,
microlensing, and a recoiling black hole from a past merger event.
The latter scenario might also explain the high black hole spin parameter of
a = 0.87^+0.08_-0.15 reported by <cit.>, but
further evidence is needed to confirm or rule out this scenario.
Theoretical studies have suggested that negative feedback from an AGN may remove a
large fraction of the molecular gas from its host galaxy, thereby quenching its star formation.
In this study, we find that the star formation efficiency and specific SFR of RXJ1131 are comparable to those of
z ∼ 1-1.5 disk galaxies, which are not known to host quasars, and that its
molecular gas mass fraction is consistent with the observed cosmic decline for star-forming
galaxies since z ∼ 2-3.
Hence, we find no evidence of negative AGN feedback on the cold molecular gas fraction
and on the star formation activity in RXJ1131.
Future observations at higher resolution will allow us to better constrain the molecular gas kinematics and dynamics of
RXJ1131 to investigate any potential interplay with the quasar on smaller scales.
More broadly, systematic studies of the correlations between the molecular gas fraction, stellar mass, and AGN luminosity
at different redshifts
will enable us to better understand the relative importance of AGN feedback and of the
evolution in the molecular gas mass fraction on the decline of star formation history and black hole accretion history.
We thank the referee for providing detailed and constructive comments that helped to improve the clarity of this manuscript.
DR and RP acknowledge support from the National Science Foundation
under grant number AST-1614213 to Cornell University. RP acknowledges
support through award SOSPA3-008 from the NRAO. DR acknowledges the
hospitality at the Aspen Center for Physics and the Kavli Institute
for Theoretical Physics during part of the writing of this manuscript.
This work is based on observations carried out under project number S14BX
with the IRAM NOEMA Interferometer. IRAM is supported by INSU/CNRS (France), MPG (Germany) and IGN (Spain).
Support for CARMA construction was derived from the Gordon and Betty Moore
Foundation, the Kenneth T. and Eileen L. Norris Foundation, the James S.
McDonnell Foundation, the Associates of the California Institute of
Technology, the University of Chicago, the states of Illinois, California, and
Maryland, and the National Science Foundation. Ongoing CARMA development and
operations are supported by the National Science Foundation under a
cooperative agreement and by the CARMA consortium universities.
The National Radio Astronomy Observatory is a facility of the National Science
Foundation operated under cooperative agreement by Associated
Universities, Inc.
This research made use of data obtained with Herschel, an ESA space
observatory with science instruments provided by European-led Principal
Investigator consortia and with important participation from NASA.
This research has made use of NASA's Astrophysics Data System Bibliographic
Services.
This work is based in part on observations
made with the NASA/ESA Hubble Space Telescope, and obtained from the Hubble
Legacy Archive, which is a collaboration between the Space Telescope Science
Institute (STScI/NASA), the Space Telescope European Coordinating Facility
(ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA).
This work is based
in part on observations made with the Spitzer Space Telescope,
which is operated by the Jet Propulsion Laboratory, California Institute of
Technology under a contract with NASA.
This publication made use of data products from the Wide-field Infrared
Survey Explorer, which is a joint project of the University of California, Los
Angeles, and the Jet Propulsion Laboratory/California Institute of Technology,
funded by the National Aeronautics and Space Administration.
This publication made use of data products from the Two Micron All Sky
Survey, which is a joint project of the University of Massachusetts and the
Infrared Processing and Analysis Center/California Institute of Technology,
funded by the National Aeronautics and Space Administration and the National
Science Foundation.
This research made use of the NASA/IPAC Extragalactic Database (NED) which
is operated by the Jet Propulsion Laboratory, California Institute of
Technology, under contract with the National Aeronautics and Space
Administration.
This research made use of Astropy, a community-developed core Python package for Astronomy <cit.>.
This research made use of APLpy, an open-source plotting package for Python hosted at <http://aplpy.github.com>.
Facilities: IRAM PdBI, CARMA, VLA, Herschel(SPIRE), WISE, IRAS, 2MASS, Spitzer(IRAC, MIPS), HST(ACS, NICMOS)
|
http://arxiv.org/abs/1701.08067v1 | 20170127144016 | Design of Capacity Approaching Ensembles of LDPC Codes for Correlated Sources using EXIT Charts | [
"Mohamad Khas Mohamadi",
"Hamid Saeedi",
"Reza Asvadi"
] | cs.IT | [
"cs.IT",
"math.IT",
"94B35"
] |
Design of Capacity Approaching Ensembles of LDPC Codes for Correlated Sources using EXIT Charts
Mohamad Khas, Hamid Saeedi, Member, IEEE, and Reza Asvadi, Member, IEEE
M. Khas and H. Saeedi are with the Department of Electrical
Engineering, Tarbiat Modares University, Tehran, Iran (e-mail:
hsaeedi@ieee.org).
R. Asvadi is with the Faculty of Electrical Engineering, Shahid Beheshti University (SBU),
Tehran, Iran. (e-mail: r_asvadi@sbu.ac.ir).
December 30, 2023
==========================================================================================================================================================================================================================================================================================================================================================================
This paper is concerned with the design of capacity approaching ensembles of Low-Density Parity-Check (LDPC) codes for correlated sources. We consider correlated binary sources where the data is encoded independently at each source through a systematic LDPC encoder and sent over two independent channels. At the receiver, an iterative joint decoder consisting of two component LDPC decoders is considered, where the encoded bits at the output of each component decoder are used at the other decoder as a priori information. We first provide an asymptotic performance analysis using the concept of extrinsic information transfer (EXIT) charts. Compared to the conventional EXIT charts devised to analyze LDPC codes for point-to-point communication, the proposed EXIT charts have been completely modified to be able to accommodate the systematic nature of the codes as well as the iterative behavior between the two component decoders. The developed modified EXIT charts are then deployed to design ensembles for different levels of correlation. Our results show that as the average degree of the designed ensembles grows, the thresholds corresponding to the designed ensembles approach the capacity. In particular, for ensembles with an average degree of around 9, the gap to capacity is reduced to about 0.2 dB. Finite block length performance evaluation is also provided for the designed ensembles to verify the asymptotic results.
§ INTRODUCTION
Source and channel encoding/decoding of correlated sources has been the subject of several studies <cit.>. Perhaps the most immediate example of correlated sources is in sensor networks, in which each sensor measures the data, encodes it to bits, and transmits it to a central node for decoding <cit.>. The correlation of the encoded bits comes from the fact that in many cases, several sensors measure the same phenomenon. The closer the sensors are, the larger the degree of correlation will be in most cases.
On the other hand, due to energy limitation in sensors which is enforced to increase the sensor lifetime, it is essential that the data is transmitted with the lowest possible energy while maintaining the required bit error rate. Consequently, channel coding is usually deployed at each sensor prior to transmission. At the central node, the channel decoder should be applied to each block of data received from each sensor. Now if the received bit streams are correlated, it is natural to consider joint channel decoding to take advantage of such a correlation.
Low-density Parity-check (LDPC) codes <cit.> have been widely suggested to be deployed in sensor networks due to their remarkable performance and reasonable decoding complexity <cit.>. For point to point communications, sequences of capacity approaching ensembles over memoryless Gaussian channels have been proposed in <cit.> where for a given code rate, the threshold of the ensemble is numerically shown to approach the Shannon capacity as the average check node degree increases. For the binary erasure channels, capacity achieving sequences of ensembles have been designed and their thresholds have been analytically shown to achieve the capacity <cit.>.
For point-to-point LDPC codes, different tools and techniques have been deployed for ensemble design. The most well-known analysis tool is density evolution (DE) <cit.>. To reduce the design complexity, an important alternative tool known as the Extrinsic Information Transfer (EXIT) chart was proposed in <cit.>, <cit.>, based on the assumption that the exchanged messages between variable nodes (VNs) and check nodes (CNs) of the corresponding Tanner graph can be approximated by consistent[For a consistent Gaussian random variable, the variance of the distribution is twice its mean.] Gaussian random variables.
In this paper, we consider the problem of joint channel decoding of LDPC-encoded correlated binary sources. We consider a simplified model where two sources generate
correlated binary bit streams. Then the streams are fed to an LDPC encoder blockwise and sent through two independent additive white Gaussian noise (AWGN) channels, as shown in Fig. <ref>. The streams of data are then received blockwise by the central node and fed to the joint LDPC decoder proposed in <cit.> to recover the original bit streams. In this decoder, two types of iterations, namely inner and outer iterations, are deployed such that at each outer iteration, the output of one decoder is used as the a priori information of the other decoder, while the inner iterations are performed as in a conventional LDPC message-passing decoder.
Our aim in this paper is to design capacity approaching ensembles of LDPC codes, for which we show that the decoding threshold of the joint decoding of the designed ensembles tends to the capacity limit obtained in <cit.> as the average check node degree of the designed ensembles grows. We in fact obtain tables of degree distributions (similar to those of <cit.> proposed for the point-to-point scenario) for different levels of correlation in the source bits for the first time. The claimed capacity approaching thresholds are then verified by finite block length simulations.
To design the ensembles, we use the concept of EXIT charts. There are, however, important obstacles to applying the original EXIT chart scheme to our case, which are addressed in this paper. First, as there are two decoders in place, the EXIT curve corresponding to the variable nodes of each decoder must be generated taking into account the a priori information from the other decoder. Second, as we consider systematic codes, the corresponding Tanner graph in our case has a two edge-type structure <cit.>, one edge type corresponding to the message bits and one corresponding to the parity bits. Therefore, two types of variable node EXIT curves have to be considered, and the corresponding degree distributions have a different structure than those of conventional non-systematic LDPC codes. We show that with a reasonable average check node degree, we can get to within 0.2 dB of the Shannon limit for different amounts of correlation.
The organization of the paper is as follows. In Section <ref>, basic concepts and notations related to the source and correlation model and the Shannon limit are given. In Section <ref>, we describe the algorithm for iterative joint channel decoding of correlated sources and propose the two edge-type structure for this model. In Section <ref>, we propose the modified EXIT charts for joint iterative LDPC decoding of correlated sources and analyse the performance of regular and irregular LDPC codes. The code design procedure is presented in Section <ref>. The simulation results and numerical examples are summarized in Section <ref>. Finally, Section <ref> draws the conclusion.
§ PRELIMINARIES
§.§ Source and Correlation Model
Consider the two binary memoryless sources (U_1,U_2) that generate binary sequences segmented in blocks of length K and denoted by u_1={u_1,1,u_1,2,…,u_1,K} and u_2={u_2,1,u_2,2,…,u_2,K}.
The bits within each sequence are assumed to be i.i.d. with equal probability of being zero and one <cit.>.
Let z=u_1⊕u_2 be the component-wise modulo-2 addition of the two sources' outputs. The vector z=(z_1,…,z_K) captures the correlation between the two sources and is called the correlation vector. We define the empirical correlation between these two sources as p =γ /K, where γ is the number of zeros in z. Obviously, sequences with the empirical correlation values of p and 1-p have the same entropy value <cit.>. This correlation can be generated by simply passing one of the sequences through a binary symmetric channel (BSC) with transition probability 1-p to generate the sequence for the other source.
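To make the source model concrete, the following short Python sketch (our own illustration; the function name and seeding are not from any reference) generates a pair of length-K blocks with empirical correlation close to a target p by passing one sequence through a BSC with crossover probability 1-p:

import numpy as np

def generate_correlated_sources(K, p, seed=None):
    # u1 is i.i.d. uniform; u2 = u1 XOR e with e ~ Bernoulli(1 - p),
    # so z = u1 XOR u2 contains a fraction ~p of zeros
    rng = np.random.default_rng(seed)
    u1 = rng.integers(0, 2, size=K, dtype=np.uint8)
    e = (rng.random(K) < (1.0 - p)).astype(np.uint8)  # BSC(1 - p) noise
    return u1, u1 ^ e

u1, u2 = generate_correlated_sources(K=20000, p=0.95, seed=1)
print("empirical correlation:", 1.0 - (u1 ^ u2).mean())  # ~0.95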
A joint channel decoding scheme for the correlated sources is considered, where the sources are independently encoded by identical LDPC encoders, i.e., the encoders have no communication. The encoders map a K-bit vector corresponding to u_1 (u_2) to an n_1 (n_2)-bit vector x_1 (x_2). The code rates of the encoders are then equal to R_c_1=K/n_1
and R_c_2=K/n_2, respectively. In what follows, we assume that the code rates are the same (symmetric system), i.e., R_c_1=R_c_2=R_c, or equivalently, n_1=n_2=n
<cit.>. Each source is encoded according to a systematic (n,K) LDPC code. Hence, the generated codeword c is the concatenation of the information bit vector u and the parity bit vector p. Binary phase-shift keying (BPSK) modulation is used before sending a codeword c over the AWGN channel.
§.§ Theoretical limit
To be able to evaluate the performance of the joint-decoding, the Shannon-SW limit is considered <cit.>.
In our simulations, we employ the energy per generated source bit, denoted by E_so, which is related to the energy per information bit, denoted by E_b, and the energy per transmitted symbol, denoted by E_s, as follows <cit.>:
2E_so=H(U_1,U_2)E_b=(1/R_c_1+1/R_c_2)E_s,
where H(U_1,U_2) represents the joint entropy between the two correlated sources and E_s equals 1 in BPSK modulation. Since U_1 and U_2 are uniformly distributed binary sources, the binary entropy of each source is equal 1, i.e, H(U_1)=H(U_2)=1. Then, H(U_1|U_2)=H(U_2|U_1)=h_2(p), where h_2(p) is the entropy of the empirical correlation p and H(U_1,U_2)=H(U_1)+h_2(p). In the considered symmetric system, reliable transmission over a channel pair is possible as long as E_so/N_0 satisfies the Shannon-SW condition <cit.>
E_so/N_0>1/R_c(2^H(u_1,u_2)R_c-1),
where N_0 is the noise power spectral density.
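As a quick sanity check, the Shannon-SW condition above can be evaluated numerically; the sketch below (helper names are ours) computes the threshold [E_so/N_0]_lim in dB for the symmetric case R_c_1=R_c_2=R_c:

import numpy as np

def h2(p):
    # binary entropy function in bits
    if p in (0.0, 1.0):
        return 0.0
    return -p * np.log2(p) - (1.0 - p) * np.log2(1.0 - p)

def sw_limit_db(p, Rc):
    # E_so/N0 > (1/Rc) * (2^(H(U1,U2)*Rc) - 1), with H(U1,U2) = 1 + h2(p)
    H = 1.0 + h2(p)
    return 10.0 * np.log10((2.0 ** (H * Rc) - 1.0) / Rc)

for p in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"p = {p:4.2f}: [E_so/N0]_lim = {sw_limit_db(p, 0.5):6.3f} dB")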
§ SYSTEM MODEL
§.§ Two edge-Type LDPC Codes
In this section, a two edge-type LDPC code and its associated graph are presented. Consider the parity check matrix H_m× n of an LDPC code represented by a Tanner graph <cit.>, denoted by 𝒢=(𝒱,𝒞,ℰ), where 𝒱 and 𝒞 denote the sets of n VNs and m CNs, respectively, and ℰ is the set of edges of the graph. According to 𝒢, we have an edge e={v_i,c_j}∈ℰ, where a VN v_i ∈𝒱 is connected to a CN c_j ∈𝒞 in 𝒢, if and only if h_j,i=1 in H=[h_j,i].
Conventional LDPC codes are described asymptotically by coefficient pairs (λ,ρ) or degree distribution polynomials of the VN and CN as follows:
λ (x)=∑_i=2^D_vλ_i x^i-1, ρ (x)=∑_j=2^D_cρ_j x^j-1,
where D_v and D_c are the maximum VN and CN degrees. The coefficient λ_i (resp. ρ_i) is the fraction of edges that are connected to VNs (resp. CNs) of degree-i.
LDPC codes can be used in a systematic form, and hence their codewords comprise two disjoint source and parity parts. Accordingly, the VNs are divided into two sets: source nodes and parity nodes. Then, the edges connecting to source or parity nodes are called source edges, denoted by ℰ^s, and parity edges, denoted by ℰ^p, respectively. In this paper, we use a family of LDPC codes whose CNs are connected to source nodes via at least one edge. LDPC codes with such a structure are called fully-source-involved LDPC (FSI-LDPC) codes.
Since a joint decoder employs two component LDPC decoders, the associated Tanner graph of the joint decoder consists of three types of nodes. They are called source nodes, denoted by 𝒱^s={v^s_1,v^s_2,…,v^s_n-m}, parity nodes, denoted by 𝒱^p={v^p_1,v^p_2,…,v^p_m}, and CNs, denoted by 𝒞, for each of the LDPC decoders. Moreover, there are state nodes, denoted by 𝒮, which connect the two iterative decoders to exchange extrinsic information. A schematic of the associated Tanner graph of the joint decoder is presented in Figure <ref>. In this figure, information from the source nodes of each decoder is passed to the source nodes of the other decoder via state nodes. Furthermore, we use 𝒢=(𝒱^s,𝒱^p,𝒞,𝒮,ℰ^s, ℰ^p) to denote a two edge-type graph of a joint LDPC decoder.
We follow the notations defined in <cit.> in this paper. Let n^s_i and n^p_i denote the number of source and parity nodes of degree-i, respectively. The total number of source and parity nodes are also given by n^s=∑_i=2^D_vn^s_i and n^p=∑_i=2^D_vn^p_i, respectively. Let n_i=n^s_i+n^p_i denote the number of VNs of degree-i. Hence, the number of VNs n equals ∑_i=2^D_vn_i. Let m_j,k be the number of CNs of degree-j, k edges of which are connected to the source nodes and (j-k) edges of which are connected to the parity nodes. Thus, the number of CNs of degree-j, denoted by m_j, is equal to ∑_k=1^j-1m_j,k. Similarly, the total number of CNs m is determined by ∑_j=2^D_c m_j, and the total number of edges on a two edge-type graph is determined by
E=E^s+E^p=n/(∑_i=2^D_vλ_i/i)=m/(∑_j=2^D_cρ_j/j).
In addition to (λ,ρ), we need to introduce additional coefficient pairs (α,β) to asymptotically describe a two edge-type graph, where α_i=n^s_i/n_i is the fraction of source nodes of degree-i out of all degree-i VNs. Similarly, β_j,k=m_j,k/m_j, where ∑_k=1^j-1β_j,k=1. Now, the source and parity variable degree distribution polynomials are, respectively, defined as follows:
λ^s (x)=∑_i=2^D_vλ_i^s x^i-1, λ^p (y)=∑_i=2^D_vλ_i^p y^i-1,
where
λ_i^s=(n_i^s· i)/E^s=(n_i^s/n_i)·((n_i· i)/E)·(E/E^s) = α_i λ_i/∑_j=2^D_vα_j λ_j,
λ_i^p=(n_i^p· i)/E^p=((n_i-n_i^s)/n_i)·((n_i· i)/E)·(E/E^p)=(1-α_i) λ_i/∑_j=2^D_v (1 - α_j) λ_j.
The source and parity side CN degree distribution polynomials are, respectively, defined as follows:
ρ^s (x,y)=∑_j=2^D_c∑_k=1^j-1ρ_j,k^s x^k-1 y^j-k,
ρ^p (x,y)=∑_j=2^D_c∑_k=1^j-1ρ_j,k^p x^k y^j-k-1,
where
ρ_j,k^s = (m_j,k· k)/E^s = (ρ_j/j)·(β_j,k k/∑_i=2^D_vα_i λ_i),
ρ_j,k^p = (m_j,k· (j-k))/E^p = (ρ_j/j)·(β_j,k (j-k)/∑_i=2^D_v (1-α_i) λ_i).
There are two variables, denoted by “x" and “y", in ρ^s (x,y) and ρ^p (x,y), which indicate the two types of edges incident to each CN. The inner summations of ρ_j,k^s and ρ_j,k^p over k indicate the fractions of degree-j CN edges that are connected to the source and parity VNs, respectively.
The code rate must be the same in the two edge-type graph and its corresponding single edge-type graph. Furthermore, the number of edges emanating from each type of variable and check node must be the same as well. Hence, the following conditions <cit.> must be satisfied:
∑_i=2^D_vα_i λ_i=∑_j=2^D_c(ρ_j/j)∑_k=1^j-1β_j,kk,
∑_i=2^D_v(α_i λ_i/i)=R ∑_i=2^D_v(λ_i/i).
Note that a symmetric two edge-type graph corresponding to a joint decoder can be described by (λ, ρ, α, β).
Moreover, an ensemble corresponding to (λ, ρ, α, β) can be obtained from a single edge-type ensemble (λ, ρ) by partitioning the VNs and their associated edges connected to the CNs according to (α,β) distribution.
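The mapping between the single edge-type pair (λ, ρ) and the two edge-type description can be scripted directly from the definitions above. The following sketch (a helper of our own, using dictionaries indexed by degree) computes λ^s, λ^p and the edge fractions E^s/E and E^p/E:

def split_degree_distribution(lam, alpha):
    # lam[i]: edge-perspective fraction lambda_i; alpha[i] = n_i^s / n_i
    gamma_s = sum(alpha[i] * lam[i] for i in lam)        # E^s / E
    gamma_p = sum((1 - alpha[i]) * lam[i] for i in lam)  # E^p / E
    lam_s = {i: alpha[i] * lam[i] / gamma_s for i in lam}
    lam_p = {i: (1 - alpha[i]) * lam[i] / gamma_p for i in lam}
    return lam_s, lam_p, gamma_s, gamma_p

# (3,6)-regular example: half of the degree-3 VNs are source nodes
lam_s, lam_p, g_s, g_p = split_degree_distribution({3: 1.0}, {3: 0.5})
print(lam_s, lam_p, g_s, g_p)  # symmetric split, gamma_s = gamma_p = 0.5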
§.§ Iterative Joint LDPC Decoding
We assume that the data blocks corresponding to sources U_1 and U_2, i.e., u_1 and u_2, are encoded through the systematic LDPC encoders into x_1 and x_2, respectively. The encoded bits are transmitted over two independent AWGN channels. At the receiver, we receive vectors r_1=x_1+n_1 and r_2=x_2+n_2,
where n_i ∼𝒩(0, σ_n^2) for i=1,2.
Let û_i, i=1,2, denote the decoded bits of the i-th decoder. The joint receiver employs the empirical estimate of the correlation parameter to benefit from the intra-source correlation. The estimate of the correlation vector, denoted by ẑ, is calculated as ẑ=û_1⊕û_2.
The joint decoder is composed of two parallel LDPC decoders working based on the sum-product (SP) algorithm <cit.>, where each decoder also accepts an estimate of the transmitted bits from the other decoder as a priori information. The structure of the iterative joint decoder is shown in Fig. <ref>.
There are two types of iterations, called global and local iterations, indicated by superscripts g and l, respectively. The estimate of the correlation is updated during each global iteration. This updated estimate is then passed on to both decoders to be used as side information. Next, each decoder performs the SP algorithm with a specified maximum number of local iterations. Note that the log-likelihood ratios (LLRs) of the side information are only added to the systematic bit nodes of each decoder, and these LLRs are initially set to zero for both decoders.
Consider the g-th global and l-th local iterations of the joint decoder. In each local iteration, the encoded bits coming from the channel are transformed to a posteriori LLRs, denoted by L_ch(r), and fed into the VNs as follows:
L_ch(r_i,v)=log(Pr(x_i,v=1|r_i,v)/Pr(x_i,v=0|r_i,v))=(2/σ^2)r_i,v,
where i ∈{1,2} indicates the i-th channel, and v ∈{1,2,…,n} indexes the VN. For simplicity, we drop the channel index in the sequel unless an ambiguity arises. Let L_v,c^(l) and L_c,v^(l) denote the LLRs of messages passed from a VN v to a CN c and, vice versa, from the CN to the VN at local iteration l, respectively. Since the side information is only added to the information bit nodes, the LLR update equations from these nodes to the CNs are given by:
L_v,c^(l)=L_ch(r_v)+∑_c'≠ cL_c',v^(l-1) + L_s^(g-1)(û),
where v∈𝒱^s, and L_s^(g)(û) denotes the LLR of the side information associated with the estimated correlation at global iteration g. It is worth noting that the summation is applied over all CNs connected to the VN v, except the CN c. The messages forwarded from parity VNs to the CNs are calculated in the same way as in the standard SP decoder <cit.>
L_v,c^(l)=L_ch(r_v)+∑_c'≠ cL_c',v^(l-1),
where v∈𝒱^p. Furthermore, the messages L_c,v^(l) are updated in each local iteration using the same equation as in the standard SP decoder <cit.>.
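In vectorized form, the two VN update rules above differ only in whether the side-information LLR is added. A minimal numpy sketch of one VN-to-CN update (the dense (n, d_v) message layout, zero-padded for absent edges, is our own assumption):

import numpy as np

def vn_update(L_ch, L_cv, L_s, source_mask):
    # total a posteriori LLR: channel + all incoming CN messages
    # (+ side information only at source VNs)
    total = L_ch + L_cv.sum(axis=1) + np.where(source_mask, L_s, 0.0)
    # extrinsic rule: exclude the message received on the outgoing edge
    return total[:, None] - L_cv

# toy call: n = 4 VNs of degree 3, the first two being source nodes
rng = np.random.default_rng(0)
L_vc = vn_update(rng.normal(size=4), rng.normal(size=(4, 3)),
                 rng.normal(size=4), np.array([True, True, False, False]))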
In the g-th global iteration, the correlation vector ẑ is estimated by ẑ^(g)=û_1^(g)⊕û_2^(g), where û_1^(g) and û_2^(g) are, respectively, the hard estimates of the source bits u_1 and u_2 at the terminating local iteration l_t of each associated decoder. The LLR L^(g)(ẑ) at each global iteration is obtained using the technique proposed in <cit.>, as follows:
L^(g)(ẑ_v)=(1-2ẑ_v^(g))log_2 ((K-W_H)/W_H),
where K and W_H are, respectively, the source block size and the Hamming weight of the correlation vector ẑ^(g)=(ẑ_1^(g),…,ẑ_K^(g)).
Finally, the LLRs of the side information of the source bits that are input to the first and second decoders at the next global iteration are calculated by <cit.>:
L_s^(g)(û_1,v)=sign(L^(g)(ẑ_v))·sign(L^(g)(û_2,v))
· 2 tanh^-1(tanh(|L^(g)(ẑ_v)|/2)tanh(|L^(g)(û_2,v)|/2)),
where v∈𝒱^s, and similarly
L_s^(g)(û_2,v)=sign(L^(g)(ẑ_v))·sign(L^(g)(û_1,v))
· 2 tanh^-1(tanh(|L^(g)(ẑ_v)|/2)tanh(|L^(g)(û_1,v)|/2)),
where
L^(g)(û_1,v)=L_ch(r_1,v)+∑_c'L_c',v^(l_t-1),
L^(g)(û_2,v)=L_ch(r_2,v)+∑_c'L_c',v^(l_t-1),
and v ∈𝒱^s. According to the Turbo principle, the side information, also called extrinsic information, added to each SP decoder should not include that decoder's own information. Therefore, L(û_1) and L(û_2) do not appear in the equations for L_s^(g)(û_1,v) and L_s^(g)(û_2,v), respectively, but they have to be added back when the information source bits are estimated at the end of the local iterations to calculate the a posteriori information.
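The two equations above amount to a box-plus combination of the correlation LLR with the other decoder's extrinsic estimate. The sketch below illustrates this (the numerical clipping and the guard against W_H ∈ {0, K} are our own additions to keep the logarithms finite):

import numpy as np

def correlation_llr(z_hat):
    # L(z_hat) from the Hamming weight of the estimated correlation vector
    K = z_hat.size
    W = int(np.clip(z_hat.sum(), 1, K - 1))  # avoid log(0) / log(inf)
    return (1.0 - 2.0 * z_hat) * np.log2((K - W) / W)

def box_plus(La, Lb):
    # sign(La) * sign(Lb) * 2 * atanh(tanh(|La|/2) * tanh(|Lb|/2))
    t = np.tanh(np.abs(La) / 2.0) * np.tanh(np.abs(Lb) / 2.0)
    return np.sign(La) * np.sign(Lb) * 2.0 * np.arctanh(np.minimum(t, 1.0 - 1e-12))

# side information for decoder 1 combines L(z_hat) with decoder 2's estimate:
# L_s1 = box_plus(correlation_llr(z_hat), L_u2)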
§ EXIT CHART ANALYSIS
The EXIT chart analysis tool, first proposed in <cit.> to design ensembles of conventional LDPC codes, traces the evolution of the mutual information (MI) between a given transmitted bit at the source and several LLRs within the decoder. In a point-to-point (P2P) LDPC decoder, the variable node EXIT curve displays the MI between the transmitted bit and the LLR at the output of the VN (I_EV) versus the MI between the transmitted bit and the LLR at the input of the VN (I_AV). Similarly, the check node EXIT curve displays the MI between the transmitted bit and the LLR at the output of the CN (I_EC) versus the MI between the transmitted bit and the LLR at the input of the CN (I_AC).
To obtain these curves, it is usually assumed that the densities at the input of each decoding unit (VN and CN in this case) are consistent Gaussian random variables. To track the iterative exchange of messages between VNs and CNs, we set I_AV at iteration l equal to I_EC of iteration l-1. Similarly, we set I_AC at iteration l equal to I_EV of iteration l. Alternatively, one can plot the VN curve and the inverse of the CN curve and follow the trajectories between them. This is referred to as an EXIT chart. As far as the EXIT chart analysis of the proposed system model is concerned, we have to deal with a modified chart that can incorporate both inner and outer iterations, as well as the fact that the considered graph is two edge-type, in contrast to the conventional case.
§.§ Mutual Information between the Transmitted Bit and the LLR of the Received Data
Let X_1 be a BPSK modulated transmitted bit of the first source and X_2 be the BPSK modulated transmitted bit of the second source at the same time instant. Since the sources are correlated, we have P(X_1=X_2)=p. Now let A_1 and A_2 be the LLRs corresponding to the received information at the input of the first and second decoders, respectively. It has already been established that the A_i are consistent Gaussian random variables with variance σ^2_A=4/σ^2. It has been shown <cit.> that in this case we have I(X_c;A_d)=J(σ_A) for c=d, c=1, 2, where
J(σ_A)=
1-(1/(√(2π)σ_A))∫_-∞^∞log_2(1+e^-l) exp(-(l-σ_A^2/2)^2/(2σ_A^2))dl.
Moreover, for c ≠ d, we have I(A_c;X_d) =J̃(σ_A,p) <cit.> where
J̃(σ_A,p)=1-
(1/(√(2π)σ_A))∫_-∞^∞log_2((1+e^-l)/(p+p̅e^-l))
(pe^-(l-σ_A^2/2)^2/(2σ_A^2) + p̅e^-(l+σ_A^2/2)^2/(2σ_A^2)) dl,
and p̅=1-p.
If the correlation parameter is assumed to be 100%, i.e., p = 1, Eq. (<ref>) reduces to the well-known Eq. (<ref>).
Likewise, if the correlation parameter is set to 50%, i.e., p=0.5, then the MI of the extrinsic messages passed to the other decoder is equal to zero, which is intuitively reasonable.
Eq. (<ref>) will be used in the next subsection to incorporate the correlation between X_1 and X_2 in the corresponding EXIT chart.
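Both J(·) and J̃(·,p) are one-dimensional integrals and can be evaluated numerically. A sketch assuming SciPy is available (the integration limits are our own truncation choice):

import numpy as np
from scipy.integrate import quad

def J(sigma):
    # MI of a consistent Gaussian LLR with standard deviation sigma
    if sigma < 1e-9:
        return 0.0
    f = lambda l: np.log2(1.0 + np.exp(-l)) * \
        np.exp(-(l - sigma**2 / 2.0)**2 / (2.0 * sigma**2))
    val, _ = quad(f, -30.0 * sigma, 30.0 * sigma)
    return 1.0 - val / (np.sqrt(2.0 * np.pi) * sigma)

def J_tilde(sigma, p):
    # MI between a bit and the LLR of its correlated partner
    if sigma < 1e-9:
        return 0.0
    q = 1.0 - p
    def f(l):
        g1 = np.exp(-(l - sigma**2 / 2.0)**2 / (2.0 * sigma**2))
        g2 = np.exp(-(l + sigma**2 / 2.0)**2 / (2.0 * sigma**2))
        return np.log2((1.0 + np.exp(-l)) / (p + q * np.exp(-l))) * (p * g1 + q * g2)
    val, _ = quad(f, -30.0 * sigma, 30.0 * sigma)
    return 1.0 - val / (np.sqrt(2.0 * np.pi) * sigma)

print(J(2.0), J_tilde(2.0, 1.0))  # the two coincide at p = 1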
§.§ Modified EXIT Chart
Consider one of the decoders at the receiver. At outer iteration g, the variable node EXIT curve of degree i belonging to the source (systematic) bits at inner iteration l, I_EV^s(l)(i), can be obtained based on the data from the channel, the extrinsic information from the check nodes at iteration l-1, and the extrinsic information coming from the other decoder at outer iteration g-1. The latter term is referred to as the helping information and is denoted by I_h^(g-1). I_h^(g-1) is in fact a function of I_EV^s(l_t) of the other decoder at outer iteration g-1, where l_t is the last inner iteration. At the first outer iteration, it is set to zero.
Moreover, at outer iteration g, the variable node EXIT curve belonging to the parity bits at inner iteration l, I_EV^p(l)(i), can be obtained based on the data from the channel and the extrinsic information from the check nodes at iteration l-1. Similar statements can be made for the check node EXIT curves.
Given the above explanations and using (<ref>), we obtain I_EV^s(l)(i) as
I _EV^s(l)(i) =
J ( √(σ_ch^2 + (i-1)[J^-1(I_EC^s(l-1))]^2 + [J^-1(I_h^(g-1))]^2 )),
where I_EC^s(l-1) denotes the mutual information from the CNs to the source nodes at g-th global iteration.
So for an irregular variable node the EXIT curve is obtained as:
I_EV^s(l) = ∑_i=2^D_vλ_i^s I_EV^s(l)(i) ,
To obtain I_h^(g-1), we proceed as follows.
We first obtain I_h^(g-1)(i), the helping information corresponding to degree-i VNs. Note that it is in fact equal to I(X_c,A_d) for c≠ d. Therefore we have
I_h^(g)(i) = J̃(σ_h,p), where σ_h = √(σ_ch^2+i[J^-1(I_EC^s(l_t))]^2),
where I_EC^s(l_t) is the extrinsic information at the output of the check nodes of the other decoder at its last inner iteration.
We finally obtain:
I_h^(g)=∑_i=2^D_vλ_i^s I_h^g(i) ,
To obtain I_EV^p(l)(i), using (<ref>) we write
I_EV^p(l) (i) = J ( √(σ_ch^2 + (i-1)[J^-1(I_EC^p(l-1))]^2 )),
where I_EC^p(l-1) denotes the mutual information from the CNs to the parity nodes at the g-th global iteration.
So for an irregular variable node the EXIT curve is obtained as:
I_EV^p(l) = ∑_i=2^D_vλ_i^p I_EV^p(l)(i) ,
where I_EC^s(l) and I_EC^p(l) denote the MI from the CNs to the source and parity nodes at the g-th global iteration, respectively. Let γ_s and γ_p be defined as the ratios of source edges and parity edges to the total number of edges, respectively, which can be written as
γ_s=E^s/E =∑_i=2^D_vλ_i α_i, γ_p= E^p/E=∑_i=2^D_vλ_i (1-α_i).
Using (<ref>) and (<ref>), the VN EXIT curve at outer iteration g is finally obtained as
I_EV = γ_s I_EV^s + γ_p I_EV^p.
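Putting the VN-side relations together, one point of the combined VN EXIT curve can be computed as follows (this sketch reuses J from the previous listing; the bisection-based inverse Jinv is our own helper):

import numpy as np

def Jinv(I, lo=1e-6, hi=60.0):
    # invert the monotone J(.) by bisection
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if J(mid) < I else (lo, mid)
    return 0.5 * (lo + hi)

def I_EV_point(lam_s, lam_p, g_s, g_p, I_EC_s, I_EC_p, I_h, sigma_ch):
    # mixture over source and parity VN degrees, weighted by gamma_s, gamma_p
    ev_s = sum(c * J(np.sqrt(sigma_ch**2 + (i - 1) * Jinv(I_EC_s)**2
                             + Jinv(I_h)**2))
               for i, c in lam_s.items())
    ev_p = sum(c * J(np.sqrt(sigma_ch**2 + (i - 1) * Jinv(I_EC_p)**2))
               for i, c in lam_p.items())
    return g_s * ev_s + g_p * ev_p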
To obtain the EXIT curve of the CNs,
we first obtain I_EC^s(l) and I_EC^p(l) as follows. Consider degree-j CNs that are connected through k edges to the source nodes and (j-k) edges to the parity nodes. The corresponding MI to the source and parity nodes are, respectively, given by:
I_EC^s(l)(j,k)=1 -
J (√( (k-1) [J^-1(1-I_EV^s(l))]^2 + (j-k) [J^-1(1-I_EV^p(l))]^2 )),
and
I_EC^p(l)(j,k)=1 -
J ( √( k [J^-1(1-I_EV^s(l))]^2 + (j-k-1) [J^-1(1-I_EV^p(l))]^2 )),
Consequently we have:
I_EC^s(l)=∑_j=2^D_c∑_k=1^j-1ρ_j,k^s I_EC^s(l)(j,k) ,
and
I_EC^p(l)=∑_j=2^D_c∑_k=1^j-1ρ_j,k^p I_EC^p(l)(j,k) .
Therefore, the overall EXIT curve of the CNs is obtained as follows:
I_EC=γ_s I_EC^s + γ_p I_EC^p.
From (<ref>) and (<ref>), we observe that, similar to the P2P case, the CN curve depends only on the degrees of the CNs and does not change with the SNR of the channel. Moreover, evaluation of (<ref>) shows that the CN curve is independent of g, which also makes sense intuitively.
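The CN side follows the same pattern. A sketch of the two CN EXIT mixtures, with the two edge-type coefficients stored as dictionaries {(j, k): value} (again reusing J and Jinv from the listings above):

import numpy as np

def I_EC_point(rho_s, rho_p, g_s, g_p, I_EV_s, I_EV_p):
    a = Jinv(1.0 - I_EV_s) ** 2   # source-edge term [J^-1(1 - I_EV^s)]^2
    b = Jinv(1.0 - I_EV_p) ** 2   # parity-edge term [J^-1(1 - I_EV^p)]^2
    ec_s = sum(c * (1.0 - J(np.sqrt((k - 1) * a + (j - k) * b)))
               for (j, k), c in rho_s.items())
    ec_p = sum(c * (1.0 - J(np.sqrt(k * a + (j - k - 1) * b)))
               for (j, k), c in rho_p.items())
    return ec_s, ec_p, g_s * ec_s + g_p * ec_p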
§.§ Modified EXIT chart example
To gain better insight into the above equations, the modified EXIT chart corresponding to a joint decoder with a (3,6)-regular LDPC code is depicted in Fig. <ref> for correlated sources with correlation parameter p set to 0.95, where the maximum number of inner iterations is set to 50. Inner and outer iterations are also plotted. As can be seen, in the absence of helping information at the first outer iteration, each decoder performs inner iterations but gets stuck at the intersection of the check node curve and the blue variable node curve. Then, at the second outer iteration, with the help of the extrinsic information of the other decoder, the inner iterations continue and the MI grows until it gets stuck again at the intersection of the check node curve and the red variable node curve. In the third outer iteration, the MI jumps again and continues its path toward 1 on the yellow variable node curve.
§ CODE DESIGN FOR THE JOINT DECODER
In this section, our aim is to design the VN degree distribution λ(x) for an ensemble of LDPC codes with a given CN degree distribution ρ(x), so as to minimize the gap between the code rate R and the Shannon-SW limit corresponding to the channel parameter σ^2. In general, the design of irregular LDPC codes using EXIT chart analysis is based on a curve-fitting method involving the EXIT curves of the VNs and the CNs, see, e.g., <cit.>, <cit.>. To make the design procedure simpler, it is often assumed that the ensemble codes have a regular CN degree distribution <cit.>, <cit.>.
The EXIT function of a CN output (or a VN input in the previous local iteration) and the EXIT function of the VN output (or the CN input in the same local iteration), denoted by I_EC^(l)(I_EV^(l)) and I_EV^(l)(I_EC^(l-1),σ_ch,p), can be found according to the discussion in Section <ref>.
In order to utilize EXIT charts in the design of LDPC codes for correlated sources, the two edge-type coefficients α and β must be found in addition to λ(x).
By applying a linear programming method, we first design a degree distribution pair (λ, ρ) and its corresponding coefficient pair (α, β). Thus, an initial value of (α, β) is obtained, and then {λ_d_v, d_v∈ (2,…,D_v)} is designed again for the check-regular ensemble, where the maximum VN degree D_v is predefined. For an ensemble with given {λ_d_v}, the EXIT function of the VN output can be represented by
I_EV=∑_d_v=2^D_vλ_d_v I_EV(d_v),
where I_EV(d_v) is the EXIT curve of a degree-d_v VN, which is obtained as described in Section <ref>.
As mentioned earlier, the EXIT curve of a CN is the same as that in the P2P case, namely
I_E,C(I_A,C,d_c)=1 - J( √((d_c-1)[J^-1(1-I_A,C)]^2)),
where I_A,C is easily obtained from I_E,C using the inverse of the J(·) function.
The code rate optimization problem is formulated as follows:
maximize: R=1-∑_d_c=2^D_cρ_d_c/d_c/∑_d_v=2^D_vλ_d_v/d_v ,
subject to: 1. ∑_d_v=2^D_vλ_d_v = 1 ,
2. λ_d_v≥ 0 ,
3. I_EV > I_A,C(I_E,C) ,
for I_E,C∈ [0, 1].
The third constraint is the zero-error constraint; it is equivalent to requiring that the output MI of the VNs in each local iteration be greater than in the previous local iteration (i.e., I_EV^(l)>I_EV^(l-1)). Also, maximizing R is equivalent to maximizing ∑_d_v=2^D_vλ_d_v/d_v. Since I_EV(d_v) is given for each degree, I_EV is linear in {λ_d_v, d_v∈ (2,…,D_v)}.
Therefore, after finding λ(x) for the given CN degree distribution ρ(x), we can obtain the optimum (α,β) by using conditions (<ref>) and (<ref>) together with a stability condition, which is given under the Gaussian assumption and using MI evolution as <cit.>
α_2 λ_2e^-M^2/8 + (1-α_2) λ_2 < e^1/(2σ_n^2)/∑_j=2^D_cρ_j (j-1) ,
where M = J^-1( J̃(σ_max,p) ) and σ_max=J^-1(1). Note that the stability condition depends on the channel parameters and the J̃(·) function.
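Once I_EV(d_v) has been tabulated on a grid of I_EC values, the rate maximization above is an ordinary linear program. A sketch using scipy.optimize.linprog (the grid construction is left to the caller, and the small margin eps enforcing the strict inequality is our own regularization):

import numpy as np
from scipy.optimize import linprog

def design_lambda(EV, targets, degrees, eps=1e-5):
    # maximize sum(lambda_d / d) s.t. sum(lambda_d) = 1, lambda_d >= 0,
    # and EV @ lambda >= targets + eps (zero-error constraint) on the grid;
    # EV[g, d] = I_EV(d) at grid point g, targets[g] = I_AC(I_EC) there
    degrees = np.asarray(degrees, dtype=float)
    res = linprog(c=-1.0 / degrees,
                  A_ub=-EV, b_ub=-(np.asarray(targets) + eps),
                  A_eq=np.ones((1, degrees.size)), b_eq=[1.0],
                  bounds=[(0.0, 1.0)] * degrees.size)
    return res.x if res.success else None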
The results of our search for the BIAWGN channel are summarized in Table <ref> and Table <ref>. These tables contain degree distribution pairs of rate one-half, together with their coefficient pairs (α,β), for various correlation parameters. We also consider an upper bound from (<ref>), which is useful for measuring the performance of our designed ensembles.
§ FINITE-LENGTH RESULTS
In this section, we illustrate the performance of finite-length LDPC codes constructed from the optimized irregular degree distributions for the iterative joint channel decoder at different rates. The finite-length construction is performed with a modified progressive edge growth (PEG) method, which yields very low error floors <cit.>.
We simulate the proposed scheme for different values of the correlation parameter p and assume that both transmit nodes use the same LDPC degree distribution and that the channel parameters are identical. For given p and R, the theoretical bound on E_so/N_0 for error-free recovery can be calculated from (<ref>).
In the following, we present sample simulation results for several designed LDPC codes with different values of the correlation parameter and code rate from Section <ref>. Using finite-length simulation results, we show that the designed codes of Section <ref> perform better than the results obtained in <cit.>. We consider ensemble codes with rate R=0.5 for several correlation parameters p. The block length is selected to be 20000, and the maximum numbers of local and global iterations are set to 100 and 10, respectively. We provide simulation results using the irregular LDPC codes of Table <ref> and Table <ref>.
Example 1: From Table <ref>, we consider the designed code for R=0.5 and p=0.9 with the following VN and CN degree distributions and corresponding pair (α,β)
λ(x)= 0.23559 x + 0.39783 x^3 + 0.14198 x^6 +
0.00148 x^13 + 0.22312 x^14,
ρ(x)= x^6.
Example 2: From Table <ref>, we consider the designed code for R=0.5 and p=0.95 with the following VN and CN degree distributions and corresponding pair (α,β)
λ(x)= 0.2518 x + 0.38081 x^3 + 0.10944 x^6 +
0.00121 x^13 + 0.25674 x^14,
ρ(x)= x^6.
For the correlated sources, the BER values of the three codes as functions of E_so/N_0 (SNR) and p are reported in Fig. <ref>. Note that the curves labeled “glob.it.0” show the performance of the LDPC decoder without using the correlation information, which equals the point-to-point performance.
Table <ref> shows, for various values of the correlation parameter p at the fixed rate R=0.5, the corresponding joint entropy H(u_1,u_2) of the correlated sources, the theoretical limit for E_so/N_0 in (<ref>), [E_so/N_0]_lim, the threshold obtained from the EXIT chart, [E_so/N_0]_th, and the value of E_so/N_0 at which the proposed iterative joint decoder reaches the target
BER=10^-5.
§ CONCLUSIONS
|
http://arxiv.org/abs/1701.07831v3 | 20170126190003 | Mapping the Milky Way with LAMOST I: Method and overview | [
"Chao Liu",
"Yan Xu",
"Jun-Chen Wan",
"Hai-Feng Wang",
"Jeffrey L. Carlin",
"Li-Cai Deng",
"Heidi Jo Newberg",
"Zihuang Cao",
"Yonghui Hou",
"Yuefei Wang",
"Yong Zhang"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Key Lab of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences,
Beijing 100012, China; liuchao@nao.cas.cn
University of the Chinese Academy of Sciences, Beijing, 100049, China
LSST, 933 North Cherry Avenue, Tucson, AZ 85721, USA
Department of Physics, Applied Physics and Astronomy, Rensselaer Polytechnic Institute, Troy, NY 12180, USA
Nanjing Institute of Astronomical Optics & Technology, National Astronomical Observatories, Chinese Academy of Sciences, Nanjing 210042, China
We present a statistical method to derive the stellar density profiles of the Milky Way from spectroscopic survey data, taking into account selection effects. We assume that the selection function of the spectroscopic survey is based on photometric colors and magnitudes and possibly altered during observations and data reductions. Then the underlying selection function for a line-of-sight can be well recovered by comparing the distribution of the spectroscopic stars in a color-magnitude plane with that of the photometric dataset. Subsequently, the stellar density profile along a line-of-sight can be derived from the spectroscopically measured stellar density profile multiplied by the selection function. The method is validated using Galaxia mock data with two different selection functions. We demonstrate that the derived stellar density profiles well reconstruct the true ones not only for the full targets, but also for the sub-populations selected from the full dataset. Finally, the method is applied to map the density profiles for the Galactic disk and halo, respectively, using the LAMOST RGB stars. The Galactic disk extends to about R=19 kpc, where the disk still contributes about 10% to the total stellar surface density. Beyond this radius, the disk smoothly transitions to the halo without any truncation, bending, or break. Moreover, no over-density corresponding to the Monoceros ring is found in the Galactic anti-center direction. The disk shows moderate north–south asymmetry at radii larger than 12 kpc. On the other hand, the R–Z tomographic map directly shows that the stellar halo is substantially oblate within a Galactocentric radius of 20 kpc and gradually becomes nearly spherical beyond 30 kpc.
Liu et al.
Mapping the Milky Way I
Mapping the Milky Way with LAMOST I: Method and overview
Chao Liu
1
Yan Xu
1
Jun-Chen Wan
1,2
Hai-Feng Wang
1,2
Jeffrey L. Carlin
3
Li-Cai Deng
1
Heidi Jo Newberg
4
Zihuang Cao
1
Yonghui Hou
5
Yuefei Wang
5
Yong Zhang
5
Received 2017 month day; accepted 2017 month day
=============================================================================================================================================================================================================================================================================
§ INTRODUCTION
The stellar density profile is of importance in unveiling the nature of the Galaxy. Either the photometric survey data, which are supposed to be complete to some extent, or the spectroscopic survey data, which can provide more accurate stellar parameters but obviously do not completely cover the sky due to the lower efficiency of the sampling, can be used to count the stars in three-dimensional space.
Although star count modeling has been studied for a long time, only recently has it given reliable measurements of the Galactic disks and halo based on modern surveys (e.g., Chen et al. <cit.>, Jurić et al. <cit.>, Watkins et al. <cit.>, Bovy et al. <cit.>, Xue et al. <cit.>; also see the review by Bland-Hawthorn & Gerhard <cit.>). Jurić et al. (<cit.>) fitted the star counts of various spectral types of stars from the SDSS photometric survey data with two exponential disks and a power-law stellar halo. They found that the scale length of the thin disk is 2.6 kpc, substantially smaller than that of the thick disk, which is 3.6 kpc. However, Bovy et al. (<cit.>) showed that the scale lengths change from ∼4.5 kpc for the mono-abundance populations with small scale heights to ∼2 kpc for those with large scale heights. Comparison between the two results is non-trivial, since the former work identified the thick disk by geometry, while the latter one defined the thick disk based on the chemical abundances.
It is worthwhile to point out that although the disk profile is often empirically simplified as an exponential or hyperbolic secant form, it is composed of substantially asymmetric structures such as the bar and spiral arms. Furthermore, López-Corredoira et al. (<cit.>) unveiled that the disk is significantly flared and warped in the outskirts using red clump giant stars selected from the 2MASS catalog (Skrutskie et al. <cit.>). Moreover, recent studies have also demonstrated vertical asymmetric structures in the outer disk (Xu et al. <cit.>) as well as in the solar neighborhood (Widrow et al. <cit.>).
Regarding the stellar halo, Watkins et al. (<cit.>) and Deason et al. (<cit.>) used RR Lyrae and Blue Horizontal Branch stars, respectively, both of which are accurate standard candles, to measure the shape of the stellar halo. They both found that the halo density profile follows a broken power-law with constant axis ratios. Later on, Xue et al. (<cit.>) suggested that, instead of a broken power-law with constant axis ratio, a single power-law with a radially variable axis ratio can also fit the density profile of the halo well. It is also noted that Xu, Deng & Hu (<cit.>, <cit.>) found that the stellar halo is tri-axial from star counts using photometric data.
The advantages of using multi-band photometric data in studies of the star counts are that 1) the sampling for the photometric stars is approximately complete within a limiting magnitude; 2) multiple colors can be used to select specific stellar populations and also determine the photometric parallax for the stars; and 3) the large number of the stellar samples can improve the statistical performance. However, there are a few disadvantages of photometric data, including: 1) it is hard to accurately select stellar populations based on chemical abundances or ages; 2) the dwarf/giant separation is difficult in most broad-band photometric data; 3) the distance estimation based on photometric data in principle suffers from larger uncertainty and potential systematic bias; and 4) binarity may affect the results of the star counts since unresolved binaries usually show slightly different colors than single stars.
In general, spectroscopic data can provide relatively precise stellar parameter estimates, which lead to better distance determinations. Moreover, the stellar parameters derived from spectroscopic data are helpful in better identifying a stellar population. Therefore, star counts based on spectroscopic data should provide a valuable complement to photometric studies. It can play even more important role in understanding the evolution of the Galaxy via the structures of different stellar populations. However, because it is difficult to sample large regions of sky spectroscopically to a completeness that can easily be achieved by a photometric survey, the stellar density measurements from a spectroscopic survey highly depend on correction of the selection function explicitly or implicitly induced in the survey.
Techniques to correct for the selection function have been discussed in many works. Liu & van de Ven (<cit.>) corrected the selection effects for SEGUE G-dwarf stars by comparing the stellar density in color-magnitude-distance grids between the spectroscopic and photometric survey data. A similar method has been used in Zhang et al. (<cit.>) for the measurement of the local dark matter density. Bovy et al. (<cit.>) proposed a Bayesian technique to model the stellar density profiles with analytic forms accounting for the selection effects. This technique was later applied to the APOGEE survey data (Bovy et al. <cit.>) and SEGUE K-giant stars (Xue et al. <cit.>). It is noted that Xia et al. (<cit.>) also used a similar technique with the LAMOST survey data.
Recently, the LAMOST survey (Cui et al. <cit.>, Zhao et al. <cit.>, Deng et al. <cit.>) has collected more than 5 million low resolution stellar spectra in the DR3 catalog. These stars cover much of the stellar halo to a Galactocentric radius of 80–100 kpc (Liu et al. <cit.>) and cover the Galactic anti-center region to as far as 50 kpc. This provides a remarkably large number of distant stars with which to measure the shapes of the outer disk and halo of the Milky Way. In this work, we revise the approach of stellar density measurement from spectroscopic survey data originally proposed by Liu & van de Ven (<cit.>) and re-organize it in terms of Bayesian statistics. The post-observation selection function is taken into account during the determination of the stellar density. Meanwhile, we do not presume any analytic form for the density profiles of either the disks or the halo. This allows us to better detect the potentially more complicated shapes of the structures, e.g., asymmetric features in the disk or a possible radial variation of the axis ratio of the halo, in a non-parametric manner.
The paper is organized as follows. In Section <ref>, we propose the method to derive the stellar density profile along a given line-of-sight with consideration of the selection function. In Section <ref>, we validate the method using mock data with various selection functions. In Section <ref>, we apply it to the LAMOST red giant branch stars and demonstrate the derived stellar density maps for the disk and halo components. In Section <ref>, we discuss the smoothing effect of the LAMOST plate due to its large field-of-view and how to deal with a selection function slightly tuned by the observations. Finally, we draw brief conclusions in the last section.
§ STELLAR DENSITY PROFILE ALONG A LINE-OF-SIGHT
Given a line-of-sight with Galactic coordinates (l, b), we assume that the photometric data is always complete within the limiting magnitude[The limiting magnitude of a photometric survey is usually defined by the magnitude at signal-to-noise of 5 or 10 σ.]. We also assume that the selection of the spectroscopic targets is only associated with the color-magnitude diagram. The selection function as a function of the color(s) and magnitude(s) could be complicated. In principle, observations and data reduction may lose some data and thus the final selection function is slightly tuned. However, such a change would not induce substantial selection bias in stellar metallicity, age or kinematics, which are critical for distinguishing various stellar populations. Therefore the color-magnitude based selection function slightly tuned by the observations and data processing would not induce systematics in stellar populations. It is convenient to assume that it is a continuous function so that it is approximately constant in a sufficiently small region around color index c and magnitude m.
§.§ Generality
The probability density function (PDF) p_ph(D|c,m,l,b) represents the probability of finding a star at distance D given c, m, l, and b. The probability of finding a star in a small volume (D, D+Δ D) is then written as
Pr([D,D+Δ D]|c,m,l,b)= ∫_D^D+Δ Dp_ph(D|c,m,l,b)dD
≐ p_ph(D|c,m,l,b)Δ D.
On the other hand, Pr([D, D+Δ D]|c,m,l,b) is also equivalent to the fraction of the stars, which can be calculated from the underlying stellar density profile, located within the small volume. Thus, it can also be written as
Pr([D,D+Δ D]|c,m,l,b)=ν_ph(D|c,m,l,b)Ω D^2Δ D/∫_0^∞ν_ph(D|c,m,l,b)Ω D^2dD.
where ν_ph is the volume stellar density measured from the photometric stars (i.e., the complete dataset), Ω D^2Δ D is the volume element between D and D+Δ D, and Ω is the solid angle of the line-of-sight. Combining Eqs. (<ref>) with (<ref>), we have
p_ph(D|c,m,l,b)Δ D=ν_ph(D|c,m,l,b)Ω D^2Δ D/∫_0^∞ν_ph(D|c,m,l,b)Ω D^2dD.
Similarly, for the spectroscopic data, Eq. (<ref>) becomes
p_sp(D|c,m,l,b)Δ D=ν_sp(D|c,m,l,b)Ω D^2Δ D/∫_0^∞ν_sp(D|c,m,l,b)Ω D^2dD,
where ν_sp represents the stellar density measured by counting only the spectroscopic survey stars. Within the small region around c and m, the selection function is assumed to be flat in both the color index and the magnitude. Thus, it does not change the probability of finding a star in either the photometric or the spectroscopic sample, i.e.,
Then, Eqs. (<ref>) and (<ref>) can be combined together via Eq. (<ref>):
ν_ph(D|c,m,l,b)=ν_sp(D|c,m,l,b)S^-1(c,m,l,b),
where
S(c,m,l,b)=∫_0^∞ν_sp(D|c,m,l,b)Ω D^2dD/∫_0^∞ν_ph(D|c,m,l,b)Ω D^2dD
is the selection function at c and m along (l, b).
Then the stellar density profile for all photometric stars along (l, b) can be derived by integrating over c and m:
ν_ph(D|l,b)=∬ν_sp(D|c,m,l,b)S^-1(c,m,l,b)dcdm.
§.§ Consideration of a stellar population
Now consider a stellar sub-population C selected from spectroscopic data based on, for example, the metallicity, luminosity, or age. Following Bayes' theorem, the probability to find a star at D given C, c, m, l, and b is written as
p_ph(D|C,c,m,l,b) = p_ph(C|D,c,m,l,b) p_ph(D|c,m,l,b) / p_ph(C|c,m,l,b).
If there is no special selection function for C members at given c and m along (l, b), then p_ph(C|D,c,m,l,b), i.e., the probability of membership in C at given D, c, m, l, and b, should be the same as that for the spectroscopic data, p_sp(C|D,c,m,l,b). Integrating over D, we obtain p_ph(C|c,m,l,b)=p_sp(C|c,m,l,b). Finally, applying Eq. (<ref>) to this equation, all the p_ph terms on the right-hand side of Eq. (<ref>) can be replaced with p_sp. Applying Bayes' theorem again to the right-hand side, we obtain
p_ph(D|C,c,m,l,b)=p_sp(D|C,c,m,l,b).
Therefore, similar to Eq. (<ref>), we infer that
ν_ph(D|C,c,m,l,b)=ν_sp(D|C,c,m,l,b)S^-1(C,c,m,l,b),
where
S(C,c,m,l,b) = ∫_0^∞ ν_sp(D|C,c,m,l,b) Ω D^2 dD / ∫_0^∞ ν_ph(D|C,c,m,l,b) Ω D^2 dD.
And consequently,
ν_ph(D|C,l,b)=∬ν_sp(D|C,c,m,l,b)S^-1(C,c,m,l,b)dcdm.
§.§ Estimation of ν_sp
In Eqs. (<ref>) and (<ref>), ν_ph, the density profile of the complete dataset, is the quantity to be determined. We estimate it by measuring ν_sp from the spectroscopic data. Usually, along a given line-of-sight, spectroscopic observations target only a few hundred to a few thousand stars. This implies that ν_sp has to be established from a relatively small dataset. To deal with this situation, we derive ν_sp using kernel density estimation (KDE). The KDE method accounts for both the uncertainties in the distances and the relatively small number of stars in a given small region around (c, m). At (l, b), the PDF of D is contributed by all the stars around c and m in terms of
p_sp(D|c,m,l,b) = [1/n_sp(c,m,l,b)] ∑_i^n_sp(c,m,l,b) p_i(D),
where p_i(D) is the PDF of the distance estimate for the ith star and n_sp(c,m,l,b) is the number of the spectroscopic stars at a given c, m, l, and b.
Bringing Eq. (<ref>) into Eq. (<ref>), we obtain
ν_sp(D|c,m,l,b) = [1/(Ω D^2)] ∑_i^n_sp(c,m,l,b) p_i(D).
Note that ∫ν_sp(D|c,m,l,b) Ω D^2 dD = n_sp(c,m,l,b); these factors therefore cancel in Eq. (<ref>).
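For concreteness, a minimal sketch of this KDE step is given below, assuming Gaussian distance PDFs p_i(D) with per-star widths; the array names are illustrative.

```python
import numpy as np

def nu_sp_kde(D_grid, D_stars, sigma_D, Omega):
    """KDE estimate of nu_sp(D | c, m, l, b) from stars in one (c, m) bin.

    Each star contributes a Gaussian distance PDF p_i(D) of width sigma_D[i];
    the density follows the equation above as sum_i p_i(D) / (Omega * D^2).
    """
    D = D_grid[:, None]                        # shape (n_D, 1)
    p_i = np.exp(-0.5 * ((D - D_stars) / sigma_D) ** 2) \
          / (np.sqrt(2.0 * np.pi) * sigma_D)   # shape (n_D, n_star)
    return p_i.sum(axis=1) / (Omega * D_grid ** 2)
```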
In principle, the KDE-determined ν is a continuous function and can therefore be evaluated at any D. However, if there are very few stars near a given D, or if D is beyond the farthest star, the derived ν is not well constrained by the observations and may no longer be reliable. In practice, the stellar density value is therefore meaningful only at positions with sufficient stars. For convenience, we only evaluate ν at the distances at which the spectroscopic stars are located.
It is worth noting that the stellar density derived for a given (l, b) is averaged over the solid angle of the field-of-view, Ω. Therefore, the derived density at the distance of a star does not equal the stellar density at the exact 3D spatial position of that star, but is only an approximation of the latter.
§.§ Estimation of S
Theoretically, S is defined by Eqs. (<ref>) and (<ref>). In practice, it cannot be calculated directly from ν_ph and ν_sp. Note, however, that the integral of ν is equivalent to the total number of stars at given c, m, l, and b. Therefore, S can be evaluated from
S(c,m,l,b) = n_sp(c,m,l,b) / n_ph(c,m,l,b).
For the case of sub-population C, since p_ph(C|c,m,l,b)=p_sp(C|c,m,l,b), we then have
n_ph(C,c,m,l,b) / n_ph(c,m,l,b) = n_sp(C,c,m,l,b) / n_sp(c,m,l,b).
Hence, we infer that S(C,c,m,l,b)=S(c,m,l,b).
In order to improve the precision of the computation in Eqs. (<ref>) and (<ref>), we calculate S^-1 instead of S in the rest of the paper.
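In practice Eq. (<ref>) amounts to the ratio of two 2D histograms in the color-magnitude plane. A minimal sketch, with hypothetical bin edges and a guard against empty spectroscopic bins, could read:

```python
import numpy as np

def selection_inverse(JK_ph, K_ph, JK_sp, K_sp, jk_edges, k_edges, floor=1):
    """S^-1(J-K, K) = n_ph / n_sp per color-magnitude bin."""
    n_ph, _, _ = np.histogram2d(JK_ph, K_ph, bins=[jk_edges, k_edges])
    n_sp, _, _ = np.histogram2d(JK_sp, K_sp, bins=[jk_edges, k_edges])
    S_inv = np.zeros_like(n_ph)
    ok = n_sp >= floor            # only bins containing spectroscopic stars
    S_inv[ok] = n_ph[ok] / n_sp[ok]
    return S_inv
```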
§ VALIDATIONS WITH GALAXIA MOCK DATA
Before applying it to the real data, we validate the method using a mock star catalog. We select a 20-square-degree area toward the north Galactic pole from the Galaxia simulation (Sharma et al. <cit.>) and obtain 10 987 mock stars with K magnitudes brighter than 15 mag. The Galaxia mock catalog contains the distances, stellar parameters, and ages, which can be used to test whether the derived density profiles are accurate for specially selected stellar populations.
We test the method with two different selection functions, which are discussed separately in the following subsections.
§.§ Tests with the selection function T1
Before the tests, we construct a selection function that mimics a real spectroscopic survey. The selection of the mock spectroscopic stars is based on the infrared bands J and K. First, we impose no selection in the color index J-K. Second, the selection function for K is divided into two parts at K=13 mag: at K<13 mag, we select stars at random, while at K>13 mag, we apply a flat selection function that keeps the mock spectroscopic stars evenly distributed in K. This mimics a targeting strategy that is biased more toward bright than faint sources. We denote this selection function as T1.
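A minimal sketch of such a T1-like draw is shown below; the sample sizes n_bright and n_per_bin, the faint-end bin edges, and the random seed are all illustrative choices rather than the values used in the tests.

```python
import numpy as np

rng = np.random.default_rng(42)  # illustrative seed

def draw_T1(K, n_bright, n_per_bin, k_edges_faint):
    """Draw a mock spectroscopic sample following T1:
    random selection at K < 13, flat-in-K selection at K > 13.
    Assumes each pool contains enough stars for the requested sizes."""
    idx = np.arange(K.size)
    bright = rng.choice(idx[K < 13], size=n_bright, replace=False)
    faint = []
    for lo, hi in zip(k_edges_faint[:-1], k_edges_faint[1:]):
        pool = idx[(K >= lo) & (K < hi)]
        faint.append(rng.choice(pool, size=min(n_per_bin, pool.size),
                                replace=False))
    return np.concatenate([bright] + faint)
```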
We randomly draw 50 groups of mock spectroscopic samples according to T1 and derive the density profile for each group. As an example, the mock spectroscopic stars from one of the 50 groups are highlighted with red circles in Fig. <ref>, in which the black dots indicate the J-K vs. K distribution of the complete dataset. The right panel shows the normalized distributions of K for the mock spectroscopic stars (red line) and the complete dataset (black line), respectively. The red line follows a similar trend to the black one at K<13 and then becomes flat at K>13, reflecting the definition of the selection function T1. The normalized distributions of J-K in the bottom panel show that, even though we apply no selection in color index, the normalized J-K distribution of the mock spectroscopic stars (red line) is not exactly the same as that of the full mock dataset (black line). This is a natural consequence of the selection function in K: there are more red stars at K>13 than at K<13, and the selection strategy at K>13 reduces the sampling rate of the fainter and redder stars in the mock spectroscopic sample, leading to a slightly different J-K distribution from that of the complete dataset.
Fig. <ref> shows how the selection function S is calculated for the same group of mock spectroscopic stars as in Fig. <ref>. In the left panel, the color shows the number density of stars in the complete sample, i.e., n_ph(J-K,K,l,b). The size of each bin is Δ(J-K)=0.1 and Δ K=0.25. The density is high at J-K∼0.8 and K∼14, which is most likely contributed by K/M dwarf stars. Applying the selection function T1, the map of the number density of mock spectroscopic stars, n_sp(J-K,K,l,b), is shown in the middle panel. The right panel shows the map of S^-1(J-K,K,l,b), obtained by dividing the left panel by the middle panel. S^-1(J-K,K,l,b) is roughly separated into two plateaus, one at K<13 with a value of about 5 and the other at K>13 with a value of about 10. This pattern implies that the selection function T1 induces no selection effect in J-K within each K bin, even though the overall J-K distributions (shown in the bottom panel of Fig. <ref>) differ slightly from those of the complete dataset.
To mimic a more realistic situation, i.e., studying the stellar density via a selected stellar population, we select 4 different populations, defined in Table <ref>, for the tests. The 3rd column of Table <ref> also lists the median numbers and scatters over the 50 groups of randomly selected mock spectroscopic samples based on T1. Note that for the sub-population T1S2, which is selected with M_K<-2, only about 40 stars are involved in the determination of the stellar density profile. This is a good sample for testing the performance of the density estimates for a small dataset.
Fig. <ref> shows the distributions of star counts without selection-function correction for the complete set (black lines) and for one of the 50 groups of mock spectroscopic samples (red lines), respectively. The 4 panels show the 4 different sub-populations. The selection function T1 clearly distorts the spatial distributions of the mock spectroscopic stars relative to those of the complete samples.
The panels in Fig. <ref> show the derived median stellar densities and their 1σ dispersions (blue circles with error bars) over the 50 groups of mock spectroscopic stars, at the spatial positions at which the mock spectroscopic stars are located. For comparison, the corresponding stellar densities of the complete dataset, i.e., the “true” profiles, are superposed as black solid lines. The bottom of each panel displays the relative residual Δν/ν. The stellar density profiles derived from the mock spectroscopic data agree extremely well with the “true” profiles. This confirms that, when the selection function is simple, the stellar density can be well reconstructed. In particular, for the case T1S2, the accuracy of the derived stellar density remains very high even though the sample contains only about 40 stars (see the top-right panel of Fig. <ref>).
§.§ Tests with selection function T2
We then turn to a more complicated selection function. Because the LAMOST survey selects its targets using optical-band photometry, the bright/faint limiting-magnitude cuts in an optical band lead to sloped cuts in K magnitude: a rectangular region selected in the optical color-magnitude diagram becomes a wedge-shaped region in the IR bands. Fig. <ref> shows the selection function of a sample plate[A plate in the LAMOST survey refers to an observed pointing along a line-of-sight. LAMOST has a field-of-view of 20 square degrees, so a plate covers the same solid angle. Because LAMOST has 4000 fibers, a plate can observe at most 4000 objects simultaneously. In practice, a typical number of targets in a plate is around 2500, with some fibers dedicated to sky and flux-standard stars, plus a few fibers that are broken and a small fraction that are not used.] of the LAMOST survey. The middle panel shows that the spectroscopic stars are distributed in a wedge region in the J-K vs. K diagram, and the selection function S^-1 in the right panel shows a similar shape.
To mimic such a selection function, we randomly select the stars satisfying the following criterion:
-3(J-K)+12<K<-3(J-K)+15.
We denote this selection function as T2. Fig. <ref> shows a randomly drawn sample based on T2 in the J-K vs. K diagram, in which the selected spectroscopic stars (red circles) are distributed in a wedge-shaped area. The median numbers of stars and their scatters over the 50 groups of randomly drawn mock spectroscopic stars based on T2 are listed in the last column of Table <ref> for the 4 sub-populations. Fig. <ref> shows the density of stars in the color-magnitude diagram for the same sample group of mock spectroscopic data as in Fig. <ref>. The selected stars show a tilted distribution in the J-K vs. K plane, and so does the map of S^-1, in accordance with the definition of T2.
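The wedge cut of Eq. (<ref>) translates directly into a boolean mask; a minimal sketch:

```python
import numpy as np

def in_T2_wedge(J, K):
    """Mask implementing the T2 criterion: -3(J-K)+12 < K < -3(J-K)+15."""
    jk = J - K
    return (K > -3.0 * jk + 12.0) & (K < -3.0 * jk + 15.0)
```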
Fig. <ref> shows the spatial distributions of star counts without selection-function correction for the 4 sub-populations from one of the 50 groups of mock spectroscopic stars (red lines). The black lines indicate the distributions of the corresponding complete datasets. For T2S1, T2S2, and T2S3, distances beyond 30 kpc are significantly under-sampled.
Fig. <ref> shows the derived median stellar density profiles and their 1σ dispersions over the 50 randomly drawn groups (blue circles with error bars). In general, the derived profiles are consistent with their corresponding "true" values within an uncertainty of Δν/ν<1. However, a few systematic biases of about Δν/ν∼0.5–1 appear in the derived density profiles: one at distances of ∼20–30 kpc for T2S1, T2S2, and T2S3, and another at D∼2.5 kpc for T2S4.
Considering that the properties of Galactic spatial structures, such as the scale height/length of the disks and the power-law index of the stellar halo, are mostly measured in logarithmic density, such systematic biases may not have severe effects. For instance, a simple exponential fit within 0.3–2.5 kpc gives a scale height of 219 pc for the true density profile of T2S4 (black line in the bottom-right panel of Fig. <ref>), decreasing to 191 pc for the corresponding derived stellar density profile (blue circles in the same panel). Thus, the density derived under T2 may bias the exponential scale height low by only ∼13%.
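The quoted scale heights follow from a simple linear fit of ln ν against |Z|; a minimal sketch of such a fit, with the 0.3–2.5 kpc window used in the text, is:

```python
import numpy as np

def fit_scale_height(Z, nu, z_min=0.3, z_max=2.5):
    """Least-squares fit of ln nu = ln nu_0 - |Z|/h_z within [z_min, z_max] kpc."""
    m = (np.abs(Z) >= z_min) & (np.abs(Z) <= z_max) & (nu > 0)
    slope, intercept = np.polyfit(np.abs(Z[m]), np.log(nu[m]), 1)
    return -1.0 / slope, np.exp(intercept)   # scale height h_z, and nu_0
```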
For a realistic survey such as LAMOST, many plates overlap with each other, and the selection function varies from plate to plate. The possible systematic bias of the stellar density measurement in one plate, as shown in Fig. <ref>, may then be compensated by other overlapping plates. Consequently, the accumulation of the systematic biases from individual plates may not induce an overall systematic bias in the final density profiles, but may increase the random uncertainty of the resulting profiles.
§ THE STELLAR DENSITY PROFILE OF THE MILKY WAY
§.§ The selection function of the LAMOST survey
In general, the targeting strategy of the LAMOST survey is rather simple. Carlin et al. (<cit.>) designed an elegant targeting algorithm that tries to make the distribution of the target stars flat in color-magnitude planes. In practice, however, this technique has not been fully used, for a few reasons. First, the dynamic range of magnitudes observed in a single LAMOST plate is limited to only about 3 magnitudes, so that bright spectra do not saturate or cross-talk between fibers. Second, the actual limiting magnitude of r∼18 mag is brighter than the design goal by at least 2 mag. Both situations lead to insufficient source stars for targeting. For instance, the bright plates of the LAMOST survey cover 14<r<16.8 mag, which may contain fewer than 200 stars per square degree at high Galactic latitudes. Such sparse stellar sampling is not sufficient for the selection function designed by Carlin et al. (<cit.>), because the targeting algorithm needs far more source stars than available fibers to achieve a flat distribution of targets in the color-magnitude plane.
Finally, the LAMOST survey separates the targets into different plates with different ranges of magnitude along each line-of-sight. The VB plates cover 9<r<14, the B plates cover 14<r<16.8, the M plates cover r<17.8, and the F plates cover r<18.5. For the VB and B plates, no specific selection function is applied, i.e., the stars are selected at random. For the M and F plates, we apply a selection function only in r magnitude and leave the selection in color index arbitrary.
This simplified selection function was applied to all of the sky area observed by LAMOST except for the Galactic anti-center region, which covers 150^∘<l<210^∘ and |b|<30^∘. For the main survey regions, the UCAC4 catalog (Zacharias et al. <cit.>) with r<14 mag and PanSTARRS-1 catalog (Schlafly et al. <cit.>) with r>14 were adopted as the source catalogs for targeting. For the Galactic anti-center region, the selection strategy follows Yuan et al. (<cit.>), and uses 2MASS, UCAC4, and Xuyi survey catalogs (Liu X.-W. et al. <cit.>) as the sources for targeting.
Because the different source catalogs are based on different photometric systems, they cannot be unified unless they are cross-calibrated with each other. To avoid complicated calibrations between at least 4 different systems, we adopt the 2MASS catalog, which covers most of the LAMOST observed stars, as the photometric dataset. Indeed, about 96% of the LAMOST K giant stars are within K<14.3 mag, the limiting magnitude of the 2MASS catalog.
Subsequently, we use the J-K vs. K map to derive the selection function S for each observed plate of the LAMOST survey. The selection function S^-1 for all LAMOST DR3 plates and the Python code used for the measurement of the stellar density profiles can be found at https://github.com/liuchaonaoc/LAMOST_density.
§.§ LAMOST RGB stars
In order to derive the density profiles of the Galactic disk and halo, we select red giant branch (RGB) stars. Although RGB stars are not standard candles like red clump stars, their distance accuracy of around 20–30% (Liu et al. <cit.>, Carlin et al. <cit.>) is sufficient for mapping Galactic structure in the outer regions. Moreover, RGB stars are intrinsically brighter and occur in stellar populations with a broader range of metallicity than red clump stars. They are therefore suitable tracers of the outer disk and the halo, which are dominated by metal-poor stars.
For the LAMOST survey data, we adopt the stellar parameters, i.e., effective temperature, surface gravity, and metallicity, provided by the LAMOST pipeline (Wu et al. <cit.>, <cit.>, Luo et al. <cit.>). The K giant stars, including RGB and red clump stars, are selected using the criteria proposed by Liu et al. (<cit.>). Red clump stars are identified following Wan et al. (<cit.>) and Tian et al. (<cit.>). The RGB stars are then obtained from the K giant stars by excluding the red clump stars.
The absolute magnitudes at K_s band, M_Ks, for the RGB stars are estimated following the method suggested by Carlin et al. (<cit.>). Interstellar extinction for the individual RGB stars is estimated using the Rayleigh-Jeans color excess (RJCE) approach introduced by Majewski et al. (<cit.>) and later revised to incorporate WISE bandpasses by Zasowski et al. (<cit.>). Subsequently, the distances to the RGB stars are estimated by combining the apparent K_s magnitude, the absolute magnitude M_Ks, and the RJCE extinction A_Ks.
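For reference, this distance step reduces to the K_s-band distance modulus; a minimal sketch, assuming the standard modulus relation, is:

```python
import numpy as np

def rgb_distance_kpc(Ks, M_Ks, A_Ks):
    """Distance from the extinction-corrected K_s distance modulus:
    Ks - M_Ks - A_Ks = 5 log10(D / 10 pc)."""
    mu = Ks - M_Ks - A_Ks
    return 10.0 ** (mu / 5.0 + 1.0) / 1.0e3   # convert pc to kpc
```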
The RJCE extinction is compared with the 3-D extinction map provided by Green et al. (<cit.>, <cit.>), who derived the 3-D dust map from the PanSTARRS-1 data. Fig. <ref> shows the RJCE extinction vs. distance for the RGB stars located at (150^∘<l<160^∘, |b|<2^∘), (170^∘<l<190^∘, |b|<2^∘), and (200^∘<l<210^∘, |b|<2^∘) using blue circles, red crosses, and black pluses, respectively. The Green et al. mean extinction-distance relations along similar lines-of-sight are superposed in the same figure. We see that the RJCE extinctions of individual RGB stars are roughly consistent with Green's 3-D extinction map within about 0.2 mag, which leads to only about 10% uncertainty in distance. Therefore, the distances to the RGB stars should not be substantially affected by extinction, even in the Galactic mid-plane.
In the following subsections we present two samples: 1) RGB stars spanning all metallicities as tracers of Milky Way structure, especially the Galactic disk, and 2) metal-poor RGB stars as tracers of the stellar halo.
§.§ The stellar disk
We select 21,954 RGB stars with M_Ks<-3.5 mag and distances larger than 0.5 kpc as probes of the Galaxy. Because this sample is dominated by metal-rich stars, it mostly traces the Galactic disk. The volume completeness of these RGB stars extends to 40 kpc; volume completeness is defined here such that the range of stellar luminosity does not change significantly within the volume. Plate by plate, we apply the method to derive the stellar density and finally obtain density values at the distances at which the sample stars are located. Table <ref> lists the first three rows of the sample; the columns are described in Table <ref>. The full RGB dataset can be found at https://github.com/liuchaonaoc/LAMOST_density.
Note that the sample contains about 20% of stars that have been observed more than once. Because the duplicated stars were observed in different plates and the selection effects of each observation are treated separately, the duplication does not affect the stellar density estimation. In fact, these data can be used to test the internal uncertainty, or precision, of the distance and stellar density estimates. The top panel of Fig. <ref> shows the distribution of the relative scatter of the distance estimates (black line) for the stars observed multiple times. The scatter is defined as the standard deviation of the distance, and the relative scatter as the scatter divided by the mean distance estimate. The distribution of the relative scatter of distance is fitted with a Gaussian (red dashed line) with best-fit parameter σ=0.038, while the median of the distribution is 0.028. This means that the internal uncertainty of the distance estimates for the RGB stars is around 3–4%.
The bottom panel of Fig. <ref> shows the distribution of the relative scatter of the stellar density (black line) for the duplicated stars, with scatter and relative scatter defined as above. The red dashed line is the best-fit Gaussian with σ=0.37; the median of the distribution is 0.25. This is better than the performance in the mock data tests of Sects. <ref> and <ref>, which may reach Δν/ν>0.5 in the worst cases. The differences between the stellar densities derived for multiply-observed stars originate from two possible channels: 1) the different selection corrections of the different plates observing the same star, and 2) the different averaging sky areas, since the plates observing the same star may not completely overlap. The small dispersion of the relative scatter implies that these differences are very small for most stars.
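The internal precision quoted above can be reproduced by grouping repeat observations by star and computing std/mean per group; a minimal sketch with hypothetical inputs:

```python
import numpy as np
from collections import defaultdict

def relative_scatter(star_ids, values):
    """Internal precision from duplicated observations: std/mean of a
    quantity (distance or density) over repeat visits of the same star."""
    groups = defaultdict(list)
    for sid, v in zip(star_ids, values):
        groups[sid].append(v)
    rel = [np.std(v) / np.mean(v) for v in groups.values() if len(v) > 1]
    return np.asarray(rel)
```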
Fig. <ref> shows the averaged stellar density map in the R–Z plane[R and Z are Galactocentric cylindrical coordinates with adopted solar position at R_0=8 kpc and Z_0=0.027 kpc (Chen et al. <cit.>)]. The bin size is 0.5×0.5 kpc. Because this population is dominated by metal-rich stars, it provides a remarkable tomographic map of the Galactic disk from R∼4 to ∼20 kpc. Two prominent features are seen in this figure. First, the Monoceros ring, unveiled by Newberg et al. (<cit.>) and later highlighted in SDSS star counts (Jurić et al. <cit.>) and metallicity distribution maps (Ivezić et al. <cit.>), does not show up in our map of the outer Galactic disk based on LAMOST RGB stars. Second, the iso-density contours (black lines) show moderate north–south asymmetry in the outer disk starting from R∼12 kpc: at given radii, the stellar density above the mid-plane is larger than below it. This is essentially consistent with Xu et al. (<cit.>). The asymmetry about the mid-plane may also be the result of the warp; if so, the stellar disk may bend up to some extent in the Galactic anti-center region between 150^∘<l<210^∘, where the LAMOST disk data are mostly concentrated. Both features are very interesting and will be discussed further in an upcoming paper (Wang et al. in preparation).
In Fig. <ref>, the black circles show the stellar surface density derived by integrating the volume density over Z in each R slice. To avoid issues due to the lack of data in the southern Galactic hemisphere, we use |Z| instead of Z; that is, the mean stellar density at given |Z| is averaged over the values at +Z and -Z. Only |Z|<40 kpc is included in the integration. The width of each R slice is 1 kpc; for bins without stellar density estimates, interpolated values are used. The red and blue dashed lines are the best-fit exponential disk and power-law halo profiles, respectively, and the green dot-dashed line is the sum of the two. We find that the surface density profile is well fitted by an exponential disk with scale length 1.6±0.1 kpc plus a power-law halo with index 2.4±0.3. Note that this power-law index is not equivalent to that derived in spherical coordinates, since it is derived from the surface density as a function of the cylindrical radius R.
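A minimal sketch of such a two-component fit of the surface density is given below; the functional forms (exponential disk plus power-law halo in cylindrical R) follow the text, while the initial guesses and the use of scipy's curve_fit are illustrative choices.

```python
import numpy as np
from scipy.optimize import curve_fit

def disk_plus_halo(R, sigma_d0, h_R, sigma_h0, alpha):
    """Sigma(R) = exponential disk + power-law halo, in cylindrical R."""
    return sigma_d0 * np.exp(-R / h_R) + sigma_h0 * R ** (-alpha)

# Hypothetical usage, with R_bins and Sigma_obs the binned surface densities:
# popt, pcov = curve_fit(disk_plus_halo, R_bins, Sigma_obs,
#                        p0=[1.0, 1.6, 0.1, 2.4])
# halo_fraction = (popt[2] * R_bins**(-popt[3])) / disk_plus_halo(R_bins, *popt)
```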
The fraction of stars contributed by the halo, as derived from our models of the surface density, is indicated by the black line aligned with the right y-axis. At R=14 kpc, about half of the surface density is contributed by the disk; hence, no evidence is found for a truncation at around 14 kpc. Moreover, at R=19 kpc the disk still contributes ∼10% of the total stellar surface density, which is not negligible, because the data points (black circles) are clearly above the halo model (blue line) at that radius. In other words, the disk extends to as far as 19 kpc. Beyond this radius, the observed surface density smoothly transitions to a pure halo profile. The good fit of the exponential disk model implies that the disk shows neither truncation, nor break, nor up-bending feature.
The larger extent of the disk is unlikely to be an artifact of incorrect interstellar extinction, according to the discussion in sub-section <ref>. On the other hand, Carlin et al. (<cit.>) pointed out that distances may be systematically overestimated by about 15% for α-enhanced giant stars. However, Hayden et al. (<cit.>) showed that low-α giant stars dominate the outer disk. Consequently, the systematic distance bias due to α abundance may not significantly affect the distances of the outer-disk RGB stars. We therefore suggest that the largely extended disk without truncation is real.
The Galactic disk measured in this work is significantly larger than in some other works, which claimed that the Milky Way stellar disk truncates at around 14–15 kpc (e.g. Robin et al. <cit.>, Minniti et al. <cit.>). Carraro (<cit.>) pointed out that the lines-of-sight investigated in those works are very close to b∼0 and may not be aligned with the disk plane in the outskirts due to the warp. In that case, at some radius the in-plane lines-of-sight no longer probe the disk, leading to a false truncation at that radius. It is worth noting that Carraro et al. (<cit.>) and Feast et al. (<cit.>) found some young stars at around 20 kpc from the Galactic center, in agreement with our suggestion that the disk extends to such radii. We also note that Xu et al. (<cit.>) suggested that the disk may extend all the way to large radii. A more quantitative analysis will appear in upcoming work by Wang et al. (in preparation).
§.§ The stellar halo
In this section we select 5,171 halo-like RGB stars with [Fe/H]<-1 and M_Ks<-4 from the LAMOST DR3 catalog to map the stellar density profile of the stellar halo. The cut in K_s-band absolute magnitude ensures that the data are roughly complete within 50 kpc of the Galactic center. The first three rows of the sample are listed in Table <ref>; the columns are explained in Table <ref>. The full catalog can be found at https://github.com/liuchaonaoc/LAMOST_density. We then derive the stellar densities at the spatial positions at which the halo-like RGB stars are located.
Fig. <ref> shows the averaged stellar density map in the R–Z plane for the halo population. The metal-poor RGB stars probe the shape of the halo well within a Galactocentric radius of 50 kpc. The contours of lnν=-12.5 and -13.5 show clearly oblate shapes within 20 kpc. The iso-density contour at -14.5 displays a slightly larger axis ratio, although it is still oblate. Beyond 30 kpc, the contours of -15.5 and -16.5 are roughly spherical. Although Xue et al. (<cit.>) claimed that a single power-law halo with variable axis ratio fits their data better than a broken power-law, this is probably the first time, to our knowledge, that the variation of the axis ratio from the inner to the outer halo is directly illustrated in a tomographic map.
We further demonstrate how the broken power-law arises under the assumption of a constant axis ratio. Fig. <ref> shows lnν as a function of r for the points of Fig. <ref> with different fixed values of the axis ratio. The top panel assumes an axis ratio q=1.0, so that r is the Galactocentric radius in spherical coordinates. The lnν profile is fitted with a broken power-law: within ∼40±2.8 kpc, the best-fit power-law index is -2.9±0.1, while beyond this it steepens to -5.0±1.2. In the bottom panel, q is set to 0.75. In this case, the broken power-law of the top panel almost disappears; the power-law indices in the inner and outer halo are similar, i.e., a single power-law may fit the data better. This means that the assumed axis ratio can significantly change the radial profile of lnν. A more quantitative study of the structure of the stellar halo with the LAMOST RGB samples can be found in Xu et al. (in preparation).
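The dependence on the assumed axis ratio can be reproduced by recomputing the flattened radius for each q and refitting; a minimal sketch:

```python
import numpy as np

def halo_index_vs_q(R, Z, ln_nu, q):
    """Fit a single power law ln nu = c - n ln r against the flattened
    radius r = sqrt(R^2 + (Z/q)^2) for an assumed axis ratio q."""
    r = np.hypot(R, Z / q)
    slope, intercept = np.polyfit(np.log(r), ln_nu, 1)
    return r, -slope   # power-law index implied by this choice of q
```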
§ DISCUSSION
§.§ The plate-wide smoothing density profile
In this work, we derive the density profile along the line-of-sight of each LAMOST plate. This implies that the stellar density estimate is smoothed over a solid angle of 20 square degrees, since the field-of-view of the LAMOST telescope is 5 degrees in diameter. Note that the stellar density may change dramatically from one end of such a large sky area to the other, especially at large distances. As a consequence, the plate-wide smoothing may smear out or blur the structures of the Milky Way to some extent. In this section we investigate the influence of the plate-wide smoothing on the density profile with toy models.
We adopt a toy model of the vertical stellar density profile with two exponential components: a thin disk with scale height 0.3 kpc and a thick disk with scale height 0.9 kpc. At Z=0, the thick disk component is 15% of the thin one, and the normalization, i.e., the total stellar density at Z=0, is set to 0.05 pc^-3. Such a configuration is similar to the Milky Way disk in the solar neighborhood.
Considering that LAMOST plates usually overlap with each other over most of the sky, we draw mock stars from the toy model in partly superposed LAMOST-like plates, i.e., circles with a 5-degree field-of-view, centered at b=0^∘, 1^∘, 2^∘, …, 12^∘, 22^∘, …, 82^∘ with l=180^∘. These plates correspond to different vertical heights above the Galactic mid-plane: the larger the distance, the larger the vertical height.
For each plate, at a given Galactocentric radius, we randomly draw 50 mock stars evenly distributed within the circular area of the plate. Note that, even though these mock stars have the same radius, their Z values differ because they are located at various Galactic latitudes within the plate. Following Section <ref>, the stellar densities at the positions of the mock stars are derived by averaging the model density over the area of the plate at the given radius. Thus, although the Z values of the mock stars drawn from the same plate at a given radius differ, the stellar densities assigned to them are the same. This determination of the stellar density is similar to, but simpler than, that applied to the real data in Section <ref>. Although the uncertainties of the distance estimates and the selection-function corrections are ignored, this simplified method is sufficient for investigating the effect of plate-wide smoothing.
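A minimal sketch of the toy density model described above, optionally including the Gaussian-like substructure introduced later in this section, is given below; the normalization choice (total density 0.05 pc^-3 at Z=0) is our reading of the text.

```python
import numpy as np

def toy_vertical_density(Z, with_ring=False):
    """Toy vertical profile: thin disk (h = 0.3 kpc) plus a thick disk
    (h = 0.9 kpc) at 15% of the thin disk at Z = 0, normalized so that
    the total density at Z = 0 is 0.05 pc^-3.  Optionally adds the
    Gaussian-like substructure (peak at |Z| = 1.5 kpc, sigma = 0.3 kpc,
    amplitude 20% of the Z = 0 disk density)."""
    nu0 = 0.05 / 1.15   # assumed normalization of the thin-disk term
    nu = nu0 * (np.exp(-np.abs(Z) / 0.3) + 0.15 * np.exp(-np.abs(Z) / 0.9))
    if with_ring:
        nu += 0.2 * 0.05 * np.exp(-0.5 * ((np.abs(Z) - 1.5) / 0.3) ** 2)
    return nu
```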
§.§.§ The smoothing effect in the vertical disk profile
We first investigate the effect on the vertical density profile. Fig. <ref> (a) shows the result of the vertical density profile test at R=12 kpc. The black solid line indicates the "true" profile of the toy model. The red filled circles represent the density values averaged over the mock stars in different Z bins. The red dashed line displays the best-fit double-exponential profile to the red symbols. The derived stellar densities follow the "true" stellar density profile very well, and the best-fit parameters are consistent with the "true" values with uncertainties of less than 20%.
We then move to a larger distance, R=20 kpc, where the effect of the plate-wide smoothing should be more substantial. The result in Fig. <ref> (b) shows that the shape of the best-fit density profile (red dashed line) derived from the "observed" data (red filled circles) agrees with the "true" profile (black solid line). As in the test at R=12 kpc, the uncertainty of the derived model parameters is less than 20%, without substantial systematic bias. In both cases, at R=12 and 20 kpc, the plate-wide smoothing due to the large field-of-view does not significantly change the result.
§.§.§ The smoothing effect in Monoceros ring-like substructure
In Section <ref>, the well-known Monoceros ring is not substantially exhibited in the outer Galactic disk according to the stellar density estimates. Could the smoothing over the large field-of-view of a LAMOST plate smear out such a substructure? We superpose on the double-exponential toy model an additional Gaussian-like substructure with a peak located at 1.5 kpc and σ of 0.3 kpc. The peak density of the Gaussian-like substructure is 20% of the disk density at Z=0. The analytic form of the substructure is indicated in the second text line of Fig. <ref> (c). The "true" vertical density profile with the Gaussian-like substructure located at R=20 kpc is shown as the black solid line in Fig. <ref> (c). The red filled circles are the approximate density values obtained from the mock data. Compared with the best-fit stellar density of panel (b) (red dashed line), the approximate stellar density values do show a broader bump between Z=1 and 2 kpc, corresponding to the Gaussian-like substructure. This means that even a substructure with a relatively small typical scale of 0.3 kpc can still be identified after the spatial smoothing induced by the large field-of-view, although it may be broadened to some extent. If the substructure is very weak, however, its signal may also be weakened by the broadening effect.
§.§.§ Summary
To summarize, the stellar density profile derived by spatial averaging over the 5-degree field-of-view does not significantly distort the derived Galactic structure. Meanwhile, although smoothing over a relatively large sky area may broaden existing substructures, small-scale substructure with a typical scale of 0.3 kpc can still be identified in the resulting stellar density profile. Therefore, the absence of the Monoceros ring in our result is not likely to be due to the spatial smoothing induced by the applied technique.
§.§ The post-observation selection function
The selection function used in this work is determined by comparing the color-magnitude diagram of the photometric data with that of the final spectroscopic catalog. The selection function of the latter comprises not only the targeting selection, but also the selection effects induced during observations and data reduction.
The original selection function is always altered during the observations and data processing. For instance, a line-of-sight may be designed to be observed multiple times with different magnitude ranges so that a larger range of magnitude is covered; due to weather, however, some designed plates may never be observed. The post-observation selection function for that line-of-sight is then no longer identical to the designed one. As another example, the data reduction may lose spectra with very low signal-to-noise ratios, which occurs more frequently for fainter sources. Therefore, correcting only with the originally designed targeting selection cannot account for the effects of observations and data reduction.
§ CONCLUSIONS
In this paper, we introduce a statistical method to measure the stellar density profile along a given line-of-sight using spectroscopic survey data. This technique can flexibly deal with the stellar density of different stellar populations.
Our validation tests based on Galaxia mock data demonstrate that we can reconstruct the stellar density profile of a sub-population well. Moreover, even if the sub-population contains only a few stars, we can still obtain reasonable density values at the spatial positions at which the stars are located. The tests show that, in the worst cases, the derived stellar density may have a systematic bias of about Δν/ν∼1. However, large surveys like LAMOST observe many plates covering a wide area of the sky. Thus, statistically, averaging the stellar density in the small discrepant regions can reduce or cancel the different systematic biases occurring along different lines-of-sight, and relatively high precision can be achieved when averaging over large areas.
Finally, we apply the method to the LAMOST DR3 RGB stars. For the Galactic outer disk, we find that 1) the disk component still contributes to the total stellar surface density by about 10% at R=19 kpc, implying that our Galaxy has a quite large stellar disk; 2) we do not observe any significant truncation of the disk, but only see a smooth transition from the disk to the halo at around 20 kpc; 3) the Monoceros ring is not seen in our density map of the outer disk; and 4) the disk seems vertically asymmetric beyond R∼12 kpc.
We confirm that the stellar halo is oblate in the inner region (within r∼30 kpc) and becomes spherical in the outer part. If we assume a constant axis ratio of 1.0, the stellar density profile shows a clear broken power-law, similar to some previous works (e.g., Deason et al. <cit.>). However, if we change the constant axis ratio to 0.75, the broken power-law disappears. This means that whether the halo profile is broken or not depends on the axis ratio. The tomographic map of the stellar halo presented in this work shows evidence that the halo has a radially varying axis ratio. Therefore, neither a single nor a broken power-law with constant axis ratio properly reflects the real structure of the stellar halo.
We thank the anonymous referee for the very helpful comments. We thank Li Chen and Jing Zhong for helpful discussions. This work is supported by the Strategic Priority Research Program “The Emergence of Cosmological Structures" of the Chinese Academy of Sciences, Grant No. XDB09000000 and the National Key Basic Research Program of China 2014CB845700. C. L. acknowledges the NSFC under grants 11373032 and 11333003. Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences.
99
[2012]bovy2012 Bovy, J., Rix, H.-W., Liu, C., et al. 2012, , 753, 148
[2016]bovy2016 Bovy, J., Rix, H.-W., Schlafly, E. F., et al. 2016, , 823, 30
[2016]bland2016 Bland-Hawthorn, J., & Gerhard, O. 2016, , 54, 529
[2012]carlin2012 Carlin, J. L., Lépine, S., Newberg, H. J., et al. 2012, Research in Astronomy and Astrophysics, 12, 755
[2015]carlin2015 Carlin, J. L., Liu, C., Newberg, H. J., et al. 2015, , 150, 4
[2010]carraro2010 Carraro, G., Vázquez, R. A., Costa, E., Perren, G., & Moitinho, A. 2010, , 718, 683
[2015]carraro2015 Carraro, G. 2015, Boletin de la Asociacion Argentina de Astronomia La Plata Argentina, 57, 138
[2001]chen2001 Chen, B., Stoughton, C., Smith, J. A., et al. 2001, , 553, 184
[2012]cui2012Cui, X.-Q., Zhao, Y.-H., Chu, Y.-Q., et al. 2012, Research in Astronomy and Astrophysics, 12, 1197
[2011]deason2011 Deason, A. J., Belokurov, V., & Evans, N. W. 2011, , 416, 2903
[2012]deng2012Deng, L.-C., Newberg, H. J., Liu, C. et al. 2012, Research in Astronomy and Astrophysics, 12, 735
[1983]gilmore1983 Gilmore, G., & Reid, N. 1983, , 202, 1025
[2014]feast2014 Feast, M. W., Menzies, J. W., Matsunaga, N., & Whitelock, P. A. 2014, , 509, 342
[2014]green2014 Green, G. M., Schlafly, E. F., Finkbeiner, D. P., et al. 2014, , 783, 114
[2015]green2015 Green, G. M., Schlafly, E. F., Finkbeiner, D. P., et al. 2015, , 810, 25
[2015]hayden2015 Hayden, M. R., Bovy, J., Holtzman, J. A., et al. 2015, , 808, 132
[2008]ivezic2008 Ivezić, Ž., Sesar, B., Jurić, M., et al. 2008, , 684, 287-325
[2008]juric2008 Jurić, M., Ivezić, Ž., Brooks, A., et al. 2008, , 673, 864-914
[2012]liu2012 Liu, C., & van de Ven, G. 2012, , 425, 2144
[2014]liu2014 Liu, C., Deng, L.-C., Carlin, J. L., et al. 2014, , 790, 110
[2014]liux2014 Liu, X.-W., Yuan, H.-B., Huo, Z.-Y., et al. 2014, Setting the scene for Gaia and LAMOST, 298, 310
[2015]liu2015 Liu, C., Fang, M., Wu, Y., et al. 2015, , 807, 4
[2002]lopezcorredoira2002 López-Corredoira, M., Cabrera-Lavers, A., Garzón, F., & Hammersley, P. L. 2002, , 394, 883
[2015]luo2015 Luo, A.-L., Zhao, Y.-H., Zhao, G., et al. 2015, Research in Astronomy and Astrophysics, 15, 1095
[2011]majewski2011 Majewski, S. R., Zasowski, G., & Nidever, D. L. 2011, , 739, 25
[2011]minniti2011 Minniti, D., Saito, R. K., Alonso-García, J., Lucas, P. W., & Hempel, M. 2011, , 733, L43
[2002]newberg2002 Newberg, H. J., Yanny, B., Rockosi, C., et al. 2002, , 569, 245
[1992]robin1992 Robin, A. C., Creze, M., & Mohan, V. 1992, , 400, L25
[2006]strutskie2006 Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, , 131, 1163
[2011]sharma2011 Sharma, S., Bland-Hawthorn, J., Johnston, K. V., & Binney, J. 2011, , 730, 3
[2012]schlafly2012 Schlafly, E. F., Finkbeiner, D. P., Jurić, M., et al. 2012, , 756, 158
[2016]tian2016 Tian, H.-J., Liu, C., Wan, J.-C., et al. 2016, arXiv:1603.06262
[2015]wan2015 Wan, J.-C., Liu, C., Deng, L.-C., et al. 2015, Research in Astronomy and Astrophysics, 15, 1166
[2009]watkins2009 Watkins, L. L., Evans, N. W., Belokurov, V., et al. 2009, , 398, 1757
[2012]widrow2012 Widrow, L. M., Gardner, S., Yanny, B., Dodelson, S., & Chen, H.-Y. 2012, , 750, L41
[2011]wu2011 Wu, Y., Luo, A.-L., Li, H.-N., et al. 2011, Research in Astronomy and Astrophysics, 11, 924
[2014]wu2014 Wu, Y., Du, B., Luo, A., Zhao, Y., & Yuan, H. 2014, Statistical Challenges in 21st Century Cosmology, 306, 340
[2016]xia2016 Xia, Q., Liu, C., Mao, S., et al. 2016, , 458, 3839
[2015]xue2015 Xue, X.-X., Rix, H.-W., Ma, Z., et al. 2015, , 809, 144
[2006]xu2006 Xu, Y., Deng, L. C., & Hu, J. Y. 2006, , 368, 1811
[2007]xu2007 Xu, Y., Deng, L. C., & Hu, J. Y. 2007, , 379, 1373
[2015]xu2015 Xu, Y., Newberg, H. J., Carlin, J. L., et al. 2015, , 801, 105
[2012]yang2012 Yang, F., Carlin, J. L., Liu, C., et al. 2012, Research in Astronomy and Astrophysics, 12, 781
[2012]yao2012 Yao, S., Liu, C., Zhang, H.-T., et al. 2012, Research in Astronomy and Astrophysics, 12, 772
[2015]yuan2015 Yuan, H.-B., Liu, X.-W., Huo, Z.-Y., et al. 2015, , 448, 855
[2013]zacharias2013 Zacharias, N., Finch, C. T., Girard, T. M., et al. 2013, , 145, 44
[2013]zasowski2013 Zasowski, G., Johnson, J. A., Frinchaboy, P. M., et al. 2013, , 146, 81
[2013]zhang2013 Zhang, L., Rix, H.-W., van de Ven, G., et al. 2013, , 772, 108
[2012]zhao2012Zhao, G., Zhao, Y. H., Chu, Y. Q., et al. 2012, RAA, 12, 723
§ TEST THE METHOD WITH SIMULATION DATA AT L=180^∘ AND B=0^∘
In this section we provide an additional test of the method using Galaxia simulation data located at l=180^∘ and b=0^∘, where the extinction is significantly large, with a field of view of 20 square degrees. Fig. <ref> shows the test result with the selection function T1, while Fig. <ref> shows the test result with the selection function T2. Under both selection functions, the performance of the stellar density estimates for stars with higher extinction shows no significant difference from the previous tests shown in Figs. <ref> and <ref>.
|
http://arxiv.org/abs/1701.07485v4 | 20170125210511 | Relay-Assisted Mixed FSO/RF Systems over Málaga-$\mathcal{M}$ and $κ$-$μ$ Shadowed Fading Channels | [
"Imène Trigui",
"Nesrine Cherif",
"Sofiène Affes"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
Relay-Assisted Mixed FSO/RF Systems over Málaga-ℳ and κ-μ Shadowed Fading Channels
Imène Trigui, Nesrine Cherif, and Sofiène Affes
INRS-EMT, 800, de la Gauchetière Ouest, Bureau
6900, Montréal, H5A 1K6, Qc, Canada.
{itrigui, nesrine.cherif, affes}@emt.inrs.ca
December 30, 2023
=========================================================================================================================================================================================
This letter presents a unified analytical framework for the computation of the ergodic capacity and the outage probability of relay-assisted mixed FSO/RF transmission. In addition to accounting for different FSO detection techniques, the mathematical model offers a twofold unification of mixed FSO/RF systems by considering mixed Málaga-ℳ/κ-μ shadowed fading, which includes as special cases nearly all linear turbulence/fading models adopted in the open literature.
§ INTRODUCTION
Recently, free-space optical (FSO) communications have gained significant attention due to their advantages of higher bandwidth in unlicensed spectrum and higher throughput compared to their RF counterparts <cit.>. Hence, combining FSO and RF technologies arises as a promising solution for securing connectivity between the RF access and the fiber-optic-based backbone networks. As such, there has been prominent interest in mixed FSO/RF systems where RF transmission is used at one hop and FSO transmission at the other <cit.>-<cit.>. Most contributions within this research line consider restrictive irradiance and channel gain probability density function (PDF) models for the FSO and RF links, respectively. The most commonly utilized models for the irradiance in FSO links are the lognormal and the Gamma-Gamma (𝒢-𝒢) (<cit.>,<cit.> and references therein). Recently, a new generalized statistical model, the Málaga-ℳ distribution, was proposed in <cit.> to model the irradiance fluctuation of an unbounded optical wavefront propagating through a turbulent medium under all irradiance conditions. Characterized in <cit.> as a mixture of Generalized-K and discrete Binomial distributions, the Málaga-ℳ distribution unifies most statistical models exploited so far and is able to reflect a wider range of turbulence conditions <cit.>, <cit.>. On the RF side, previous works typically assume either Rayleigh or Nakagami-m fading <cit.>,<cit.>, thereby lacking the flexibility to account for disparate signal propagation mechanisms such as those characterizing 5G communications, which will accommodate a wide range of usage scenarios with diverse link requirements. To bridge this gap in the literature, the κ-μ shadowed fading model, recently derived in <cit.>, is an attractive proposition. In addition to offering an excellent fit to the fading observed in a range of real-world applications (e.g., device-to-device and body-centric fading channels <cit.>), the κ-μ shadowed distribution encompasses several RF channel models such as Nakagami-m, Rayleigh, Rice, κ-μ, and shadowed Rician fading <cit.>. This new channel fading model offers far better and much more flexible representations of practical LOS (line-of-sight), NLOS (non-LOS), and shadowed fading channels than the Rayleigh and Nakagami-m distributions.
Under the assumption of AF relaying, and taking into account the effect of pointing errors while considering both heterodyne and intensity modulation/direct detection (IM/DD) techniques, we derive closed-form expressions for the ergodic capacity and outage probability of dual-hop FSO/RF systems over Málaga-ℳ/κ-μ shadowed channels. We further pursue a high signal-to-noise ratio (SNR) analysis to derive the diversity order.
§ CHANNEL AND SYSTEM MODELS
We consider a relay-assisted mixed FSO/RF transmission composed of both Málaga-ℳ with pointing errors and κ-μ shadowed fading environments. The source communicates with the destination through an intermediate relay, able to activate both heterodyne and IM/DD detection techniques at the reception of the optical beam.
The FSO (S-R) link irradiance is assumed to follow a Málaga-ℳ distribution with pointing errors impairments for which the PDF of the irradiance, I, is given by <cit.>
f_I(x) = [ξ^2 A/(xΓ(α))] ∑_k=1^β (b_k/Γ(k)) G_1,3^3,0[ (αβ/(gβ+Ω)) (x/A_0) | ξ^2+1 ξ^2,α,k],
where ξ is the ratio between the equivalent beam radius and the pointing error displacement standard deviation (i.e., jitter) at the relay (for negligible pointing errors ξ→∞), A_0 defines the pointing loss <cit.>,
A = α^{α/2} [gβ/(gβ+Ω)]^{β+α/2} g^{-1-α/2} and b_k = binom(β-1, k-1) (gβ+Ω)^{1-k/2} [(gβ+Ω)/(αβ)]^{(α+k)/2} (Ω/g)^{k-1} (α/β)^{k/2}, where α, β, g, and Ω are the fading parameters related to the atmospheric turbulence conditions <cit.>. Moreover in (<ref>), G_p, q^m, n [·] and Γ(·) stand for the Meijer-G <cit.> and the gamma <cit.> functions, respectively. It is worth highlighting that the ℳ distribution unifies most of the proposed statistical models characterizing the optical irradiance in homogeneous and isotropic turbulence <cit.>. Hence both 𝒢-𝒢 and K models are special cases of the Málaga-ℳ distribution, as they derive mathematically from (<ref>) by setting (g=0, Ω=1) and (g≠0, Ω=0 or β=1), respectively <cit.>.
The RF (R-D) link experiences κ-μ shadowed fading with non-negative real shape parameters κ, μ, and m, for which the PDF of the instantaneous SNR, γ_2, is given by <cit.>
f_γ_2(x) = [μ^μ m^m (1+κ)^μ/(Γ(μ) γ̅_2 (μκ+m)^m)] (x/γ̅_2)^{μ-1} e^{-μ(1+κ)x/γ̅_2} _1F_1(m, μ; [μ^2 κ(1+κ)/(μκ+m)] x/γ̅_2),
where _1F_1(·) is the confluent hypergeometric function <cit.> and γ̅_2 = 𝔼[γ_2]. This fading model jointly includes large-scale and small-scale propagation effects, by considering that only the dominant specular components (DSCs) are affected by Nakagami-m distributed shadowing <cit.>. The κ-μ shadowed distribution is an extremely versatile fading model that includes as special cases nearly all linear fading models pertaining to LOS and NLOS scenarios, such as κ-μ (m→∞), Nakagami-m (μ=m and κ→0), Rayleigh (μ=m=1 and κ→0), and Rice (μ=1, κ=K and m→∞), to name a few <cit.>.
Assuming AF relaying with channel state information (CSI), the end-to-end SNR can be expressed as <cit.>
γ = γ_1 γ_2/(γ_1+γ_2+1),
where γ_1 = (A_0 h(g+Ω))^{-r} μ_r I^r is the instantaneous SNR of the FSO link (S-R), with r the parameter describing the detection technique at the relay (i.e., r=1 is associated with heterodyne detection and r=2 with IM/DD), μ_r the electrical SNR of the FSO hop <cit.>, and h = ξ^2/(ξ^2+1). In particular, for r=1, μ_1 = μ_heterodyne = 𝔼[γ_1] = γ̅_1, and for r=2, μ_2 = μ_IM/DD = μ_1 αξ^2(ξ^2+1)^{-2}(ξ^2+2)(g+Ω)/((α+1)[2g(g+2Ω)+Ω^2(1+1/β)]) <cit.>.
§ EXACT PERFORMANCE ANALYSIS
In this section, a new mathematical framework investigating the average capacity and the outage probability of mixed FSO/RF transmission over Málaga-ℳ (with pointing errors) and shadowed κ-μ fading environments, accounting for both detection techniques, is presented. To the best of the authors' knowledge, few works have considered these metrics for mixed FSO/RF systems, and mostly for mixed 𝒢-𝒢/Nakagami-m fading (<cit.>,<cit.> and references therein). This paper completes and extends the efforts of <cit.>-<cit.> by unifying the ergodic capacity and the outage probability analysis for any turbulence/fading model under both types of detection techniques.
§.§ Ergodic Capacity
Hereafter, we provide capacity formulas for the considered system by using the complementary moment generating function (CMGF)-based approach <cit.> as
C ≜ 𝔼[ln(1+γ)]/(2 ln(2)) = [1/(2 ln(2))] ∫_0^∞ s e^{-s} M^(c)_γ_1(s) M^(c)_γ_2(s) ds,
where M^(c)_X(s)=∫_0^∞e^-sxF^(c)_X(x) dx stands for the CMGF with F^(c)_X(x) denoting the complementary cumulative distribution function (CCDF) of X.
The ergodic capacity of the mixed Málaga-ℳ/κ-μ shadowed fading FSO/RF transmission system under heterodyne and IM/DD detection techniques, with pointing errors taken into account, is given for
* Integer m, μ, with m ⩾μ as
C = [ξ^2 A r μ_r B^{-r}/(2 ln(2) Γ(α))] ∑_k=1^β (b_k/Γ(k)) ∑_l=1^m [χ_l/Γ(m)] T(θ_2,l,m),
where B=αβ h (g+Ω)/[(gβ+Ω)], and,
χ_l = binom(m,l) θ_2^l - binom(m-μ,l) θ_1^l for 1 ⩽ l ⩽ m-μ, and χ_l = binom(m,l) θ_2^l for l > m-μ,
with θ_1 = γ̅_2/(μ(1+κ)) and θ_2 = γ̅_2(μκ+m)/(μ m (1+κ)). Moreover in (<ref>),
T(x,y,z) = H^{0,1:1,4:1,1}_{1,0:4,3:1,1}[ μ_r/B^r; x | (-y;1,1); — | (σ,Σ); (ϕ,Φ) | (1-z,1); (0,1) ],
where H[·,·] denotes the Fox-H function (FHF) of two variables <cit.> also known as the bivariate FHF whose Mathematica implementation may be found in <cit.>, whereby (σ,Σ)= (1-r,r),(1-ξ^2-r,r),(1-α-r,r),(1-k-r,r) and (ϕ,Φ)=(0,1),(-ξ^2-r,r),(-r,r). Moreover, it becomes for
* Integer m, μ, with m < μ as
C = [ξ^2 A r μ_r/(2 ln(2) Γ(α) B^r)] ∑_k=1^β (b_k/Γ(k)) ∑_p=0^m ∑_q=0^μ-m, (p,q)≠(0,0) binom(m,p) binom(μ-m,q) θ_2^p θ_1^q (∑_i=1^μ-m [Δ_1i/Γ(μ-m-i+1)] T(θ_1,q+p,μ-m-i+1) - ∑_i=1^m [Δ_2i/Γ(m-i+1)] T(θ_2,q+p,m-i+1)),
where Δ_1i = (-1)^m binom(m+i-2, i-1) (m/(μκ+m))^m (μκ/(μκ+m))^{-m-i+1} and Δ_2i = (-1)^{i-1} binom(μ-m+i-2, i-1) (m/(μκ+m))^{i-1} (μκ/(μκ+m))^{m-μ-i+1}.
Capitalizing on (<ref>) and recalling the fact that the FSO link's CMGF M^(c)_γ_1(s)=ℒ(F^(c)_γ_1(x)) where ℒ denotes the Laplace transform operator and the FSO link's CCDF is obtained as
F^(c)_γ_1(x) = F^(c)_I( A_0 h (g+Ω) ( x/μ_r)^1/r)
(a)= ξ^2A/Γ(α)∑_k=1^βb_k/Γ(k) G_2,4^4,0[B (x/μ_r)^1/r| ξ^2+1,1 0,ξ^2,α,k].
where (a) follows from integrating (<ref>) using <cit.>. Then, expressing the Meijer-G function in (<ref>) in terms of Fox-H function by means of <cit.> and resorting to <cit.> with some additional manipulations using <cit.> yield
M^(c)_γ_1(s)=ξ^2A r μ_r/Γ(α)B^r∑_k=1^βb_k/Γ(k) H_4,3^1,4[μ_r/B^r s| (σ,Σ) (ϕ,Φ)],
where H_p, q^m, n [·] is the Fox-H function <cit.>.
On the RF side, the CMGF of γ_2 under shadowed κ-μ fading is given by
M^(c)_γ_2(s) = [1-M_γ_2(s)]/s (a)= [1-(θ_1 s+1)^{m-μ}(θ_2 s+1)^{-m}]/s,
where (a) follows from the recent result in <cit.>. By assuming integer-valued m and μ, the RF link's CMGF can be rewritten after resorting to the transformation Γ(α)(1+z)^-α= H_1,1^1,1 [ z | (1-α,1) (0,1)] in <cit.> as
M^(c)_γ_2(s) (a)= ∑_l=1^m [χ_l s^{l-1}/Γ(m)] H_1,1^1,1[θ_2 s | (1-m,1) (0,1)], for μ ⩽ m,
and
M^(c)_γ_2(s) (b)= ∑_p=0^m ∑_q=0^μ-m, (p,q)≠(0,0) binom(m,p) binom(μ-m,q) θ_2^p θ_1^q s^{p+q-1} (∑_i=1^μ-m Δ_1i H_1,1^1,1[θ_1 s | (m+i-μ,1) (0,1)]/Γ(μ-m-i+1) - ∑_i=1^m Δ_2i H_1,1^1,1[θ_2 s | (i-m,1) (0,1)]/Γ(m-i+1)), for μ > m,
where (a) and (b) follow after applying the binomial expansion and the partial fraction decomposition <cit.>, respectively. Plugging (<ref>), (<ref>) and (<ref>) into (<ref>) and resorting to <cit.> complete the proof.
§.§ Outage Probability
The quality of service (QoS) of the considered mixed FSO/RF system is ensured by keeping the instantaneous end-to-end SNR, γ, above a threshold γ_th. The probability of outage in the mixed FSO/RF relaying setup is expressed as
P_out = Pr[γ<γ_th] = Pr[γ_1γ_2/(γ_1+γ_2+1) < γ_th].
Marginalizing over γ_1 and letting u = γ_1/γ_th in (<ref>) yield
P_out(γ_th) = 1 - γ_th ∫_1^∞ F^(c)_γ_2(γ_th + (1+γ_th)/(u-1)) f_γ_1(uγ_th) du,
where F^(c)_γ_2 is the CCDF of γ_2 and f_γ_1 is the PDF of the first-link SNR obtained from deriving (<ref>) with respect to x as
f_γ_1(x) = [ξ^2 A B^r/(Γ(α)μ_r)] ∑_k=1^β (b_k/Γ(k)) H_1,3^3,0[B^r x/μ_r | (ξ^2+1-r,r) (ξ^2-r,r),(α-r,r),(k-r,r)].
Plugging (<ref>) and the RF link's CCDF expression recently derived in <cit.> for integer m, μ with m ⩾ μ into the above integral, and Taylor-expanding the exponential and power terms, we infer that
P_out(γ_th) = 1 - [ξ^2 A B^r γ_th e^{-γ_th/θ_2}/(Γ(α)μ_r)] ∑_k=1^β (b_k/Γ(k)) ∑_i=0^m-μ ∑_j=0^m-i-1 ∑_p=0^j binom(j,p) [Υ_i/(j! θ_2^j)] γ_th^{j-p} (γ_th+1)^p × ℐ,
with Υ_i = binom(m-μ,i) (m/(μκ+m))^i (μκ/(μκ+m))^{m-μ-i}, and ℐ given by
ℐ = ∑_q=0^∞ ∑_l=0^∞ [(-1)^q (p+q)_l (γ_th+1)^q/(q! l! θ_2^q)] ∫_1^∞ u^{-p-q-l} H_1,3^3,0[B^r γ_th u/μ_r | (ξ^2+1-r,r) (ξ^2-r,r),(α-r,r),(k-r,r)] du.
Substituting (<ref>) into (<ref>) after resorting to <cit.> yields the outage probability of mixed FSO/RF in Málaga-ℳ/κ-μ shadowed fading (μ⩽ m ) environments with pointing errors under both detection techniques as
P_out(γ_th) = 1 - [ξ^2 A B^r γ_th e^{-γ_th/θ_2}/(Γ(α)μ_r)] ∑_k=1^β (b_k/Γ(k)) ∑_i=0^m-μ ∑_j=0^m-i-1 (Υ_i/θ_2^j) Ξ(θ_2),
where
Ξ(x) = ∑_p=0^j ∑_q=0^∞ ∑_l=0^∞ [(-1)^q (p+q)_l binom(j,p) γ_th^{j-p} (γ_th+1)^{p+q}/(j! q! l! x^q)] H_2,4^4,0[B^r γ_th/μ_r | (σ_1,Σ_1) (ϕ_1,Φ_1)],
with (a)_n standing for the Pochhammer symbol <cit.>, (σ_1,Σ_1) =(ξ^2+1-r,r),(l+p+q,1), and (ϕ_1,Φ_1)=(l+p+q-1,1),(ξ^2-r,r),(α-r,r),(k-r,r).
Similar to (<ref>), and using <cit.>, the outage probability of mixed Málaga-ℳ FSO/shadowed κ-μ RF (m < μ) is
P_out(γ_th) = 1 - [ξ^2 A B^r γ_th/(Γ(α)μ_r)] ∑_k=1^β (b_k/Γ(k)) (e^{-γ_th/θ_1} ∑_i=1^μ-m ∑_j=0^μ-m-i (Δ_1i/θ_1^j) Ξ(θ_1) + e^{-γ_th/θ_2} ∑_i=1^m ∑_j=0^m-i (Δ_2i/θ_2^j) Ξ(θ_2)).
§ ASYMPTOTIC ANALYSIS
To gain more insights into the effect of turbulence/fading parameters on both the ergodic capacity and the outage probability, we study hereafter their asymptotic behaviors. To this end, we invoke the asymptotic expansions of the Fox-H function <cit.> and the Mellin-Barnes integrals involving the bivariate Fox-H function <cit.>.
§.§ Asymptotic Ergodic Capacity
We assume that the average SNR of the RF
link γ̅_2 goes to infinity for a fixed and finite valued average
SNR in the FSO link. Then, resorting to the Mellin-Barnes representation of the bivariate FHF <cit.> in (<ref>), and evaluating the residues at the poles { -m, -1-l }, yields the asymptotic capacity for m ≥μ as
C^∞ = ξ^2 A r μ_r/2 ln(2)Γ(α)B^r∑_k=1^β∑_l=1^mb_kχ_l/Γ(k)( H_5,4^2,5[μ_r/B^r θ_2| (-l,1),(σ,Σ)(m-1-l,1),(ϕ,Φ)]/θ_2^1+lΓ(m)
+ θ_2^-m H_5,3^1,5[μ_r/B^r| (m-l,1),(σ,Σ) (ϕ,Φ)]).
It is worth noting that (<ref>) is much easier and
faster to calculate than the exact capacity in (<ref>).
Moreover, C^∞ for m<μ follows along the same lines as (<ref>) by considering (<ref>).
§.§ Asymptotic Outage Probability
At high SNR values, the outage probability of the mixed FSO/RF relaying system can
be expressed as P_out≃ (G_c SNR )^-G_d, where G_c and G_d
denote the
coding gain and the diversity order of the system, respectively. Hence, letting μ_r →∞ while keeping only the low-order terms in (<ref>), i.e., those with q+l < 1, and then applying <cit.> yields the asymptotic CDF for m ≥μ as
P_out^∞ = 1-ξ^2Ae^-γ_th/θ_2/Γ(α)∑_k=1^βb_k/Γ(k)∑_i=0^m-μ∑_j=0^m-i-1∑_p=0^jjpΥ_iγ_th^j-pθ_2^-j/j!(γ_th+1)^-p
∑_t=1^41/Φ_1t∏_s ≠ ts=1^4Γ(ϕ_1s-ϕ_1tΦ_1s/Φ_1t)/∏_s=1^2Γ(σ_1s-ϕ_1tΣ_1s/Φ_1t)(B^rγ_th/μ_r)^ϕ_1t/Φ_1t+1.
Compared
to (<ref>) which is expressed in terms of Fox-H
function, (<ref>) includes only finite summations of elementary
functions. The diversity gain of the studied system over atmospheric turbulence conditions is inferred after applying e^{-γ_th/θ_2}≈ 1-γ_th/θ_2, valid for γ̅_2≫ 1, to (<ref>) as G_d=min{μ, ξ^2/r,α/r,β/r}.
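For convenience, the diversity order can be evaluated directly for both detection techniques; a trivial Python sketch (parameter values are examples only):

def diversity_order(mu, xi, alpha, beta, r):
    # G_d = min{mu, xi^2/r, alpha/r, beta/r}; r=1: heterodyne, r=2: IM/DD.
    return min(mu, xi**2 / r, alpha / r, beta / r)

for r in (1, 2):
    print(r, diversity_order(mu=1, xi=1.2, alpha=2.29, beta=2, r=r))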
For μ=m, κ→0, g=0 and Ω=1, the CDF in (<ref>) reduces to P_out^∞ for 𝒢-𝒢/Nakagami-m fading
channels as
P_out^∞ = 1-ξ^2e^-mγ_th/γ̅_2/Γ(α)Γ(β)∑_j=0^m-1∑_p=0^j∑_t=1^4jpm^jγ_th^-p(αβ h)^r(ϕ_1t/Φ_1t+1)/j!Φ_1t(γ_th+1)^-p
∏_s ≠ ts=1^4Γ(ϕ_1s-ϕ_1tΦ_1s/Φ_1t)/∏_s=1^2Γ(σ_1s-ϕ_1tΣ_1s/Φ_1t)(γ_th/μ_r)^ϕ_1t/Φ_1t+1(γ_th/γ̅_2)^j,
thereby recovering <cit.>, i.e., G_d=min{m, ξ^2/r,α/r,β/r}.
Similar to (<ref>) while considering (<ref>), the asymptotic outage probability can be derived in
closed form when m< μ. However, the derived expression is omitted due to
space limitations.
§ NUMERICAL RESULTS
Fig. <ref> investigates the impact of turbulence-induced fading and pointing errors on the system performance when the RF link is subject to the Rician shadowed fading distribution (κ=5, μ=1, m=2). As expected, the ergodic capacity deteriorates as the pointing-error displacement standard deviation decreases, i.e., for smaller ξ, or as the turbulence fading parameters decrease, i.e., smaller α and β, where we associate strong turbulence with (α,β)=(2.29,2) and moderate turbulence with (α,β)=(4.2,3). At high SNR, the asymptotic expansion in (<ref>) matches its exact counterpart very well, which confirms the validity of our mathematical analysis for different parameter settings.
Fig. <ref> depicts the outage probability of mixed FSO/RF relay systems over Málaga-ℳ and shadowed κ-μ fading channels for both heterodyne and IM/DD detection at the relay. Throughout our numerical experiments, we found that, regardless of the average SNRs and turbulence/fading settings, accurate analytical curves can be obtained by truncating the infinite sums at q=10 and l=5 terms. In the legend, note that we have identified some particular turbulence and fading distribution cases that stem directly from the general Málaga-ℳ and κ-μ shadowed fading scenarios, respectively. The exact match with Monte-Carlo simulation results confirms the precision of the theoretical analysis of Section III.B. Moreover, we notice that the exact and asymptotic expansions in (<ref>) agree very well at high SNRs.
§ CONCLUSION
We have presented a unified analytical framework for relay-assisted mixed FSO/RF systems that accommodates generic turbulence/fading models, namely the Málaga-ℳ distribution with pointing errors and the shadowed κ-μ distribution, which together account for shadowed LOS and NLOS scenarios. The results demonstrate the unification of various FSO-turbulence/RF-fading scenarios into single closed-form expressions for the ergodic capacity and the outage probability, while accounting for both IM/DD and heterodyne detection techniques at the relay.
99
Yang2 F. Yang, J. Cheng, and T. Tsiftsis, "Free-space optical communication with nonzero boresight pointing errors," IEEE Trans. Commun., vol. 62, no. 2, pp. 713–725, Feb. 2014.
yang P. V. Trinh, T. Cong Thang, and A. T. Pham, "Mixed mmWave RF/FSO relaying systems over generalized fading channels with pointing errors," IEEE Photon. J., vol. 9, no. 1, pp. 1–14, Feb. 2017.
emmna E. S.-Nasab and M. Uysal, "Generalized performance analysis of mixed RF/FSO cooperative systems," IEEE Trans. Wireless Commun., vol. 15, no. 1, pp. 714–727, Jan. 2016.
zedini E. Zedini, H. Soury, and M.-S. Alouini, "On the performance analysis of dual-hop mixed FSO/RF systems," IEEE Trans. Wireless Commun., vol. 15, no. 5, pp. 3679–3689, May 2016.
ansari I. S. Ansari, F. Yilmaz, and M.-S. Alouini, "Performance analysis of free-space optical links over Málaga M-turbulence channels with pointing errors," IEEE Trans. Wireless Commun., vol. 15, pp. 91–102, Jan. 2016.
navas2 J. M. Garrido-Balsells et al., "Novel formulation of the ℳ model through the Generalized-K distribution for atmospheric optical channels," Optics Express, pp. 6345–6358, 2015.
paris J. F. Paris, "Statistical characterization of κ-μ shadowed fading," IEEE Trans. Veh. Technol., vol. 63, no. 2, pp. 518–526, Feb. 2014.
Cotton S. L. Cotton, "Human body shadowing in cellular device-to-device communications: channel modeling using the shadowed κ-μ fading model," IEEE J. Sel. Areas Commun., vol. 33, no. 1, pp. 111–119, Jan. 2015.
grad I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series and Products, 5th ed. Academic Press, 1994.
imen2 I. Trigui, S. Affes, and A. Stéphenne, "Capacity scaling laws in interference-limited multiple-antenna AF relay networks with user scheduling," IEEE Trans. Commun., vol. 64, pp. 3284–3295, Aug. 2016.
Lei H. Lei et al., "Secrecy capacity analysis over α-μ fading channels," IEEE Commun. Lett., no. 99, pp. 1–4, Feb. 2017.
mathai A. M. Mathai, R. K. Saxena, and H. J. Haubold, The H-Function: Theory and Applications. Springer Science & Business Media, 2009.
paris4 F. J. Lopez-Martinez, J. F. Paris, and J. M. Romero-Jerez, "The κ-μ shadowed fading model with integer fading parameters," IEEE Trans. Veh. Technol., DOI 10.1109/TVT.2017.2678430, 2017.
mittal P. Mittal and K. Gupta, "An integral involving generalized function of two variables," Proc. Indian Acad. Sci., vol. 75, no. 3, pp. 117–123, 1972.
kilbas A. Kilbas and M. Saigo, H-Transforms: Theory and Applications. CRC Press, 2004.
|
http://arxiv.org/abs/1701.07470v2 | 20170125201202 | Decidability, Complexity, and Expressiveness of First-Order Logic Over the Subword Ordering | ["Simon Halfon", "Philippe Schnoebelen", "Georg Zetzsche"] | cs.LO | ["cs.LO", "cs.FL"] |
Decidability, Complexity, and Expressiveness of First-Order Logic Over the Subword Ordering

Simon Halfon (halfon@lsv.fr), Philippe Schnoebelen (phs@lsv.fr), and Georg Zetzsche (zetzsche@lsv.fr)

The third author is supported by a fellowship within the Postdoc-Program of the German Academic Exchange Service (DAAD) and by Labex DigiCosme, Univ. Paris-Saclay, project VERICONISS.

December 30, 2023
=====================
We consider first-order logic over the subword ordering on finite
words where each word is available as a constant. Our first
result is that the Σ_1 theory is undecidable (already over
two letters).
We investigate the decidability border by considering fragments where
all but a certain number of variables are alternation bounded, meaning
that the variable must always be quantified over languages with a
bounded number of letter alternations. We prove that when at most two
variables are not alternation bounded, the Σ_1 fragment is
decidable, and that it becomes undecidable when three variables are
not alternation bounded. Regarding higher quantifier alternation
depths, we prove that the Σ_2 fragment is undecidable already
for one variable without alternation bound and that when all variables
are alternation bounded, the entire first-order theory is decidable.
§ INTRODUCTION
A subsequence of a (finite) sequence u is a
sequence obtained from u by removing any number of elements. For
example, if u=(, , , , ) then u'=(,
,) is a subsequence of u, a fact we denote with u' ⊑ u. Other examples that work for any u are u ⊑ u (remove nothing) and () ⊑ u. In the rest of this paper, we shall use
the terminology from formal methods and will speak of words and
their subwords rather than finite sequences.
Reasoning about subwords occurs prominently in many areas of computer
science, e.g., in pattern matching (of texts, of DNA strings, etc.),
in coding theory, in theorem proving, in algorithmics, etc. Closer to
our own motivations, the automatic verification of unreliable channel
systems and related problems involves the subword ordering or some of
its
variants <cit.>.
Our experience is that reasoning about subwords and related concepts
(e.g., shuffles of words) involves ad hoc
techniques quite unlike the standard tools that work well with
prefixes and suffixes <cit.>.
§.§ The logic of subwords
In this paper we consider the first-order logic FO(A^*, ⊑) of
words over some alphabet A={a,b,c,…} equipped with the subword relation ⊑.
Our main objective is to understand how and when one can decide
queries formulated in this logic, or decide whether a given formula is
valid.
For example, we consider formulas like
*ϕ_1:∀ u,u',u”:
u ⊑ u' ∧ u' ⊑ u” → u ⊑ u” ,
*ϕ_2:∃ u: u
u
u
,
*ϕ_3:∀ u, v: ∃ s:
(u ⊑ s ∧ v ⊑ s ∧ ∀ t: (u ⊑ t ∧ v ⊑ t) → s ⊑ t)
.
Here φ_1 states that the subword relation is transitive
(which it is).
More interesting is φ_2, stating that it is possible that
a word contains both and as
subwords but not . This formula is true and,
beyond knowing its validity, one is also interested in
solutions: can we design a constraint solver that will produce
a witness, e.g., u=bcdeabcd, or more generally the set of
solutions?
Our third example, φ_3, states that words ordered by
subwords are an upper semilattice. This is a more complex formula with
Π_3 quantifier alternation. It is not valid in general (e.g., ab and ba have no lub) but this depends on the alphabet A at hand: φ_3 holds if A is a singleton alphabet, i.e., {a}^* ⊨ φ_3 but {a,b}^* ⊭ φ_3.
We say that formulas like φ_1 or φ_3 where
constants from A^* do not appear are in the pure fragment.
Formally, there are two logics at hand here. The pure logic is the logic of the purely relational structure (A^*; ⊑) while the extended logic is over the expansion (A^*; ⊑, (w)_{w∈A^*}) where there is a constant symbol for every word w in A^*.
As we just illustrated with φ_3, the validity of a
formula may depend on the underlying alphabet even for the pure
fragment. We note that this phenomenon is not limited to the
degenerate case of singleton alphabets. Indeed, observe that it is
possible to state that u is a letter, i.e., is a word of length 1,
in the pure fragment:
∃ z:∀ x: z ⊑ x ∧ (x ⊑ u → (u ⊑ x ∨ x ⊑ z))
.
Thus, even in the pure fragment, one can state that A contains
2, 3, …, or exactly n letters. Similarly one can state that A is infinite by
saying that no word contains all letters.
§.§ State of the art
Relatively little is known about deciding the validity of A
formulas and about algorithms for computing their solutions. By comparison, it
is well known that the Σ_2-theory of (A^*,·), the
logic of strings with concatenation, is
undecidable <cit.>, and that its
Σ_1 fragment (aka “word equations”) is decidable in PSPACE
<cit.>. Moreover, introducing counting
predicates leads to an undecidable Σ_1 fragment <cit.>.
Regarding the logic of subwords, Comon and
Treinen showed undecidability for an extended logic
(A^*, ⊑, p_a) where A={a,b,c} has three
letters and p_a is a unary function that prepends a
in front of a word, hence is a restricted form of
concatenation <cit.>. Kuske showed that, when only
the subword predicate is allowed, the logic FO(A^*, ⊑) is
undecidable and already its Σ_3 fragment is undecidable when
|A|≥ 2. Kudinov et al. considered definability in
(A^*; ⊑) and showed that the predicates definable in
(A^*; ⊑) are exactly the arithmetical
predicates[Those
that are invariant under the automorphisms of the
structure] <cit.>.
Kuske's result on the Σ_3 theory leaves open the question
whether smaller fragments are decidable. Karandikar and Schnoebelen
showed that the Σ_2 theory is undecidable <cit.> and
this is tight since the Σ_1 fragment is decidable, in fact
NP-complete <cit.>.
Karandikar and Schnoebelen
also showed that the two-variable fragment FO^2(A^*, ⊑) is
decidable <cit.> and that it has an elementary complexity
upper bound <cit.>. Decidability extends to
the logic FO^2(A^*, ⊑, R_1,R_2,…) where arbitrary regular languages
(monadic predicates) are allowed.
§.§ Objectives of this paper
We are interested in solving constraints built with the subword
ordering. This corresponds to the Σ_1 fragment but beyond
deciding validity, we are interested in computing sets of solutions: a
formula like φ_2 can be seen as a conjunctive set of
constraints, “x
x x” that define a set of words (a
set of tuples when there are several free variables).
A first difficulty is that Kuske's decidability result for the
Σ_1 fragment only applies to the pure fragment, where constants
are not allowed. That is, we know how to decide the validity of
formulas like φ_1 but not like φ_2.
However, using constants inside constraints is natural and
convenient. In particular, it makes it easy to express
piecewise testable constraints (see below), and
we would like to generalise Kuske's result to the extended logic.
We note that, in principle, the difference between the pure and the
extended logic is only superficial since, up to automorphisms,
arbitrary words can be
defined in the logic,[This is a common situation,
shared with, e.g., (A^*,·) and (ℕ,<).]
see <cit.>. However this requires
some universal quantification (even when defining the empty word) that are not allowed when restricting to the
Σ_1 fragment. So this avenue is closed.
In particular, allowing constants makes it easy to express bounded quantification. For example,
we usually write bounded quantifications like
∀ u∈^+^*: ⋯
instead of the equivalent
∀ u:
ab u bbucbubau⋯
We note that any unary predicate u∈ L for L a piecewise testable
language (or a “PT language”, see definition in
Section <ref>) can be expressed as above by requiring and
forbidding some constant subwords. As a consequence, we shall freely
use membership in piecewise testable languages as abbreviations.
§.§ Summary of results
Our first result is that, when constants are allowed, the
Σ_1 fragment of A is actually
undecidable.
In fact the Σ_1 fragment of [W]A, where a single
constant W∈ A^* can be named, is undecidable unless W is too
simple.
These results hold as soon as A contains two distinct letters
and exhibit a sharp contrast between the pure and the extended logic.
We found this very surprising because, before hitting on undecidability,
we had already developed algorithms that solve large classes of Σ_1 constraints.
Our second result identifies a key factor influencing decidability: it
turns out that free variables ranging over a “thin”
language like L=a^+b^*c^* are easier to handle than
variables ranging over a “wide”
language like L'=(a+b)^*. The key difference is that a thin language
only allows a bounded number of letter changes (in L we have a's, then
b's, then c's) while a wide language contains words with arbitrarily
many alternations between distinct letters.
These observations lead to a new descriptive complexity measure for
the formulas in A. The associated fragments, denoted ij
for i,j∈ℕ, consist of all Σ_i formulas where j variables, say
x_1,x_2,…,x_j can be used without any restrictions, while all the other
variables must be restricted with respect to letter alternations, say using
x∈ (a_1^* a_2^*⋯ a_n^*)^ℓ for some ℓ∈ℕ and assuming that
a_1,…,a_n is a fixed enumeration of A. In computer-aided
verification, such bounded quantifications occur in the analysis of bounded
context-switching protocols.
Within this classification framework, we can delineate a precise
undecidability landscape. The 12 fragment is decidable
while 13 is undecidable even for |A|=2. The
20 fragment is decidable while 21 is not. In
fact, when all variables are alternation bounded, the entire
first-order theory is decidable.
The computational complexity of all mentioned fragments is summarized
in <ref>. Note that, in this table, Σ^exp_n denotes the
n-th level of the weak EXP hierarchy, which lies between NEXP and
EXPSPACE <cit.>.
Finally, we offer a series of expressiveness results showing how various
predicates like concatenation or length function can, or cannot, be defined in
the ij fragments. As demonstrated in the paper, expressiveness
results are crucial to obtain hardness results. Beyond their theoretical
interest, and since pinning down precise properties of words is not easy when
only the subword ordering is available, these results provide a welcome
intermediate language for defining more complex formulas.
§.§ Related work
We already mentioned works on the logic of concatenation, or the
two-variable fragment ^2(A^*,). Because undecidability
appears so easily when reasoning about words, the focus is often on
restricted fragments, typically Σ_1, aka “constraint
solving”. Decision methods for constraints over words have been
considered in several contexts but this usually does not include the
subsequence predicate: these works rather consider the prefix
ordering, and/or membership in a regular language, and/or functions
for taking contiguous subsequences or computing the length of
sequences, see, e.g., <cit.>.
§.§ Outline of the paper
We provide in <Ref> the basic definitions and results
necessary for our later developments. Then we show the undecidability of
the Σ_1 fragment
(<Ref>) before focusing on the decidable fragments
(<Ref>). Finally, in <Ref>, we turn to
expressiveness questions.
§ SUBWORDS AND THEIR LOGICS
We consider finite words u,v,… over a given finite alphabet A of letters like a,b,…. Concatenation of words is written multiplicatively, with the empty word ϵ as unit. We freely use regular expressions like (ab)^*+(ba)^* to denote regular languages.
The length of a word u is written |u| while, for a letter a∈ A, |u|_a denotes the number of occurrences of a in u. The set of all words over A is written A^*.
A word v is a factor of u if there exist words u_1 and u_2 such that u = u_1 v u_2. If furthermore u_1=ϵ then v is a prefix of u, while if u_2=ϵ then v is a suffix.
§.§ Subwords.
We say that a word u is a subword (i.e., a subsequence) of v, written u ⊑ v, when u is some u_1⋯u_n and v can be written as v_0 u_1 v_1 ⋯ u_n v_n for some v_0,v_1,…,v_n∈ A^*; e.g., ϵ ⊑ v for any v.
We write u ⊏ v for the associated strict ordering, where u ⊑ v and u ≠ v. Two words u and v are incomparable (with respect to the subword relation), denoted u ⊥ v, if u ⋢ v and v ⋢ u.
Factors are a special case of subwords.
With any u∈ A^* we associate its upward closure ↑u, given by ↑u = {v∈ A^* | u ⊑ v}. For example, ↑ab=A^* a A^* b A^*. The definition of ↑u involves an implicit alphabet
A that will always be clear from the context.
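Although not needed for the formal development, the subword relation is easy to decide greedily. The following Python sketch (function names are ours) implements the left-most embedding and membership in an upward closure:

def is_subword(u: str, v: str) -> bool:
    # Greedy left-most embedding: 'c in it' advances the iterator past c.
    it = iter(v)
    return all(c in it for c in u)

def in_upward_closure(v: str, u: str) -> bool:
    # v ∈ ↑u iff u ⊑ v.
    return is_subword(u, v)

assert is_subword("ab", "bab")            # ab ⊑ bab
assert not is_subword("ba", "aab")        # ba ⋢ aab
assert in_upward_closure("xaybz", "ab")   # xaybz ∈ ↑ab = A^* a A^* b A^*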
§.§ Piecewise testable languages
Piecewise testable languages (abbreviated PT) constitute a subvariety of the
languages of dot-depth one, themselves a subvariety of the star-free languages,
which are a subvariety of the regular languages <cit.>.
Among the several characterizations of PT languages, the most
convenient for our purposes is the following one: L⊆ A^* is
PT if, and only if, it is a boolean combination of languages of the form ↑w for some w∈ A^*. Thus the PT languages are exactly the monadic predicates that can be defined by a boolean combination of constraints of the form w_i ⊑ x and/or w_j ⋢ x, or equivalently by a quantifier-free φ_L(x) formula in the extended logic. For example, the solutions of ϕ_2 (from
the introduction) form a PT language.
In the following, we often write “x∈ L”, where L is a given PT
language, as an abbreviation for ϕ_L(x), with the understanding
that this is a Σ_0 formula.
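Such quantifier-free PT constraints are cheap to evaluate. A Python sketch, reusing is_subword from above, for the ad-hoc example L = ↑ab ∩ ↑ba ∖ ↑aa (not one of the languages used later):

def in_L(w: str) -> bool:
    # Boolean combination of upward-closure constraints.
    return (is_subword("ab", w) and is_subword("ba", w)
            and not is_subword("aa", w))

print([w for w in ("ab", "aba", "bab", "abab") if in_L(w)])  # ['bab']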
§.§ Logic of subwords
Let V be the set of variables with typical elements
x,y,…,u,v,….
For a
first-order logic formula φ over a structure with domain D, we denote
by φ⊆ D^V the set of satisfying
assignments, with typical elements α,β,….
If φ has only one free variable, say x, and there is no danger of
confusion, we sometimes write φ to mean {α(x) |α∈φ}.
Moreover, free(φ) denotes the set of free variables in φ.
The pure logic is the first-order logic over the structure (A^*; ⊑). In contrast, the extended logic is the first-order logic over the structure (A^*; ⊑, (w)_{w∈A^*}), where for each word w∈ A^* the signature provides a constant symbol. In both cases, assignments are members of (A^*)^V. We will sometimes write w to denote the assignment that maps every variable to the word w∈ A^*. Moreover, (x↦w) denotes the assignment in (A^*)^{x} that maps x to w.
§.§ Bounding alternations
We define a fragment of first-order logic over the relational structure (A^*; ⊑). Let A={a_1,…,a_n}. The starting point for
introducing the fragments ij is the observation that if
every variable in a sentence φ is introduced by a restricted quantifier
of the form ∃ x∈(a_1^*⋯ a_n^*)^ℓ or ∀
x∈(a_1^*⋯ a_n^*)^ℓ for
some ℓ∈ℕ, then one can reduce the truth problem
of φ to Presburger arithmetic. Note that the language (a_1^*⋯ a_n^*)^ℓ is PT, implying that such restrictions, which we
call alternation bounds, can be imposed within the extended logic
and without any additional quantifiers. This raises
the question of how many variables without alternation bound one can allow
without losing decidability.
In essence ij contains all formulas in the Σ_i
fragment with j
variables without alternation bound. A formalization of this for sentences
could just be a syntactic restriction: Every quantifier for all but at most j
variables must be relative to some (_1^*⋯_n^*)^ℓ. However, this
would not restrict free variables, which we need in order to build complex ij
formulas from predicates defined in ij.
Formally, a formula with alternation
bounds consists of a formula φ of the extended logic and a function
ℓ: V→ℕ∪{∞}, which specifies the alternation bounds. This
means, the semantics (φ,ℓ) of (φ,ℓ) is
defined as φ̃, where φ̃ is defined
as follows. First, we replace every quantifier 𝒬x
(𝒬∈{∃,∀}) in φ by the relativized 𝒬x∈ (a_1^*⋯ a_n^*)^ℓ(x). Then we add the conjunction
⋀_{x∈free(φ), ℓ(x)<∞} x∈ (a_1^*⋯ a_n^*)^ℓ(x) for the free variables.
The fragment ij consists of those formulas with alternation bounds
(φ,ℓ) where φ belongs to the Σ_i fragment of
A and has at most j variables x∈ V with ℓ(x)=∞.
We will always represent a formula in ij by its Σ_i formula
and the function ℓ will be clear from the context.
Variables x∈ V with ℓ(x)<∞ will be called alternation bounded,
the others alternation unbounded.
In order to permit a polynomial translation into an equivalent formula in
ordinary A, the alternation bounds are always encoded in unary.
The fragment ij is defined similarly, with Π_i instead of Σ_i.
Sometimes we define predicates that are satisfied for words with unbounded
alternations (such as “u∈{a,b}^*” when A={a,b,c}), but want to use
the corresponding formula in a context where the variables are alternation
bounded (“u∈{a,b}^* ∧ ⋯”). In that situation, we
want to record the number of alternation unbounded variables we need for the
definition, disregarding the free variables. Hence, ij
denotes those formulas with alternation bound in Σ_i, where there are at
most j quantified variables without alternation bound. The semantics
is defined as for ij. The fragment ij is defined with
Π_i instead of Σ_i.
§ UNDECIDABILITY
§.§ The 13 fragment
We begin with our main result, the undecidability of the Σ_1 theory of
A for |A|≥ 2. In fact, we will even prove undecidability for
the 13 fragment. We need a few ingredients. A word w∈ A^+ is
called primitive if there is no v∈ A^+, |v|<|w|, with w∈ v^*.
The following is a well-known basic fact from word combinatorics (see e.g.
<cit.>)
If w∈ A^+ is primitive, then wx=xw is equivalent to x∈ w^*.
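For experimentation, primitivity and the lemma can be checked mechanically; a Python sketch using the standard doubling trick:

def is_primitive(w: str) -> bool:
    # w is primitive iff w occurs in w+w only at positions 0 and |w|.
    return len(w) > 0 and (w + w).find(w, 1) == len(w)

def is_power_of(x: str, w: str) -> bool:
    q, r = divmod(len(x), len(w))
    return r == 0 and x == w * q

assert is_primitive("ba") and not is_primitive("abab")
w = "ba"  # primitive, so wx = xw should hold exactly for x in w*
for x in ("", "ba", "baba", "ab", "b", "bab"):
    assert (w + x == x + w) == is_power_of(x, w)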
We also use the following version of the fact that Diophantine sets
are precisely the recursively enumerable sets <cit.>.
Let S⊆ be a recursively enumerable set. Then there is a finite set
of variables {x_0,…,x_m} and a finite set E of equations, each of
the form
x_i=x_j+x_k, x_i=x_j· x_k, or x_i=1,
with i,j,k∈ [0,m], such that
S={y_0 ∈|∃ y_1,…,y_m∈(y_0,…,y_m) satisfies E}.
We are now ready to prove our main result.
Let |A|≥ 2 and a∈ A. For each recursively enumerable set
S⊆, there is a 13 formula φ over the structure
A with one free variable such that φ={a^k
| k∈ S}.
We show how to express some basic properties of words and combine these to
build more complex predicates, all the time keeping track of what fragments are
involved. Here, we always use u,v,w as the free variables of the formula we
currently construct.
Recall that for every PT language L⊆ A^*, we can express “u∈
L” in 00: we will use this silently, mainly for
languages of the form r a^* s where a is a letter and r,s are
two words.[That any language of the form ra^*s is PT is
easy to prove, e.g., using the characterization of <cit.>.]
Note also that, since “u∈(a+b)^*” can be expressed in 00 for a,b∈ A,
it suffices to prove the theorem in the case |A|=2.
* We can express “|u|_a<|v|_a” in 10:
∃ x∈ a^*: x ⊑ v ∧ x ⋢ u.
* We can express “∃ n u=a^n ∧ v=a^n-1b” in 10.
Clearly, it suffices to show that we can express
“∃ n≥ 2 u=a^n ∧ v=a^n-1b”.
Consider the formula:
u∈ a a a^* ∧ v∈ a^*b∧∃ x∈ a^*b a a
|v|_a<|u|_a ∧ v ⋢ x ∧ u ⊑ x.
Suppose the formula is satisfied with u=a^n, x=a^ℓ b a a and v=a^m b.
Then |v|_a<|u|_a implies m<n. By vx, we have ℓ<m and
thus ℓ<m<n, hence ℓ+2≤ n. On the other hand, u x implies
n≤ℓ+2 and thus n=ℓ+2 and m=n-1.
Conversely, if u=a^n and v=a^n-1b for some n≥ 2, then the formula is satisfied with
x=a^n-2b a a.
* We can express “u,v∈ (a+b)^*b ∧ |u|_a=|v|_a” in 10:
u,v∈ (a+b)^*
∧ ∃ x ∈ a^*∃ y ∈ a^*b [ ∃ n x=a^n ∧ y=a^n-1b ]
∧
y ⊑ u ∧ y ⊑ v ∧ x ⋢ u ∧ x ⋢ v.
Suppose the formula is satisfied. Then a^{n-1}b ⊑ u and
a^n ⋢ u imply |u|_a=n-1. Moreover, if u ended in a, then
a^{n-1}b ⊑ u would entail a^n ⊑ u, which is not the case. Since
|u|≥ 1, we therefore have u∈{a,b}^*b. By symmetry, we have
|v|_a=n-1 and v∈{a,b}^*b. Hence, |u|_a=n-1=|v|_a.
If u,v∈{a,b}^*b with |u|_a=|v|_a, then the formula is satisfied with n=|u|_a+1.
* We can express “∃ n u=aaba^nb ∧ v=aba^n+1b∧ w=ba^n+2b” in 10:
u∈ aaba^*b ∧ v∈ aba^*b∧ w∈ ba^*b
∧ [u,v,w∈{a,b}^*b ∧ |u|_a=|v|_a=|w|_a].
* We can express “∃ n u=ba^nb∧ v=ba^n+1b” in 10. It suffices to show that we can
express “∃ n≥ 1 u=ba^nb∧ v=ba^n+1b”. Consider the formula:
∃ x ∈ aaba^*b, y ∈ aba^*b, z∈ ba^*b
[ ∃ m x=aaba^mb ∧ y=aba^m+1b ∧ z=ba^m+2b ]
∧ u,v∈ ba^*b ∧ u ⊑ y ∧ u ⋢ x ∧ v ⊑ z ∧ v ⋢ y.
Suppose the formula is satisfied for u=b a^kb and v=b a^ℓ b. Then
u ⊑ y and u ⋢ x imply k≤ m+1 and k>m, hence k=m+1.
Moreover, v ⊑ z and v ⋢ y imply ℓ≤ m+2 and
ℓ>m+1, hence ℓ=m+2. Hence, with n=m+1 we have u=b a^nb and
v=b a^n+1b and n≥ 1.
Conversely, if u=b a^nb and v=b a^n+1b for some n≥ 1, then the formula is satisfied with m=n-1.
* We can express “∃ n u=a^n ∧ v=a^n+1” in 10. For this, it suffices to
express “∃ n≥ 1 u=a^n ∧ v=a^n+1”.
As in <ref>, one verifies correctness of the following:
∃ x,y,z[ ∃ m x=b a^mb ∧ y=b a^m+1b ∧ z=b a^m+2b ]
∧ u,v∈ a^* ∧ u ⊑ y ∧ u ⋢ x ∧ v ⊑ z ∧ v ⋢ y.
* We can express “v=a^|u|_a” in 10:
∃ x ∈ a^*[ ∃ n v=a^n ∧ x=a^n+1]
∧ v ⊑ u ∧ x ⋢ u.
* We can express “|u|_a=|v|_a” in 10:
∃ x x=a^|u|_a∧ x=a^|v|_a.
* For a b, we can express “u∈ a^* ∧ v=bu” in 10:
u∈ a^* ∧ v∈ b a^* ∧ |v|_a=|u|_a.
* For a b, we can express “u∈ a^* ∧ v=ub” in 10:
u∈ a^* ∧ v∈ a^*b ∧ |v|_a=|u|_a.
* We can express “|w|_a=|u|_a+|v|_a” for any a∈ A in 10.
Let b∈ A∖{a}:
∃ x,y∈ a^*∃ z∈ a^*b a^* x=a^|u|_a∧ y=a^|v|_a
∧ xb ⊑ z ∧ (xa)b ⋢ z ∧ by ⊑ z ∧ b(ya) ⋢ z
∧ |w|_a=|z|_a
Note that we can define xb, (xa)b and b(ya) thanks to <ref>.
The constraints in <ref> enforce that z=xby and hence |z|_a=|x|_a+|y|_a=|u|_a+|v|_a.
* For k,n_0,…,n_k∈, a b,
let r_a(a^n_0b a^n_1⋯
b a^n_k)=n_k, which defines a function r_a{a,b}^*→.
We can express “v=a^r_a(u)” in 10:
v∈ a^* ∧ ∃ x∈ b^*a^* ∃ y∈ b^*a^*
|x|_b=|y|_b=|u|_b ∧ |y|_a=|x|_a+1
∧ x ⊑ u ∧ y ⋢ u ∧ |v|_a=|x|_a.
Note that |x|_b=|y|_b=|u|_b can be expressed according to
<ref> and |y|_a=|x|_a+1 can be expressed thanks to
<ref>.
Write u=a^n_0b a^n_1⋯ b a^n_k.
Suppose the formula is satisfied. Then |x|_b=|y|_b=|u|_b and |y|_a=|x|_a+1
imply that x=b^k a^m and y=b^k a^{m+1} for some m∈ℕ. Moreover,
x ⊑ u implies m≤ n_k and y ⋢ u implies m+1>n_k, thus
m=n_k. Thus, |v|_a=|x|_a entails |v|_a=n_k.
Conversely, if v=a^n_k, then the formula is satisfied with x=b^ka^n_k and y=b^ka^n_k+1.
* For a∈ A, we can express “v∈ a^* ∧ w=uv” in 10. Let b a and consider the formula
v∈ a^*
∧ ∃ x∈ a^* ∃ y ∈ a^*: x=a^{r_a(u)} ∧ y=a^{r_a(w)}
∧ |w|_b=|u|_b ∧ u ⊑ w
∧ |y|_a=|x|_a+|v|_a ∧ |w|_a=|u|_a+|v|_a
To show correctness, suppose the formula is satisfied with
u=a^n_0b a^n_1⋯ b a^n_k and w=a^m_0b a^m_1⋯ b a^m_ℓ.
The conditions in <ref>
imply that k=ℓ and w=a^m_0b a^m_1⋯ b a^m_k and n_i≤ m_i for i∈[0,k].
The conditions in <ref> then entail m_k=n_k+|v|_a and
∑_i=0^k m_i=∑_i=0^k n_i+|v|_a, which together is only possible if
m_i=n_i for i∈[0,k-1]. This means we have w=uv.
The converse is clear.
* We can express “u is prefix of v” in 13:
⋀_{a∈ A}∃ x∃ y∈ a^*: x=uy ∧ x ⊑ v ∧ |x|_a=|v|_a.
Suppose the formula is satisfied. Then uy ⊑ v for some y∈ A^*,
which implies u ⊑ v. Let p be the shortest prefix of v with
u ⊑ p. Observe that whenever uw ⊑ v, we also have pw ⊑
v, because the leftmost embedding of uw in v has to match up u with p.
Now towards a contradiction,
assume |p|>|u|. Then there is some a∈ A with |p|_a>|u|_a. The formula
tells us that for some m∈ℕ, we have ua^m ⊑ v and |u|_a+m=|v|_a.
Our observation yields pa^m ⊑ v, and hence
|v|_a ≥ |p|_a + m > |u|_a+ m = |v|_a, a contradiction. The converse is clear.
* We can express “w=uv” in 13: Since expressibility is
preserved by mirroring, we can express prefix and suffix by
<ref>. Let ⊑_p and ⊑_s denote the prefix and suffix
relation, respectively. We can use the formula
u ⊑_p w ∧ v ⊑_s w ∧ ⋀_{a∈ A} |w|_a=|u|_a+|v|_a.
* For a,b∈ A, a b, we can express “u∈ (a b)^*” in
13: By <ref>, we can use the formula
∃ v v=uab ∧ v=abu,
which, according to <ref>, is equivalent to u∈ (a b)^*.
* For a,b∈ A, a b, we can express “|u|_a=|v|_b” in 13 by using
∃ x∈ (a b)^* |u|_a=|x|_a ∧ |v|_b=|x|_b.
* We can express “∃ m,n u=a^n ∧ v=a^m ∧ w=a^m· n” in 13:
u,v,w∈ a^*
∧ ∃ x [∃ y,z y=bu ∧ z=yx ∧ z=xy]
∧ |x|_b = |v|_a ∧ |w|_a=|x|_a.
The conditions in brackets require (bu)x=x(bu). Since bu∈ ba^*
is primitive, this is equivalent to x∈ (bu)^* (cf <ref>).
* We use the fact that every recursively enumerable set of natural numbers
is Diophantine. Applying <ref> to S yields a finite set E of
equations over the variables {x_0,…,x_m}. The formula φ is of
the form
φ≡∃ x_1, x_2, …, x_m ∈ a^*ψ,
where ψ is a conjunction of the following 13 formulas. For each equation x_i=1, we add x_i=a.
For each equation x_i=x_j+x_k, we add a formula expressing |x_i|_a=|x_j|_a + |x_k|_a.
For each equation x_i=x_j· x_k, we add a formula expressing x_i=a^{|x_j|_a·|x_k|_a}.
Then we clearly have φ={a^k | k∈ S}.
As an immediate consequence, one sees that the truth problem is also
undecidable for the Σ_1 fragment of the logic of subwords
without constants but enriched with predicates like “|u|_a=2” for
counting letter occurrences.
It can even be shown that there is a fixed word W∈{a,b}^* such
that the truth problem of 13 over [W]{a,b} is
undecidable.
In order to show undecidability with a single constant, we will need
the fact that each word of length at least 3 is determined by its
length and its strict subwords. For two words u,v∈ A^*, we write
u∼_n v if ↓u∩ A^{≤ n}=↓v∩ A^{≤ n}, where ↓u denotes the set of subwords of u.
Let n≥ 2 and |u|=|v|=n+1. Then u=v if and only if u∼_n v.
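This lemma is easy to verify exhaustively for small parameters; a Python sketch over A={a,b} with n=2 (a testing aid only):

from itertools import combinations, product

def subwords_upto(w: str, n: int) -> frozenset:
    # All subwords of w of length at most n.
    return frozenset("".join(w[i] for i in c)
                     for k in range(n + 1)
                     for c in combinations(range(len(w)), k))

n = 2
words = ["".join(p) for p in product("ab", repeat=n + 1)]
for u in words:
    for v in words:
        assert (u == v) == (subwords_upto(u, n) == subwords_upto(v, n))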
There is a word W∈{a,b}^* such that for every recursively
enumerable set S⊆, there is a 13-formula
τ over the structure [W]{a,b} such that
τ={a^k | k∈ S}.
In particular, the
truth problem for the 13 fragment over
[W]{a,b} is undecidable.
In the proof of <ref>, we have constructed 13 formulas
over A expressing successor, addition, and
multiplication, more precisely: expressing “∃ n≥ 0
u=a^n∧ v=a^n+1” and “∃ m,n≥ 0 u=a^m∧
v=a^n∧ w=a^m+n” and “∃ m,n≥ 0 u=a^n∧
v=a^m∧ w=a^m· n”. Let W_1,…,W_r∈{a,b}^* be
the constants occurring in these three 13 formulas, plus
ε. Let m be the maximal length of any of these words, and
let W=a^m+1b^m+2.
Let S⊆ be recursively enumerable. Then, according to
<ref> and by the choice of W_1,…,W_r, there is a
13 formula φ that only uses constants from
W_1,…,W_r and with φ={a^k | k∈ S}.
We shall prove that using W, we can define all the words
W_1,…,W_r. Consider the formula
∃ x_0,y_0,…,x_2m+3,y_2m+3: x_0 ⊏ ⋯ ⊏ x_2m+3 ⊑ W ∧ y_0 ⊏ ⋯ ⊏ y_2m+3 ⊑ W ∧
∃ x'_0,…,x'_m+1: x'_0 ⊏ ⋯ ⊏ x'_m+1 ⊑ W ∧
∃ y'_0,…,y'_m+2: y'_0 ⊏ ⋯ ⊏ y'_m+2 ⊑ W ∧
x_1 ≠ y_1 ∧ x_1 ⋢ y'_m+2 ∧ y_1 ⋢ x'_m+1 ∧
∃ z_01: x'_1 ⊑ z_01 ∧ x'_2 ⋢ z_01 ∧ y'_1 ⊑ z_01 ∧ y'_2 ⋢ z_01 ∧
∃ z_10: x'_1 ⊑ z_10 ∧ x'_2 ⋢ z_10 ∧ y'_1 ⊑ z_10 ∧ y'_2 ⋢ z_10 ∧
z_01 ⊑ W ∧ z_01 ≠ z_10
If it is satisfied, then |x_i|=|y_i|=i for i∈[0,2m+3] and
since x_1 ≠ y_1, we have {x_1,y_1}={a,b}. Since
x_1 ⋢ y'_m+2 we get y'_i∈ y_1^* and thus
y'_i=y_1^i for i∈[0,m+2], which is only possible with
y_1=b. This implies x_1=a. In particular, we get
{z_01,z_10}={ab, ba}. Since z_01 ⊑ W, we
have z_01=ab and z_10=ba. On the other hand, if |x_i|=|y_i|=i for i∈[0,2m+3],
x_1=a, y_1=b,
x'_i=a^i, y'_j=b^j for i∈[0,m+1], j∈[0,m+2], z_01=ab, and z_10=ba,
then the formula is clearly satisfied.
Hence, we can already define all words of length at most 2 and
all words a^i and b^i for i∈[0,m+1]. This lets us define
other predicates.
* For each 0≤ℓ≤ m, we can express
“|u|_a=ℓ” using the formula
a^ℓ ⊑ u ∧ a^{ℓ+1} ⋢ u
Note that since ℓ+1≤ m+1, we can already define
a^ℓ+1. The same way, we can express
“|u|_b=ℓ”.
* For each 0≤ℓ≤ m, we can express “|u|=ℓ”
using the formula
⋁_i+j=ℓ |u|_a=i
∧ |u|_b=j.
* For each word w∈ A^≤ m, |w|>2, we can define
w. We proceed by induction. For |w|≤ 2, we
can already define w. Thus, suppose we can define
every v∈ A^≤ n and let w∈
A^n+1 with n+1≤ m. Consider the formula
|u|=n+1 ∧ ⋀_{v∈ A^{≤ n}, v ⊑ w} v ⊑ u ∧ ⋀_{v∈ A^{≤ n}, v ⋢ w} v ⋢ u.
Clearly, if u=w, then the formula is
satisfied. On the other hand, suppose the formula is
satisfied. It expresses that ↓u∩ A^{≤ n}=↓w∩ A^{≤ n}.
According to <ref>, this
implies u=w.
Note that all the variables we introduced to define the words in
A^≤ m carry words of length at most 2m+3, meaning that
we may assume that they are alternation bounded. Thus, we can
define all words in A^≤ m using formulas in 10.
Therefore, we can turn φ into a 13 formula
τ that contains W as its only constant and satisfies
τ=φ.
It remains to show the second statement of the
corollary. Let S⊆ℕ be
recursively enumerable but undecidable and let k∈ℕ be
given. We choose the formula φ as above, but we
modify it as follows. Let φ_0≡φ, and for
i∈[1,k], let φ_i express
∃ yφ_i-1(y)∧∃ n x=a^n∧ y=a^n+1.
Finally, let φ_k+1 be the formula ∃ xφ_k(x)∧ x=ε. Note that by the choice of
W_1,…,W_r, we may assume that φ_k+1 contains
only the constants W_1,…,W_r. Note that φ_k+1
has no free variables and is true if and only if k∈ S. Now
τ_k+1 is obtained from φ_k+1 just as τ is
obtained from φ. It follows as above that τ_k+1
is true if and only if k∈ S. This proves the second
statement of the corollary.
Here W must be complex enough: For instance, the Σ_1 fragment of
[ϵ]{a,b} and of [a]{a,b}, respectively, is decidable.
The Σ_1-fragment of [ϵ,a]{a,b} is decidable.
We may assume that the input formula is of the form
φ≡∃ x_1,…,x_nψ, where ψ is a conjunction
of literals of the following forms:
c ⊑ x, c ⋢ x, x ⊑ c, x ⋢ c,
x ⊑ y, x ⋢ y,
where c∈{ε, a} and x∈ X={x_1,…,x_n}. For each literal x ⊑ c, we can
guess whether x=ϵ or x=a and hence assume that these do not occur. Literals of the form ϵ ⊑ x are always satisfied, whereas ϵ ⋢ x is never satisfied. Hence, without loss of generality, these do not occur either and we may assume that all literals are of the form
a ⊑ x, a ⋢ x, x ⋢ ϵ, x ⋢ a,
x ⊑ y, x ⋢ y.
Moreover x ⋢ ϵ is equivalent to x ≠ ϵ and the literal a ⋢ x is equivalent to x∈ b^*. We can therefore assume that all literals are of the form
a ⊑ x, x∈ b^*, x ≠ ϵ, x ⋢ a,
x ⊑ y, x ⋢ y.
Let L⊆ X be the set of those variables for
which we have a x∈ b^* literal. Clearly, x∈ b^* and x ≠ ϵ together
mean x∈ b^+. In the same way, x∈ b^* and x ⋢ a together mean x∈ b^+.
Furthermore, a ⊑ x and x∈ b^* are mutually exclusive. Hence, we can rewrite our constraint system as follows:
* For each x∈ L, we have either a constraint x∈ b^* or x∈ b^+.
* For each x∈ X∖ L, we have a set of constraints of the form a ⊑ x, x ≠ ϵ, or x ⋢ a.
* We have constraints of the form x ⊑ y and x ⋢ y.
As a final reformulation step, note that every u∈{a,b}^*
satisfies either u∈ b^* or a ⊑ u. Therefore, we may
assume that for every x∈ X, we have either a constraint x∈ b^*
or x∈ b^+ (and hence x∈ L) or a ⊑ x. Notice that if we
already have a ⊑ x, then x ≠ ϵ is redundant. Thus, we
have the following constraints:
* For each x∈ L, we have either x∈ b^* or x∈ b^+.
* For each x∈ X∖ L, we have a ⊑ x and possibly x ⋢ a.
* A set of constraints of the form x ⊑ y or x ⋢ y.
We say that a partial order (X,≤) is compatible if
* L is downward closed and linearly ordered,
* for each constraint x ⊑ y (x ⋢ y), we have x≤ y (x≰y).
We claim that φ is satisfied in
[ϵ,a]{a,b} if and only if there is a compatible
partial order on X. Since the latter is clearly decidable, this
implies the proposition.
Of course, if φ is satisfied, then the subword ordering
induces a compatible partial order on X. So let us prove the
converse and suppose (X,≤) is a compatible partial order and let
P=X∖ L. Then, (P,≤) is a partial order and we can find
some m≥ 0 such that (P,≤) embeds into the lattice {0,1}^m
of m-tuples over {0,1} with componentwise comparison. Consider
such an embedding with m≥ 2. This embedding allows us to assign
to each x∈ P a word u_x∈ aβ_1⋯ aβ_m, where
β_1,…,β_m∈{ϵ,b} such that x≤ y if and
only if u_x ⊑ u_y.
Now write L={ℓ_1,…,ℓ_k} with ℓ_1≤⋯≤ℓ_k. We now define a function
f X→{0,…,k}. Note
that for each x∈ P, the set ↓x∩ L is a downward
closed subset of L and hence of the form {ℓ_1,…,ℓ_i} for
some i≥ 0. In this case, set f(x)=i. Moreover, let
P_i = {x∈ P | ↓x∩ L={ℓ_1,…,ℓ_i}}.
This
allows us to construct an assignment of words v_x to variables
x. Let us explain the intuition. In order to ensure that
v_ℓ_1 ⋢ v_x for all x∈ P_0, we let v_x=u_x and
notice that then, the words v_x all contain at most m-many
b's. Hence, we set v_ℓ_1=b^{m+1}. Now, we have to make sure that
the words for v_x with x∈ P_1 all contain v_ℓ_1 as a
subword, so we pad the words u_x with b's on the left: We set
v_x=b^m+1u_x. Now, in turn, we need to make sure that v_ℓ_2
contains more than (m+1)+m-many b's, leading to
v_ℓ_2=b^2(m+1), and so on. Thus, we set:
v_ℓ_i = b^i(m+1) v_x = b^f(x)· (m+1) u_x
for i∈{1,…,k} and x∈ P. Let us show that this
assignment satisfies our constraint system.
* Consider a constraint x ⊑ y. We have x≤ y.
* If x,y∈ P, then ↓x∩ L⊆↓y∩ L
and thus f(x)≤ f(y). Since also u_x ⊑ u_y, we have
v_x ⊑ v_y.
* If y∈ L, then also x∈ L (since L is downward closed) and thus clearly v_x ⊑ v_y.
* If y∈ P and x∈ L, suppose x=ℓ_i and f(y)=j. By definition of f, we have i≤ j
and hence v_x=v_ℓ_i=b^{i(m+1)} ⊑ b^{j(m+1)}u_y=v_y.
* Consider a constraint x ⋢ y. Then x≰y.
* If x,y∈ P, then u_x ⋢ u_y by choice of u_x and u_y. In particular, we have
v_x=b^{f(x)· (m+1)} u_x ⋢ b^{f(y)· (m+1)} u_y=v_y.
* If x∈ P and y∈ L, then v_y∈ b^* and a ⊑ v_x. Thus v_x ⋢ v_y.
* If x∈ L and y∈ P, say x=ℓ_i. Then x≰y means that ℓ_i∉↓y∩ L and hence f(y)<i.
Note that v_y=b^{f(y)(m+1)} u_y and that |u_y|_b≤ m. Therefore |v_y|_b≤ f(y)(m+1)+m<i(m+1) and hence
v_x=v_ℓ_i=b^{i(m+1)} ⋢ v_y.
* If x,y∈ L, then x=ℓ_i and y=ℓ_j with j<i. Then clearly v_x ⋢ v_y.
* Constraints x∈ b^* or x∈ b^+ with x∈ L are of course satisfied.
* Constraints x ⋢ a with x∈ P are satisfied because |u_x|_a≥ m≥ 2 for every x∈ P.
This establishes our claim and thus the proposition.
This raises an interesting question: For which
sets {W_1,W_2,…}⊆ A^* of constants is the truth
problem for Σ_1 sentences over [W_1,W_2,…]A
decidable?
§.§ The 21 fragment
Our next result is that if we allow one more quantifier alternation, then already
one variable without alternation bound is sufficient to prove undecidability.
Let |A|≥ 2 and a∈ A. For each recursively enumerable set
S⊆, there is a 21 formula φ over the structure
A with one free variable such that φ={a^k
| k∈ S}. In particular, the truth problem for 21 is undecidable.
* We can express “|u|_a≤ |v|_a” in 10:
∀ x∈ a^*: x ⋢ u ∨ x ⊑ v.
Hence, we can express “|u|_a=|v|_a” in 10.
* We can express “|u|_a>|v|_a” in 10: This follows from the fact that
|u|_a≤|v|_a is expressible in 10.
* We can express “|u|_a ≠ |v|_a” in 10 according to the previous item.
* We can express “u∈ a^* ∧ v∈ (bu)^*” in 11. It clearly
suffices to express “u∈ a^* ∧ v∈ (bu)^+” in 11. Consider the formula
v∈ b{a,b}^*
∧ ∀ x∈ b^+a^*b^*[ |x|_b ≠ |v|_b
∨ (|x|_a>|u|_a ∧ x ⋢ v)
∨ (|x|_a≤ |u|_a ∧ x ⊑ v) ].
Note that “v∈ b{a,b}^*” is expressible in 10 because
v∈ a{a,b}^* is expressible in 10 (see <ref> in
the proof of <ref>).
Moreover, notice that since b^*a^*b^*={a,b}^*∖a b a, the
language b^+a^*b^* = (b^*a^*b^*)∩ (b a∪ (b∖a))
is piecewise testable and thus definable in 00.
* We can express “|u|_a=|v|_b” in 21:
∃ x∈ (a b)^* |u|_a=|x|_a ∧ |v|_b=|x|_b.
* We can express “∃ m,n u=a^m ∧ v=a^n ∧ w=a^m· n” in 21:
u∈ a^*∧ v∈ a^*∧ w∈ a^*
∧∃ y∈ b^*∃ x∈ (bu)^*
|x|_b=|y|_b ∧ |y|_b=|v|_a∧ |x|_a=|w|_a.
Note that we employ the variable y because directly expressing |x|_b=|v|_a
(using the previous item) would require an additional alternation unbounded
variable besides x, but we can only use one.
* Recall that “|w|_a=|u|_a+|v|_a” is expressible in 10 (see
<ref> of <ref>) and hence “∃ m,n
u=a^m ∧ v=a^n∧ w=a^m+n” in 10. Thus, we can implement
Diophantine equations as in the proof of <ref>.
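The arithmetic core of the multiplication gadget can be tested on concrete words; a small Python sketch (values are arbitrary):

m, n = 3, 4
u = "a" * m
x = ("b" + u) * n          # x ∈ (bu)^*, so x commutes with bu
assert ("b" + u) + x == x + ("b" + u)
assert x.count("b") == n and x.count("a") == m * n   # |x|_b = n, |x|_a = m·n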
§.§ The Σ_2 fragment over two letters in the pure logic
The final result in this section settles the question of how many letters are
needed to make the Σ_2 fragment of A undecidable. We show
here that two letters suffice. Observe that if |A|=1, A can be
interpreted in (, <) and is thus decidable.
Let μ A^*→ A^* be a map. It is called a morphism if
μ(uv)=μ(u)μ(v) for all u,v∈ A^*. It is an
anti-morphism if μ(uv)=μ(v)μ(u) for all u,v∈ A^*.
Finally, it is an automorphism of (A^*, ⊑) if for any
u,v∈ A^*, we have u ⊑ v if and only if
μ(u) ⊑ μ(v).
Note that if we have no constants, we cannot define a language in
a^*, because all definable subsets are closed under automorphisms of
(A^*, ⊑). It will be useful for the next proof to
have a classification of all automorphisms of (A^*, ⊑). The
following is shown implicitly by Kudinov et al. <cit.>,
but we include a short proof for completeness.
A map μ is an automorphism of (A^*, ⊑) if and only if
* μ is either a morphism or an
anti-morphism and
* μ permutes A.
Clearly, maps as described in the lemma are
automorphisms. Assume μ is an automorphism. Since μ has to
preserve the minimal element, we have
μ(ε)=ε. It also has to preserve the set of
minimal elements of A^*∖{ε}, hence the set
A. Repeating this argument yields that μ has to preserve
length. Therefore, according to <ref>, if μ is
identical on A^≤ n for n≥ 2, then it is the identity on
A^≤ n+1. By induction, this implies that if an automorphism is
identical on A^≤ 2, then it is the identity on A^*. Hence,
if two automorphisms agree on A^≤ 2, then they are the
same. It therefore suffices to show that every automorphism μ
agrees on A^≤ 2 with a map as described in the
lemma.
Since μ preserves the set A, the map π=μ|_A is a
permutation of A. Moreover, for any a,b∈ A, we have
μ(ab)=μ(a)μ(b) or μ(ab)=μ(b)μ(a).
If μ(ab)=μ(a)μ(b), then we cannot have
μ(bc)=μ(c)μ(b), because the two words ab and bc have
only one common upper bound of length three (namely abc), whereas
the words μ(a)μ(b) and μ(c)μ(b) have two, namely
μ(c)μ(a)μ(b) and μ(a)μ(c)μ(b). Therefore, if
μ(ab)=μ(a)μ(b), then μ(bc)=μ(b)μ(c). In particular,
if μ(ab)=μ(a)μ(b), then we have μ(cd)=μ(c)μ(d) for
all c,d∈ A. Hence on A^≤ 2, μ agrees with a map as
decribed.
Let |A|≥ 2. For each recursively enumerable set S⊆, there is a
Σ_2 formula τ over the structure A that defines the language
τ={a^k| a∈ A, k∈ S}.
In particular, the truth
problem for Σ_2 is undecidable.
Fix a letter a∈ A. Let S⊆ be recursively enumerable, let
φ be the 13 formula provided by <ref>
with one free variable x and with φ={a^k | k∈
S}, and let w_1,…,w_m∈ A^* be the constants used in the formula
φ.
It was shown in <cit.> that from w_1,…,w_m, one can construct
a Σ_2 formula ψ over A with free variables
V={x_1,…,x_m} such that for α∈ (A^*)^V, we have
α∈ψ if and only if there is an automorphism μ:
A^*→ A^* such that α(x_i)=μ(w_i) for every i∈[1,m].
Let φ' be the formula obtained from φ by replacing
every occurrence of w_i with x_i. Moreover, let τ≡∃
z_1,…,z_rψ∧φ'.
Then, τ is clearly a Σ_2 formula and has exactly one free variable, say x.
We claim that
τ={b^k | b∈ A, k∈ S}.
If k∈ S, then a^k∈φ and hence clearly
b^k∈τ for each b∈ A. Moreover, if
w∈τ, then for some α∈ψ,
we have α_w ⊨ φ', where α_w denotes the
assignment with α_w|_V=α and α_w(x)=w. This means
there is an automorphism μ of (A^*, ⊑) such
that α(x_i)=μ(w_i) for i∈[1,m]. Therefore, there is
some w'∈ A^* that satisfies φ such that
w=μ(w'). In particular, w'=a^k for some k∈ S and hence
w=b^k for some b∈ A. This proves <ref>.
We can now proceed as
in <ref> to show undecidability of the truth
problem.
§ COMPLEXITY
In this section, we study the complexity of the truth problem for the
ij fragments of A.
§.§ Complexity of i0
We begin with the case j=0. In the
following, Σ^exp_n denotes the n-th level of the weak EXP
hierarchy <cit.>.
If |A|≥ 2, then the truth problem for i0 is
NP-complete for i=1 and Σ^exp_{i-1}-complete for i>1.
We provide a polynomial inter-reduction with the Σ_i
fragment of
(ℕ,0,1,+,<), a.k.a.
Presburger Arithmetic (PA), for which Haase <cit.> has
recently proven Σ^exp_{i-1}-completeness for i>1. The Σ_1 fragment of PA is NP-complete <cit.>.
The reduction from PA to
10 fixes a letter a∈ A and encodes every number k∈ℕ by
a^k. Addition can then be expressed in 10 (<ref> of
<ref>). Note that although this ostensibly works with one
letter, we need another letter in A to express addition. This is crucial: If
|A|=1, then the structure is just (ℕ,<), which has a PSPACE-complete
truth problem <cit.>. Moreover, an
inspection of the proof of <ref> shows that an alternation
bound of ℓ=2 suffices to define addition, which is tight: if we only use a
bound of 1, we can also easily reduce to (,<).
The reduction from i0 to Presburger arithmetic encodes a word w
known to belong to (a_1^*⋯ a_n^*)^ℓ, i.e., of the form
∏_i=1^ℓ∏_j=1^n a_j^x_i,j,
by the vector
(x_{1,1},…)∈ℕ^{ℓ· n} of exponents.
With this encoding, it suffices to show how to
express literals w ⊑ w' (and also w ⋢ w') by polynomial-size existential
Presburger formulas for w,w'∈ (a_1^*⋯ a_n^*)^ℓ. For a vector
x=(x_{1,1},…,x_{ℓ,n}) from ℕ^{ℓ· n}, let
w_x = ∏_i=1^ℓ∏_j=1^n a_j^x_i,j.
There are existential Presburger formulas φ and ψ of size polynomial in n and ℓ such that
φ(x_{1,1},…,x_{ℓ,n},y_{1,1},…,y_{ℓ,n}) holds iff w_x ⊑ w_y,
ψ(x_{1,1},…,x_{ℓ,n},y_{1,1},…,y_{ℓ,n}) holds iff w_x ⋢ w_y.
Let us briefly describe these formulas. Let I=[1,ℓ]×[1,n] and order
the pairs (i,j)∈ I
lexicographically: (i',j')<(i,j) if i'<i
or i=i' and j'<j. This captures the order of
the a_j^x_i,j factors in w_x. We now define
formulas τ_i,j and η_i,j where
the t_i,j,k's and e_i,j,k's are extra free variables:
τ_i,j ⋀_1 ≤ k ≤ℓ t_i,j,k =
0 if e_i',j',k'>0 for some
(i',j')<(i,j) and k'>k
y_k,j - ∑_i' = 1^i-1 e_i',j,k otherwise
η_i,j ⋀_1 ≤ k ≤ℓ e_i,j,k = min{ t_i,j,k , x_i,j-∑_r=1^k-1 e_i,j,r}
These expressions define the leftmost embedding of w_x into
w_y: the variable t_i,j,k describes how many letters from a_j^y_k,j
are available for embedding the a_j^x_i,j factor of w_x into w_y. The variable e_i,j,k counts how
many of these available letters are actually used for the a_j^x_i,j factor
in
the left-most embedding of w_x into w_y. Since i and j,k are bounded by n
and ℓ, we have polynomially many formulas of polynomial size.
Define ξ=⋀_(i,j)∈ Iτ_i,j∧η_i,j and the formulas
φ,ψ as:
φ[ ∃ t_1,1,1⋯∃ t_ℓ,n,ℓ; ∃ e_1,1,1⋯∃ e_ℓ,n,ℓ ]ξ∧⋀_(i,j)∈ I(x_i,j≤∑_k=1^ℓ e_i,j,k)
ψ[ ∃ t_1,1,1⋯∃ t_ℓ,n,ℓ; ∃ e_1,1,1⋯∃ e_ℓ,n,ℓ ]ξ∧⋁_(i,j)∈ I(x_i,j>∑_k=1^ℓ e_i,j,k)
Since formulas τ_i,j and η_i,j are inductive
equations that uniquely define the values of t_i,j,k and e_i,j,k
as functions of the x and y vectors, ψ is equivalent to the negation
of φ. Moreover, φ expresses that there is enough room to embed
each factor a_j^{x_{i,j}} in w_y, i.e., that w_x ⊑ w_y as claimed,
and both formulas are easily constructed in polynomial time.
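The intended semantics of φ and ψ can be cross-checked by brute force: expand the exponent vectors into words and test the subword relation directly, reusing is_subword from Section 2 (this is a testing aid, not the polynomial-size Presburger encoding itself). A Python sketch with ad-hoc values:

def expand(exps, alphabet):
    # exps: ℓ rows of n exponents each; returns the word w_x.
    return "".join(a * e for row in exps for a, e in zip(alphabet, row))

A = "ab"
w_x = expand([[1, 2], [0, 1]], A)   # a b b · b   = "abbb"
w_y = expand([[1, 2], [1, 1]], A)   # a b b · a b = "abbab"
assert is_subword(w_x, w_y)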
§.§ Complexity of 11
The truth problem for the 11 fragment is NP-complete.
Of course, hardness is inherited from 10. Conversely,
NP-membership is shown by a reduction to the 10 fragment. For
this reduction, we first explain how a single “unbounded” word can be made
alternation bounded while respecting its relationships with other
alternation bounded words.
For this we use a slightly different measure of alternation levels for words: we
factor words in blocks of repeating letters, writing
u=∏_i=1^ka_i^ℓ_i with ℓ_i>0 and a_i≠ a_i+1 for all
i. By “an a-block of u” we mean an occurrence of a factor a_i^ℓ_i with a_i=a.
We note that requiring some bound in the number of blocks is equivalent to
bounding the number of alternations when it comes to defining the ij
fragments. However, counting blocks is more precise.
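For testing purposes, the block factorization of a word is easy to compute; a Python sketch based on itertools.groupby:

from itertools import groupby

def blocks(w: str):
    # "aabba" -> [('a', 2), ('b', 2), ('a', 1)]; the length of this list
    # is the number of blocks used in the lemma below.
    return [(a, len(list(g))) for a, g in groupby(w)]

assert blocks("aabba") == [("a", 2), ("b", 2), ("a", 1)]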
Let t, x_1, …, x_n, y_1, …, y_m ∈ A^* such that:
* for all i, x_i ⊑ t,
* for all j, y_j ⋢ t,
* for all i and j, x_i and y_j have less than ℓ blocks,
* t has k > (m+n)·ℓ + |A| blocks.
Then there exists t' ∈ A^* such that:
* for all i, x_i ⊑ t',
* for all j, y_j ⋢ t',
* t' has either k-1 or k-2 blocks.
Given u ∈ A^*, we write π(u) for the image of the
left-most embedding of u into t. This is a set of positions in t and, in case
u ⋢ t, these positions only account for the longest prefix of u
that can be embedded in t. In particular, and since we assumed
x_i ⊑ t and y_j ⋢ t, then
|π(x_i)|=|x_i| and |π(y_j)|<|y_j| for all i,j.
Let b_0 be an a-block of t. This block is said to be irreducible if
either (1) it is the last, i.e. right-most, a-block of t, or (2) writing
t under the form t = t_0 b_0 t_1 b_1 t_2 where b_1 is the next a-block,
i.e. a ∉ t_1, one of the following holds:
* there is some i s.t. b_0 ∩ π(x_i)≠∅ and t_1 ∩ π(x_i)≠∅.
* there is j s.t. b_0 ∩ π(y_j) = ∅ and t_1 ∩ π(y_j) ≠∅ and b_1 ∩ π(y_j) ≠∅.
Otherwise b_0 is said to be reducible.
Claim: t contains a reducible block.
Indeed, every irreducible block is either a right-most a-block for some a,
or can be associated with a letter alternation in some x_i, or in some
y_j. Furthermore, this association is injective. Thus there are at most
(n+m)·ℓ irreducible blocks that are not right-most (and at most |A|
right-most blocks). Since k > (n+m)·ℓ + |A|, t has a reducible
block.
So let us pick one such reducible block, say an a-block b_0,
write t under the form t = t_0 b_0 t_1 b_1 t_2 as above, and
let t' = t_0 t_1 b_0 b_1 t_2.
Claim: t' fulfills the requirements of
<Ref>.
Since b_1 is an a-block, b_0b_1 is now a block of t' and t' has less
than k blocks. Moreover, the only other possible block merge is in t_0t_1,
thus t' has at least k-2 blocks. We now show that x_i ⊑ t'
and y_j ⋢ t' for all i,j.
* Pick some i. Since x_i ⊑ t, there
is a unique decomposition x_i = u_0 u_1 u_2 u_3 u_4 of x_i such that
the left-most embedding maps u_0 into t_0, u_1 into b_0, u_2 into
t_1, u_3 into b_1, and u_4 into t_2.
Since b_0 is reducible, one of u_1 or u_2 is the empty word, allowing x_i ⊑ t'.
* Assume, by way of contradiction, that for some j, y_j ⊑ t'.
Let z_1 be the maximal prefix of y_j that embeds into t.
We proceed to show that b_0 is irreducible.
* First, b_0 ∩ π(z_1) = ∅. Otherwise, since a ∉ t_1,
the left-most embedding of z_1 into t' = t_0 t_1 b_0 b_1 t_2 does not use
t_1 at all and we would have y_j ⊑ t_0 b_0 b_1 t_2 ⊑ t.
* Secondly, t_1 ∩ π(z_1) is not empty. If it were, since b_0 is
made of a's only and a ∉ t_1, the left-most embedding of z_1 into
t_0 t_1 b_0 b_1 t_2 would not use t_1 and again we would have y_j ⊑ t_0 b_0 b_1 t_2 ⊑ t.
* Lastly, b_1 ∩ π(z_1) ≠∅.
Otherwise, the already established fact b_0∩ π(z_1)=∅
implies that y_j embeds not only in t' but in t_0t_1t_2, which is a subword of t.
Since b_0 is reducible, we conclude that the original assumption that
y_j ⊑ t' does not hold, i.e., that y_j ⋢ t' as required.
We now proceed to prove <Ref>. Let φ be a
11 sentence, where t is the only variable which is not
alternation bounded. As a first step of our algorithm, we guess
the set of literals occurring in φ that is satisfied. After
guessing this subset, we check whether the formula would be satisfied
if exactly this subset were true (which essentially amounts to
evaluating a formula in propositional logic). If this is the
case, it remains to check whether it is possible to choose words for
all existential quantifiers of φ so that exactly this subset
of literals is true. This means, we are left with the task of checking
satisfiability of a formula φ of the form φ≡∃
tψ: Here, ψ begins with existential quantifiers for
alternation bounded variables, which are followed by a conjunction of
literals.
Every literal in ψ that involves t is of one of the following types:
* x ⊑ t, where x is an alternation bounded variable,
* y ⋢ t, where y is an alternation bounded variable,
* t ⋢ u, where u is an alternation bounded variable,
* t ⊑ z, where z is an alternation bounded variable,
* t ⊑ t,
* t ⋢ t.
Assertions of <ref> can be replaced by their truth
value. If a literal t ⊑ z of <ref> occurs in
ψ, then φ is equivalent to ∃ t ∈ (a_1^* ⋯
a_n^*)^ℓψ,
where ℓ is the alternation bound of variable z.
We can thus assume that only literals of <ref> occur in ψ. Let n
be the number of variables x that occur in literals of <ref>, m the
number of variables y that occur in literals of <ref>, ℓ the
maximum alternation level of these variables, and k the maximum alternation
bound of all variables u that appear in literals of <ref>. Let
p = max{ (m+n)· (ℓ· |A|) + |A|, k· |A| +3}.
(here ℓ and k
are multiplied by |A| to obtain a number of blocks from a maximum
alternation). Then φ is equivalent to ∃ t ∈ (a_1^* ⋯
a_n^*)^pψ. Indeed, if the restricted formula has a solution,
it is a solution for ψ. Conversely, if ψ is satisfiable via some
t∈ A^* having more than p blocks, then by <Ref>, one can
also use a t having between k· |A| and p blocks. The fact that t
has more than k· |A| blocks ensures that all literals t ⋢ u
are still satisfied.
Finally, we can replace every ∃ t in φ by a bounded
quantification and obtain an equivalent formula in 10, which proves
<Ref>.
To the authors' knowledge, the following was not known.
If PT languages are represented as boolean combinations of sets of the form
↑w with w∈ A^*, then their non-emptiness problem is NP-complete.
Membership in NP follows from <ref>. For hardness, we can
reduce CNF-SAT as follows. We encode an assignment
α: {x_1,…,x_n}→{0,1} as a word
b a^{α(x_1)} b a^{α(x_2)}⋯ b a^{α(x_n)}. With literals
x_i and ¬x_i, we associate the languages
K_{x_i}=↑(b^i a b^{n-i}) and K_{¬x_i}={a,b}^*∖↑(b^i a b^{n-i}). A clause C=L_1∨⋯∨ L_m is then translated to
K_C=K_{L_1}∪⋯∪ K_{L_m} and a conjunction of clauses C_1∧⋯∧ C_k
is satisfiable if, and only if, the PT language (b(a+ϵ))^n∩ K_{C_1}∩⋯∩
K_C_k is nonempty.
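The reduction can be exercised concretely; a Python sketch, reusing is_subword from Section 2 (helper names are ours):

def encode(alpha):
    # alpha: tuple of 0/1 values for x_1..x_n, encoded as b a^alpha(x_1) ... b a^alpha(x_n).
    return "".join("b" + "a" * v for v in alpha)

def sat_literal(word, i, n, positive):
    piece = "b" * i + "a" + "b" * (n - i)        # the word b^i a b^(n-i)
    return is_subword(piece, word) == positive   # K_{x_i} vs. its complement

alpha = (1, 0, 1)
w = encode(alpha)        # "babba"
n = len(alpha)
for i, v in enumerate(alpha, start=1):
    assert sat_literal(w, i, n, positive=True) == bool(v)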
In particular, given a finite
number of PT languages, the problem of deciding whether they intersect non-vacuously is
NP-complete. This is in contrast with general regular languages represented
by DFAs (or NFAs), for which the problem is well-known to be PSPACE-complete <cit.>.
§.§ Complexity of 12
Our next result is an upper bound for the truth problem of 12.
The truth problem for 12 is in .
We prove <ref> in two steps. The
first step of our decidability result is to transform a 12 formula
into a system of constraints where the relations among those variables without
an alternation bound have a tree shape. In the second step, we exploit the tree
shape to construct an exponential-size counter automaton for the set of
satisfying assignments.
§.§ Tree-shaped constraints
Let A={a_1,…,a_n} and let V be a set of variables. A
constraint system is a set of constraints of the form x ⊑ y,
x ⋢ y, x=y, x∈(a_1^*⋯ a_n^*)^ℓ, or x=w, where
x,y∈ V, ℓ∈ℕ and w∈ A^*. A constraint of the form x ⊑ y,
x ⋢ y, or x=y is also called an (x,y)-constraint or
(y,x)-constraint. Constraints of the form x∈ (a_1^*⋯
a_n^*)^ℓ are called alternation constraints. The set of assignments
α∈ (A^*)^V that satisfy S is denoted by S. For a
subset U⊆ V, by existentially quantifying all variables outside of
U, the constraint system S also defines a set of assignments in (A^*)^U, which we denote by
[U]S.
For a constraint system S over V, we define the graph Γ(S)=(V,E)
where {x,y}∈ E if and only if S contains an (x,y)-constraint. We say
that S is tree-shaped if Γ(S)
is a forest. Furthermore, S is called alternation bounded if every
variable occurring in S also has an alternation constraint in S.
For any disjunction-free 12-formula φ, one can construct
polynomial-size constraint systems T and S over a set of variables V'⊇ free(φ) such that
* T is tree-shaped,
* S is alternation bounded, and
* ⟦T∪ S⟧[free(φ)]=⟦φ⟧.
We will need the notion of quotients of constraint systems. The idea is to
identify certain pairs of variables. Suppose S is a constraint system over
V. Furthermore, let ∼⊆ V× V be an equivalence relation that
specifies which variables we want to identify with each other. Then we define
the quotient S/∼ as a constraint system over the variable
set V/∼ with the constraints
S/∼ = { [x] δ [y] | δ∈{⊑,⋢}, (x δ y)∈ S}
∪ {[x] = w | (x=w)∈ S }
∪ {[x]∈ (a_1^*⋯ a_n^*)^ℓ | (x∈ (a_1^*⋯ a_n^*)^ℓ) ∈ S}.
In the course of constructing the constraint systems, it will be convenient to
assume that two constraint systems are defined over disjoint sets of variables.
To this end, we need some notation to state that a constraint system is
equivalent to a formula even though its variables have different names.
Suppose we have a constraint system S over the set of variables V' and
ψ: free(φ)→ V' is an injective map. Then, via ψ, the
formula φ defines a set of assignments
⟦φ⟧[ψ] ⊆ (A^*)^ψ(free(φ)). We say that
φ and S are ψ-equivalent if
⟦S⟧[ψ]=⟦φ⟧[ψ].
We may clearly assume that all literals are of the form
x ⊑ y, x ⋢ y, or x=w for w∈ A^* (and there are no
literals of the form w ⊑ x, etc.).
We show the following stronger statement. Let B be the set of variables in
φ that are alternation bounded. For each disjunction-free
12-formula φ, there is a set of variables V', constraint
systems T and S over V', and an injective map ψ: free(φ)→ V' such that the following holds. If B'⊆ V'
denotes the set of variables for which there is an alternation bound in S,
then
* T is tree-shaped,
* S is alternation-bounded,
* T∪ S and φ are ψ-equivalent,
* for every x∈ B we have ψ(x)∈ B', and
* if |free(φ)∖ B|=2 with free(φ)∖ B={x,y}, then
ψ(x) and ψ(y) are either neighbors in Γ(T) or in distinct components.
To prove this statement by induction, we need to consider three cases.
* Literals, i.e. x ⊑ y, x ⋢ y, or x=w. There are only two
variables, so we can just take the literal as the set T and let S
contain the global alternation constraints for the variables in the literal.
* Existentially quantified formulas ∃ xφ. Here, we
just reduce the set of free variables, so it suffices to adjust the map ψ.
* Conjunctions φ≡φ_0∧φ_1. Suppose we have
constructed T_i,S_i,V'_i,ψ_i,B'_i as above for i=0,1. We may clearly
assume V'_0∩ V'_1=∅. We construct T,S,V',ψ,B' as follows. Let
∼⊆ (V'_0∪ V'_1)× (V'_0∪ V'_1) be the smallest
equivalence relation with ψ_0(x)∼ψ_1(x) for all
x∈ free(φ)∖ B. Then we take V'=(V'_0∪ V'_1)/∼
and define T=(T_0∪ T_1)/∼. Moreover, let
S=S_0∪ S_1∪{[ψ_0(x)]=[ψ_1(x)] | x∈ free(φ)∩ B}.
Finally, we choose ψ: free(φ)→ V' so that
ψ(x)=[ψ_0(x)] if x∈ free(φ_0) and ψ(x)=[ψ_1(x)] if
x∉ free(φ_0).
It is clear that the first four conditions above are
satisfied. It remains to verify the last one.
We distinguish the following cases.
* |free(φ_i)∖ B|≤ 1 for some i∈{0,1}. Then
there is at most one variable in V'_i that is identified with a variable in
V'_1-i by ∼. Hence, Γ(T) is obtained from Γ(T_0) and
Γ(T_1) either by disjoint union or by identifying one vertex from
Γ(T_0) with one vertex from Γ(T_1). In any case, Γ(T) is a
forest. Hence, the tree-shape condition is satisfied.
Let us now show the last condition. Hence, assume |free(φ)∖ B|=2 with free(φ)∖ B={x,y}.
Clearly, if free(φ_0)∖ B and free(φ_1)∖ B are disjoint, then no variables are identified and hence ψ(x) and
ψ(y) are in distinct components of Γ(T). Hence, we assume that
free(φ_0)∖ B and free(φ_1)∖ B have a
variable in common, say x.
Since |free(φ_0)∖ B|≤ 1 (without loss of generality), this means
free(φ_1)∖ B={x,y} and hence ψ_1(x) and ψ_1(y)
are neighbors in Γ(T_1) or they are in distinct components of Γ(T_1).
Γ(T) is obtained from Γ(T_0) and Γ(T_1) by identifying
ψ_0(x) and ψ_1(x). Therefore, ψ(x) and ψ(y) are neighbors
in Γ(T) if and only if they are neighbors in Γ(T_1). Moreover,
they are in disjoint components of Γ(T) if and only if they are in
disjoint components of Γ(T_1). This proves <ref>.
* |free(φ_0)∖ B|=|free(φ_1)∖ B|=2. Write
free(φ_0)∖ B=free(φ_1)∖ B={x,y}. Note that
Γ(T) is obtained from Γ(T_0) and Γ(T_1) by identifying
ψ_0(x) with ψ_1(x) and identifying ψ_0(y) with ψ_1(y).
Since for each i∈{0,1} we know that ψ_i(x) and ψ_i(y) are
either neighbors in Γ(T_i) or in distinct components, this clearly
implies that Γ(T) is a forest. Hence, we have shown the tree-shape condition.
Moreover, if for some i∈{0,1}, ψ_i(x) and ψ_i(y) are neighbors
in Γ(T_i), then ψ(x) and ψ(y) are neighbors in Γ(T).
Otherwise, ψ(x) and ψ(y) are in disjoint components of Γ(T).
This proves the last condition.
§.§ Counter automata
In the next step, we exploit the decomposition into a tree-shaped constraint
system and an alternation-bounded constraint system to reduce satisfiability to
non-emptiness of counter automata.
To this end, we use a type of counter automata known as Parikh
automata <cit.>. In terms of expressiveness,
these are equivalent to the classical reversal-bounded counter
automata <cit.>, but their syntax makes them convenient for our
purposes.
Let V be a finite set of variables. A counter automaton over V is
a tuple 𝒜=(Q,A,C,E,q_0,F), where Q is a finite set of states, A
is the input alphabet, C is a set of counters,
E⊆ Q× (A∪{ε})^V × ℕ^C × Q
is the finite set of edges, q_0∈ Q is the
initial state, and F is a finite set of pairs
(q,φ), where q∈ Q and φ is an existential Presburger
formula with free variables in C. A configuration of 𝒜 is a tuple
(q,α,μ), where q∈ Q, α∈ (A^*)^V, and μ∈ℕ^C. The step
relation is defined as follows. We have
(q,α,μ) →_𝒜 (q',α',μ')
iff there is an edge (q,β,ν,q')∈ E such that α'=αβ and μ'=μ+ν.
A counter automaton 𝒜 accepts a set of assignments, namely
⟦𝒜⟧={α∈(A^*)^V | ∃ (q,φ)∈ F:
(q_0,ε,0) →_𝒜^* (q,α,μ) and μ ⊨ φ} .
We call a subset R⊆ (A^*)^V a counter relation if there is a
counter automaton 𝒜 with R=⟦𝒜⟧. If |V|=1, say V={x}, then 𝒜
defines a subset of A^*, namely the language {w∈ A^* | (x↦ w)∈⟦𝒜⟧}. Languages of this form are called counter languages.
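As a small illustration (our own example, not from the paper), a counter automaton over V={x} with counters C={c_a,c_b} accepts the counter language {a^n b^n | n∈ℕ}: take Q={q_0,q_1},
E={(q_0, x↦ a, (1,0), q_0), (q_0, x↦ b, (0,1), q_1), (q_1, x↦ b, (0,1), q_1)},
with the counter updates written as vectors over (c_a,c_b), and F={(q_0, c_a=c_b), (q_1, c_a=c_b)}; the Presburger acceptance condition c_a=c_b enforces equally many a's and b's.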
Suppose V_0,V_1 are sets of variables with |V_0∩ V_1|≤ 1. Let
𝒜_i=(Q_i,A,C_i,E_i,q_0,i,F_i) be a counter automaton over V_i for
i=0,1 such that C_0∩ C_1=∅. Then a simple product construction
yields a counter automaton
𝒜_0⊗𝒜_1=(Q_0× Q_1,A,C_0∪ C_1,E,(q_0,0,q_0,1),F) over V_0∪ V_1
such that ((p_0,p_1), α, μ) →_𝒜_0⊗𝒜_1 ((p'_0,p'_1),α',μ') iff
(p_i,α|_V_i, μ|_C_i) →_𝒜_i (p'_i,α'|_V_i,μ'|_C_i)
for i=0,1, and
F={((p_0,p_1), φ_0∧φ_1) | (p_i,φ_i)∈ F_i for i=0,1}.
Given a tree-shaped constraint system T, one can construct in exponential time
a counter automaton 𝒜 with ⟦𝒜⟧=⟦T⟧.
First, observe that it suffices to consider the case where every constraint in
T involves two variables: the other constraints have the form x=w for some
w∈ A^* or x∈(a_1^*⋯ a_n^*)^ℓ for some ℓ∈ℕ and can easily
be imposed afterwards in the counter automaton.
We construct the automaton inductively. The statement is trivial if T
involves only one variable, so assume |V|≥ 2 from now on.
Since Γ(T) is a forest, it contains a vertex x∈ V with at most one
neighbor. Let T' be the constraint system obtained from T by removing all
constraints involving x, and suppose we have already constructed a counter
automaton 𝒜' with ⟦𝒜'⟧=⟦T'⟧.
Now if x has no neighbor, it is easy to construct the automaton for T. So
suppose x has a unique neighbor y. Then, the additional constraints
imposed in T are all (x,y)-constraints. Let T” be the set of all
(x,y)-constraints in T. Now note that if 𝒜” is a counter automaton
with ⟦𝒜”⟧=⟦T”⟧, then we have
⟦𝒜'⊗𝒜”⟧ = ⟦T'∪ T”⟧=⟦T⟧.
Therefore, it suffices to construct in polynomial time a counter automaton 𝒜”
with ⟦𝒜”⟧=⟦T”⟧.
Observe that any set of (x,y)-constraints can be written as a disjunction of
some of the following constraints:
(i) x=y, (ii) x ⊏ y, (iii) y ⊏ x, (iv) x and y are incomparable,
where ⊏ denotes the strict subword ordering.
Since it is easy to construct a counter automaton for the union of two
relations accepted by counter automata, it suffices to construct a counter
automaton for the set of solutions to each of the constraints
(i)—(iv).
This is obvious in all cases but the last. In that last case, one can notice
that x and y are incomparable if and only if either
* |x|<|y| and x ⋢ y, or
* |y|<|x| and y ⋢ x, or
* |x|=|y| and x ≠ y.
Note that each of these cases is easily realized in a counter automaton since
we can use the counters to guarantee the length constraints. Moreover, the
resulting counter automaton can clearly be constructed in polynomial time,
which completes the proof.
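This case distinction is easy to make executable; the following Python fragment is a sketch of ours (it checks the predicate directly, whereas the automaton additionally tracks the lengths in its counters):

def is_subword(w, u):
    it = iter(u)
    return all(c in it for c in w)

def incomparable(x, y):
    # x and y are incomparable under the subword ordering
    if len(x) < len(y):
        return not is_subword(x, y)
    if len(y) < len(x):
        return not is_subword(y, x)
    return x != y  # equal lengths: comparable iff equal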
We can now prove <ref> by taking the
constraint systems provided by <ref> and
constructing a counter automaton just for T using
<ref>. Then, we can impose the
constraints in S by using additional counters. Note that since all variables
in S are alternation bounded, we can store these words, in the form of their
occurring exponents, in counters. We can then install the polynomial-size
Presburger formulas from <ref> in the counter automaton
to impose the binary constraints required by S. This results in an
exponential size counter automaton that accepts the satisfying assignments of
φ. The upper bound then follows from the fact that
non-emptiness for counter automata is NP-complete.
Since counter automata are only a slight extension of reversal-bounded
counter automata, the following is well-known.
The non-emptiness problem for counter automata is NP-complete.
Given a counter automaton 𝒜=(Q,A,C,E,q_0,F) and a state q∈ Q, we
can construct an existential Presburger formula θ_q with a free variable
for each edge in E that is satisfied for an assignment ν∈ℕ^E iff
there is a run from q_0 to q in which each edge e∈ E occurs exactly
ν(e) times. This is just the fact that we can construct in polynomial time
an existential Presburger formula for the Parikh image of a finite
automaton <cit.>.
For each edge e=(p,α,μ,p'), define μ_e=μ. Then the formula
⋁_(q,φ)∈ F ( θ_q ∧ ⋀_c∈ C c = ∑_e∈ E μ_e(c)· e ∧ φ )
expresses precisely that there is an accepting run. The fact that the
satisfiability problem for existential Presburger arithmetic is
NP-complete <cit.> now gives us the upper bound as well as the
lower bound.
We are now ready to prove <ref>.
First we use <ref> to turn the formula
φ into constraint systems T and S such that T is tree-shaped, S
is alternation bounded, and ⟦T∪ S⟧[free(φ)]=⟦φ⟧. Then we use
<ref> to obtain in exponential time a
counter automaton 𝒜 with ⟦𝒜⟧=⟦T⟧.
Let U={x_1,…,x_m} be the set of variables occurring in S. It
remains to impose the constraints in S. We do this by first building the
product with one automaton 𝒜_i for each x_i. This automaton imposes the
alternation constraint on x_i and stores the word read into x_i in a set of
counters. Note that this is possible because the word is alternation bounded.
After taking the product with all these automata, we impose the remaining constraints
from S (which are binary constraints or of the form x=w with x∈ U, w∈
A^*) by adding existential Presburger formulas that express subword
constraints as provided by <ref>.
We may clearly assume that whenever there is a variable x, an alternation
constraint x∈(a_1^*⋯ a_n^*)^ℓ, and a constraint x=w, then
w∈(a_1^*⋯ a_n^*)^ℓ: Otherwise, the system is not satisfiable and
clearly has an equivalent counter automaton.
Let A={a_1,…,a_n}. Since S is alternation bounded, S contains an
alternation constraint x_i∈ (a_1^*⋯ a_n^*)^ℓ_i for each
i∈[1,m]. Let ℓ be the maximum of all these ℓ_i. We will use the
counter variables c_i,j,k for each i∈ [1,m], j∈ [1,ℓ], and k∈
[1,n]. We set up the automaton 𝒜_i over V_i={x_i} such that it has an
initial state q_0, a state q_1, and satisfies
(q_0,ε,0) →_𝒜_i^* (q_1,α,μ)
if and only if α maps x_i to the word
a_1^μ(c_i,1,1)⋯ a_n^μ(c_i,1,n)⋯ a_1^μ(c_i,ℓ_i,1)⋯ a_n^μ(c_i,ℓ_i,n).
This can clearly be done with n·ℓ states. Moreover, let
F_i={(q_1,⊤)}. Note that 𝒜_i has the counters c_i,j,k even for
j>ℓ_i although it never adds to them. The reason we have the variables
c_i,j,k for j>ℓ_i is that this way, the formulas from
<ref> are applicable.
Note that since each V_i is a singleton, the automaton
ℬ=𝒜⊗𝒜_1⊗⋯⊗𝒜_m is defined. It satisfies
⟦ℬ⟧=⟦T∪ S'⟧, where S' is the set of alternation
constraints in S.
It remains to impose the remaining constraints from S, namely the
binary constraints and those of the form x=w with x∈ U, w∈ A^*. Let
R⊆ S be the set of these remaining constraints. For each i and for
μ∈ℕ^C, let w_μ,i be the word in
<ref>. According to <ref>,
for each constraint r∈ R, we can construct a polynomial-size existential
Presburger formula κ_r such that μ ⊨ κ_r if and only if
the constraint is satisfied for the assignment α with
α(x_i)=w_μ,i. Moreover, let κ=⋀_r∈ R κ_r.
Suppose ℬ=(Q,A,C,E,q_0,F).
In the last step, we construct the counter automaton 𝒜'=(Q,A,C,E,q_0,F'), where
F'={(q,ψ∧κ) | (q,ψ)∈ F }.
Now 𝒜' clearly satisfies ⟦𝒜'⟧=⟦T∪ S⟧. Thus, if
we obtain 𝒜” from 𝒜' by projecting the input to those variables that
occur freely in φ, then
⟦𝒜”⟧=⟦T∪ S⟧[free(φ)]. Moreover, 𝒜” can
clearly be constructed in exponential time.
The membership of the truth problem in NEXP follows from the fact that
emptiness of counter automata is in NP (<ref>).
§ EXPRESSIVENESS
In this section, we shed some light on which predicates or languages are
definable in our fragments ij.
§.§ Expressiveness of the 10 fragment
A language L definable in 10 always satisfies
L⊆ (a_1^*⋯ a_n^*)^ℓ for some ℓ∈ℕ. Hence, it can be
described by the set of vectors that contain the occurring exponents. As can
be derived from results in <ref>, these sets are always semilinear.
In this section, we provide a decidable characterization of the semilinear sets
that are expressible in this way. Stating the characterization requires some
terminology.
Let V be a set of variables. By ℕ^V, we denote the set of mappings
V→ℕ. By a partition of V, we mean a set P={V_1,…,V_n}
of subsets V_1,…,V_n⊆ V such that V_i∩ V_j=∅ for
i≠ j and V_1∪⋯∪ V_n=V. If U∩ V=∅ and α∈ℕ^U, β∈ℕ^V, we write α×β for the map
γ∈ℕ^U∪ V such that γ|_U=α and γ|_V=β.
Furthermore, if S⊆ℕ^U, T⊆ℕ^V, then S×
T={α×β | α∈ S, β∈ T}.
A semilinear set S⊆ℕ^V is P-compatible if it has
a semilinear representation where each occurring period vector belongs
to ℕ^V_i for some i∈[1,n].
Suppose L⊆ (a_1^*⋯ a_n^*)^ℓ. Let V={x_i,j|
i∈[1,ℓ], j∈[1,n]} and consider the partition P={V_1,…,V_n}
where V_j={x_i,j| i∈[1,ℓ]} for j∈[1,n]. The language L is
definable in 10 if, and only if, the set
{α∈ℕ^V | a_1^α(x_1,1)⋯ a_n^α(x_1,n)⋯ a_1^α(x_ℓ,1)⋯ a_n^α(x_ℓ,n)∈ L }
is a P-compatible semilinear set.
For example, this means we can define {a^nba^n | n∈ℕ}, but not
{a^nb^n | n∈ℕ}: a semilinear representation for the latter requires
a period that produces both a's and b's.
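Concretely (our own elaboration of the example): for {a^nba^n | n∈ℕ}⊆(a^*b^*)^2, the exponent vectors (α(x_1,1),α(x_1,2),α(x_2,1),α(x_2,2)) form the linear set
{(0,1,0,0) + n·(1,0,1,0) | n∈ℕ},
whose single period (1,0,1,0) only touches the a-coordinates V_1={x_1,1,x_2,1}, so the set is P-compatible. For {a^nb^n | n∈ℕ} (with ℓ=1), the exponent vectors are {n·(1,1) | n∈ℕ}, and the unavoidable period (1,1) mixes the a-coordinate with the b-coordinate.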
The proof of <ref> employs a characterization of
P-compatible sets in terms of Presburger arithmetic. Let V be a set of
variables and φ be a Presburger formula whose variables are in V.
Let P={V_1,…,V_n} be a partition of V. We say φ is
P-compatible if there is a set of variables V'⊇ V and a
partition P'={V'_1,…,V'_n} of V' such that
* V_j⊆ V'_j for each j∈[1,n] and
* in each literal in φ, all variables belong to the same set V'_j
for some j∈[1,n].
The following is a simple observation.
Let P={V_1,…,V_n} be a partition of V. For sets S⊆ℕ^V,
the following conditions are equivalent:
* S is a P-compatible semilinear set.
* S=⟦φ⟧ for some P-compatible existential
Presburger formula φ.
* S is a finite union of sets of the form A_1×⋯× A_n
where each A_j is a semilinear subset of ℕ^V_j.
The directions (3)⇒(1) and
(1)⇒(2) are easy to see, so we show
(2)⇒(3).
If a set satisfies the condition of (3), then projecting
to a subset of the coordinates yields again a set of this form. Therefore, it
suffices to consider the case where φ contains no quantifiers.
Now, bring φ into disjunctive normal form. Since each literal in
φ only mentions variables from V_j for some j∈[1,n], we can sort
the literals of each co-clause of the DNF according to the subset V_j they
mention. Hence, we arrive at the form
φ≡⋁_i=1^k ⋀_j=1^n φ_i,j,
where φ_i,j only mentions variables from V_j. This implies
⟦φ⟧ = ⋃_i=1^k ⟦φ_i,1⟧×⋯×⟦φ_i,n⟧,
which is the form required in (3).
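For instance (our own example): for V={x,y} and P={{x},{y}}, the set
{(m,n)∈ℕ^V | m ≡ n (mod 2)} = (2ℕ×2ℕ) ∪ ((1+2ℕ)×(1+2ℕ))
is P-compatible, being a finite union of products of semilinear sets, whereas the diagonal {(n,n) | n∈ℕ} is not: every semilinear representation of it needs the period (1,1), which lies in neither ℕ^{x} nor ℕ^{y}.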
We are now ready to prove <ref>.
If L is definable in 10, we can write down a Presburger formula
that defines S. Here, in order to express the subword ordering (and its
negation), we use the formulas from <ref>. Observe
that these formulas are P-compatible. This means that S is P-compatible.
For the converse, suppose S is P-compatible. According to
<ref>, S is defined by a
P-compatible existential Presburger formula φ. Hence, φ has
free variables V={x_i,j| i∈ [1,ℓ], j∈[1,n]} and uses variables
V'⊇ V that are partitioned as V'=V'_1∪⋯∪ V'_n (with the V'_j pairwise disjoint) so that in
each literal, all occurring variables belong to the same V'_j.
In the first step, we turn φ into a 10 formula
φ̅ with the same number of free variables. For each x∈ V', we
take a fresh variable x̅, which will hold words in a_j^*. More
precisely, we have our new variables V̅={x̅ | x∈ V'} and a
mapping ι: ℕ^V'→ (A^*)^V̅ defined by
ι(α)(x̅)=a_j^α(x), where j is the unique index with
x∈ V'_j. We want to construct φ̅ so that
⟦φ̅⟧=ι(⟦φ⟧).
We obtain φ̅ from φ as follows. For each literal x=y+z,
we know that there is a j∈[1,n] with x,y,z∈ V'_j, so we can replace
the literal with |x̅|_a_j=|y̅|_a_j+|z̅|_a_j, which is
expressible in 10 according to <ref> in the proof of
<ref> (note that in this case, we actually are in
10 because the variables x̅, y̅, and z̅ range
over a_j^* and are thus alternation bounded). Since we can clearly also
express |x̅|_a_j ≤ |y̅|_a_j in 10, we use this to
implement literals x ≤ y with x,y∈ V'_j. Literals of the form x=k
with k∈ℕ and x∈ V'_j can just be replaced by x̅=a_j^k. Then we
clearly have ⟦φ̅⟧=ι(⟦φ⟧).
In the second step, we construct the words
a_1^α(x_1,1)⋯ a_n^α(x_1,n)⋯ a_1^α(x_ℓ,1)⋯ a_n^α(x_ℓ,n)
for α∈φ. This is possible thanks to
<ref> of the proof of <ref>. We can
express
u=x̅_1,1⋯x̅_1,n⋯x̅_ℓ,1⋯x̅_ℓ,n
by applying <ref> exactly ℓ· n-1 times, once to
append each x̅_i,j to the word defined so far, using ℓ· n-1
additional variables. Of course, all these variables can be restricted to
(a_1^*⋯ a_n^*)^ℓ, which means the resulting formula belongs to
10. Moreover, it clearly defines L.
Our characterization of 10 is decidable.
We use a technique from <cit.>, where it is shown that
recognizability is decidable for semilinear sets. The idea is to characterize
P-compatibility as the finiteness of the index of certain equivalence
relations, which can be expressed in Presburger arithmetic.
Given a semilinear subset S⊆^V and a partition P of V, it is
decidable whether S is P-compatible.
For α∈ℕ^V_i and γ∈ℕ^V, we write
γ[i/α] for the element of ℕ^V with
γ[i/α](v)=α(v) if v∈ V_i, and
γ[i/α](v)=γ(v) otherwise.
For α,β∈ℕ^V_i, we write α∼_iβ if for every
γ∈ℕ^V, we have γ[i/α]∈ S if and only if
γ[i/β]∈ S. Moreover, for γ∈ℕ^V, we will use the norm
‖·‖ defined by ‖γ‖=∑_v∈ V γ(v). We claim that
S is P-compatible if and only if
∃ k∈ℕ ⋀_i=1^n ∀α∈ℕ^V_i ∃β∈ℕ^V_i ‖β‖≤ k and α∼_iβ.
Suppose this condition holds. For each β∈ℕ^V_i, we define
S_i,β={α∈ℕ^V_i | α∼_iβ}.
Then S_i,β⊆ℕ^V_i is semilinear and we have
S = ⋃_β_1∈ℕ^V_1,‖β_1‖≤ k⋯⋃_β_n∈ℕ^V_n,‖β_n‖≤ k S_1,β_1×⋯× S_n,β_n.
Hence, S is P-compatible.
Now assume S is P-compatible. Then we can write S=⋃_j=1^ℓ
A_j,1×⋯× A_j,n, where each A_j,i⊆ℕ^V_i is
semilinear. For each i∈[1,n], consider the function κ_i: ℕ^V_i→ 2^[1,ℓ] with
κ_i(α)={j∈[1,ℓ] | α∈ A_j,i}.
Observe that if κ_i(α)=κ_i(β), then α∼_iβ.
Since κ_i has a finite codomain, this means the equivalence relation
∼_i on ℕ^V_i has finite index. This immediately implies
the condition above.
Since we can clearly formulate this condition in
Presburger arithmetic, P-compatibility is decidable.
In fact, it is not
hard to see that if P consists only of singletons, a semilinear set is
P-compatible iff it is recognizable. Hence,
<ref> generalizes the decidability of
recognizability.
Let M be a monoid. A subset S⊆ M is called recognizable if
there is a finite monoid F and a morphism φ M→ F such that
S=φ^-1(φ(S)).
Suppose P consists only of singletons. Then S⊆^V is
P-compatible if and only if it is recognizable.
Mezei's Theorem <cit.> states that if M_1,…,M_n are monoids,
then a subset of M_1×⋯× M_n is recognizable if and only if it
is a finite union of sets S_1×⋯× S_n such that S_i⊆
M_i is recognizable for i∈{1,…,n}.
Combined with the fact that a subset of ℕ is semilinear if and only if it is
recognizable, the condition (3) of
<ref> yields the result.
§.§ Expressiveness of 10 vs. 11
It is obvious that
11 is strictly more expressive than 10, because it permits the definition of
languages with unbounded alternations, such as {a,b}^*. But is this the
only difference between the two fragments? In other words: Restricted to alternation
bounded languages, is 11 more expressive? The answer is no.
If L⊆ (a_1^*⋯ a_n^*)^ℓ, then L is definable in
11 if and only if it is definable in 10.
Let φ be a 11 formula where the free variable is
alternation bounded and the variable t is not alternation
bounded. We can transform φ into a disjunction
⋁_i=1^k φ_i, where each φ_i belongs to
11 and consists of a block of existential quantifiers
followed by a conjunction of literals. Then, the proof of
<ref> yields for each φ_i a (polynomial) bound
p_i so that if we replace the quantifier ∃ t in
φ_i by ∃ t∈ (a_1^*⋯ a_n^*)^p_i, the
resulting 10 formula is equivalent.
This allows us to reason beyond alternation bounded languages. We have seen in
the proof of <ref> that one can express “|u|_a=|v|_b”
in 13, which required significantly more steps, and two more
alternation unbounded variables, than the ostensibly similar “|u|_a=|v|_a”.
This raises the question: Can we define the former in 11? We cannot:
The predicate “|u|_a=|u|_b” is not definable in 11.
Otherwise, we could define the set {a^nb^n |
n≥ 0} in 11, hence in 10, contradicting <ref>.
§.§ Expressiveness of 12 vs. 13
We have seen in <ref> that 13 can express all
recursively enumerable unary languages. Moreover,
<ref> tells us that the languages
definable in 12 are always counter languages.
How do the fragments compare with respect to natural (binary) predicates on
words? We already know from <ref> in <ref>
that over two letters, the prefix relation is expressible in 13. Note that the
following proposition does not follow directly from the
fact that for any 12 formula φ, the set
⟦φ⟧ is a counter relation, as shown in <ref>.
This is because the prefix relation is a counter relation (and even rational).
In 12, “u is a prefix of v” is not expressible.
Suppose the prefix relation were expressible using a 12 formula
φ. Then, by reversing all constants in φ, we obtain a formula
expressing the suffix relation. Let ⊴_p denote the prefix relation and
⊴_s the suffix relation. We can now express
∃ v∈{a,b}^* v ⊴_p u ∧ v ⊴_s u ∧ |u|_a=2· |v|_a ∧ |u|_b=2· |v|_b,
which is equivalent to u∈ S, where S={vv | v∈{a,b}^*}. Note
that |u|_a=2· |v|_a can be expressed by using |u|_a=|v|_a+|v|_a,
which can be done in 10 according to <ref> in
<ref>.
However, S is not a counter language. This is due to the fact that the class
of recursively enumerable languages is the smallest language class that
contains S and is closed under rational transductions, union, and intersection
(this can be shown as in the case of the set of
palindromes <cit.>). However, the class of counter languages
also has these closure properties and is properly contained in the recursively
enumerable languages. Hence, S is indeed not a counter language. This is in
contradiction with the fact that for any 12 formula φ,
the set ⟦φ⟧ is a counter relation, as shown in <ref>.
§ CONCLUSION
We have shown that the Σ_1 theory of the subword ordering is undecidable
(already for two letters), if all words are available as constants. This
implies that the Σ_2 theory is undecidable already for two letters, even
without constants.
In order to shed light on decidable fragments of first-order logic over
the structure A, we introduced the fragments ij. We have completely
settled their decidability status. In terms of complexity, the only open case
is the 12 fragment. We have an NP lower bound and a
NEXP upper bound.
This aligns with the situation for expressiveness. We have a decidable
characterization for the expressiveness of 10 and, obvious
exceptions aside, 11 is as expressive as 10. However, we
do not know whether 11 and 12 differ significantly: Of
course, 12 can have two alternation unbounded free variables, but it
is conceivable that 11 and 12 define the same languages.
Sequential Convex Programming for the Efficient Verification of Parametric MDPs

Murat Cubuktepe, Nils Jansen, Sebastian Junges, Joost-Pieter Katoen, Ivan Papusha, Hasan A. Poonawala, Ufuk Topcu

http://arxiv.org/abs/1702.00063v1
Multi-objective verification problems of parametric Markov decision
processes under optimality criteria can be naturally expressed as
nonlinear programs. We observe that many of these computationally
demanding problems belong to the subclass of signomial programs.
This insight allows for a sequential optimization algorithm to
efficiently compute sound but possibly suboptimal solutions. Each stage
of this algorithm solves a geometric programming problem. These geometric
programs are obtained by convexifying the nonconvex constraints of the
original problem.
Direct applications of the encodings as nonlinear programs are model repair and parameter synthesis.
We demonstrate the scalability and quality of our
approach on well-known benchmarks.
§ INTRODUCTION
We study the applicability of convex optimization to the formal verification of systems that exhibit randomness or stochastic uncertainties. Such systems are formally represented by so-called parametric Markov models.
In fact, many real-world systems exhibit random behavior and stochastic uncertainties. One major example is in the field of robotics, where the presence of measurement noise or input disturbances requires special controller synthesis techniques <cit.> that achieve robustness of robot actions against uncertainties in the robot model and the environment.
On the other hand, formal verification offers methods for rigorously proving or disproving properties about the system behavior, and synthesizing strategies that satisfy these properties.
In particular, model checking <cit.> is a well-studied technique that provides guarantees on appropriate behavior for all possible events and scenarios.
Model checking can be applied to systems with stochastic uncertainties, including discrete-time Markov chains (MCs), Markov decision processes (MDPs), and their continuous-time counterparts <cit.>.
Probabilistic model checkers are able to verify reachability properties like “the probability of reaching a set of unsafe states is ≤ 10%” and expected costs properties like “the expected cost of reaching a goal state is ≤ 20.”
A rich set of properties, specified by linear- and branching-time logics, reduces to such properties <cit.>.
Tools like PRISM <cit.>, STORM <cit.>, and
iscasMc <cit.> are probabilistic model checkers capable of handling a wide range of large-scale problems.
Key requirements for applying model checking are a reliable system model and formal specifications of desired or undesired behaviors.
As a result, most approaches assume that models of the stochastic uncertainties are precisely given. For example, if a system description includes an environmental disturbance, the mean of that disturbance should be known before formal statements are made about expected system behavior.
However, the desire to treat many applications where uncertainty measures (e.g., faultiness, reliability, reaction rates, packet loss ratio) are not exactly known at design time
gives rise to parametric probabilistic models <cit.>. Here, transition probabilities are expressed as functions over system parameters, i.e., descriptions of uncertainties.
In this setting, parameter synthesis addresses the problem of computing parameter instantiations leading to satisfaction of system specifications.
More precisely, parameters are mapped to concrete probabilities inducing the resulting instantiated model to satisfy specifications.
A direct application is model repair <cit.>, where a concrete model (without parameters) is changed (repaired) such that specifications are satisfied.
Dedicated tools like PARAM <cit.>, PRISM <cit.>, or PROPhESY <cit.> compute rational functions over parameters that express reachability probabilities or expected costs in a parametric Markov chain (pMC). These optimized tools work with millions of states but are restricted to a few parameters, as the necessary computation of greatest common divisors does not scale well with the number of parameters.
Moreover, the resulting functions are inherently nonlinear and often of high degree. Evaluation by an SMT solver over nonlinear arithmetic such as Z3 <cit.> suffers from the fact that the solving procedures are exponential in the degree of polynomials and the number of variables.
This paper takes an alternative perspective.
We discuss a general nonlinear programming formulation for the verification of parametric Markov decision processes (pMDPs).
The powerful modeling capabilities of nonlinear programs (NLPs) enable incorporating multi-objective properties and penalties on the parameters of the pMDP.
However, because of their generality, solving NLPs to find a global optimum is difficult. Even feasible solutions (satisfying the constraints) cannot always be computed efficiently <cit.>.
In contrast, for the class of NLPs called convex optimization problems, efficient methods to compute feasible solutions and global optima even for large-scale problems are available <cit.>.
We therefore propose a novel automated method of utilizing convex optimization for pMDPs.
Many NLP problems for pMDPs belong to the class of signomial programs (SGPs), a certain class of nonconvex optimization problems.
For instance, all benchmarks available at the PARAM–webpage <cit.> belong to this class.
Restricting the general pMDP problem accordingly yields a direct and efficient synthesis method—formulated as an NLP—for a large class of pMDP problems.
We list the two main technical results of this paper:
* We relax nonconvex constraints in SGPs and apply a simple transformation to the parameter functions. The resulting programs are geometric programs (GPs) <cit.>, a class of convex programs. We show that a solution to the relaxed GP induces feasibility (satisfaction of all specifications) in the original pMDP problem. Note that solving GPs is polynomial in the number of variables.
* Given an initial feasible solution, we use a technique called sequential convex programming <cit.> to improve a signomial objective. This local optimization method for nonconvex problems leverages convex optimization by solving a sequence of convex approximations (GPs) of the original SGP.
Sequential convex programming is known to efficiently find a feasible solution with
good, though not necessarily globally optimal, objective values <cit.>.
We initialize the sequence with a feasible solution (obtained from the GP) of the original problem and compute a trust region. Inside this region, the optimal value of the approximation of the SGP is at least as good as the objective value at the feasible solution of the GP.
The optimal solution of the approximation is then the initial point of the next iteration with a new trust region.
This procedure is iterated to approximate a local optimum of the original problem.
Utilizing our results, we discuss the concrete problems of parameter synthesis and model repair for multiple specifications for pMDPs.
Experimental results with a prototype implementation show the applicability of our optimization methods to benchmarks of up to 10^5 states.
As solving GPs is polynomial in the number of variables, our approaches are relatively insensitive to the number of parameters in pMDPs. This is an improvement over state-of-the-art approaches that leverage SMT, which—for our class of problems—scale exponentially in variables and the degree of polynomials.
This is substantiated by our experiments.
Related work.
Several approaches exist for pMCs <cit.> while the number of approaches for pMDPs <cit.> is limited.
Ceska et al. <cit.> synthesize rate parameters in stochastic biochemical networks.
Multi-objective model checking of non-parametric MDPs <cit.> is a convex problem <cit.>.
Bortolussi et al. <cit.> developed a Bayesian statistical algorithm for properties on stochastic population models.
Convex uncertainties in MDPs without parameter dependencies are discussed in <cit.>.
Parametric probabilistic models are used to rank patches in the repair of software <cit.> and to compute perturbation bounds <cit.>.
§ PRELIMINARIES
A probability distribution over a finite or countably infinite set X is a function μ: X→[0,1] with ∑_x∈ X μ(x)=1.
The set of all distributions on X is denoted by Distr(X).
Let V={x_1,…,x_n} be a finite set of strictly positive real-valued variables.
A monomial over V is an expression of the form
g=c· x_1^a_1⋯ x_n^a_n ,
where c∈ℝ_>0 is a positive coefficient, and a_i∈ℝ are exponents for 1≤ i≤ n.
A posynomial over V is a sum of one or more monomials:
f=∑_k=1^K c_k· x_1^a_1k⋯ x_n^a_nk .
If c_k is allowed to be a negative real number for any 1≤ k≤ K, then the expression (<ref>) is a signomial. The sets of all monomials, posynomials, and signomials over V are denoted by Mon[V], Pos[V], and Sig[V], respectively.
This definition of monomials differs from the standard algebraic definition where exponents are positive integers with no restriction on the coefficient sign. A sum of monomials is then called a polynomial. Our definitions are consistent with <cit.>.
For a set of real-valued variables V, a valuation u over V is a function u: V→ℝ.
The set of all valuations over V is ℝ^V.
Applying valuation u to a monomial g over V yields a real number g[u]∈ℝ by replacing each occurrence of a variable x∈ V in g by u(x); the procedure is analogous for posynomials and signomials using standard arithmetic operations.
A parametric Markov decision process (pMDP) is a tuple ℳ=(S, s_I, Act, V, 𝒫) with a finite set S of states, an initial state s_I∈ S, a finite set Act of actions, a finite set V of real-valued variables (parameters), and a transition function 𝒫: S× Act× S→Sig[V] satisfying Act(s)≠∅ for all s∈ S, where Act(s)={α∈ Act | ∃ s'∈ S. 𝒫(s,α,s')≠ 0}.
If for all s∈ S it holds that |Act(s)|=1, ℳ is called a parametric discrete-time Markov chain (pMC).
Act(s) is the set of enabled actions at state s; as Act(s)≠∅, there are no deadlock states.
Costs are defined using a state–action cost function c: S× Act→ℝ_≥ 0.
Largely due to algorithmic reasons, the transition probabilities in the literature <cit.> are polynomials or rational functions, i.e., fractions of polynomials.
Our restriction to signomials is realistic; all benchmarks from the PARAM–webpage <cit.> contain only signomial transition probabilities.
A pMDP ℳ is a Markov decision process (MDP) if the transition function yields well-defined probability distributions, i.e., 𝒫: S× Act× S→[0,1] and ∑_s'∈ S 𝒫(s,α,s')=1 for all s∈ S and α∈ Act(s).
Analogously, a Markov chain (MC) is a special class of pMC; a model is parameter-free if all probabilities are constant. Applying a valuation u to a pMDP ℳ, denoted ℳ[u], replaces each signomial f in ℳ by f[u]; we call ℳ[u] the instantiation of ℳ at u.
The application of u thus replaces each transition function entry f by the probability f[u].
A valuation u is well-defined for ℳ if the replacement yields probability distributions at all states; the resulting model ℳ[u] is an MDP or an MC.
[pMC]
Consider a variant of the Knuth–Yao model of a die <cit.>, where a six-sided die is simulated by successive coin flips.
We alternate flipping two biased coins, which result in heads with probabilities defined by the monomials p and q, respectively. Consequently, the probability for tails is given by the signomials 1-p and 1-q, respectively. The corresponding pMC is depicted in Fig. <ref>; and the instantiated MC for p = 0.4 and q = 0.7 is given in Fig. <ref>. Note that we omit actions, as the model is deterministic.
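The symbolic reachability probabilities of such a pMC can be computed by solving a linear equation system over the parameters. The following sympy sketch is our own illustration; the wiring below is one common layout of the Knuth–Yao die and may differ in details from the figure:

import sympy as sp

p, q = sp.symbols('p q', positive=True)

# Inner states s0..s6; absorbing outcomes '1'..'6'. Coin p on even levels, q on odd.
pmc = {
    's0': {'s1': p, 's2': 1 - p},
    's1': {'s3': q, 's4': 1 - q},
    's2': {'s5': q, 's6': 1 - q},
    's3': {'s1': p, '1': 1 - p},
    's4': {'2': p, '3': 1 - p},
    's5': {'4': p, '5': 1 - p},
    's6': {'s2': p, '6': 1 - p},
}

def reach(pmc, target):
    # probability of eventually reaching `target` from s0, as a function of p, q
    x = {s: sp.Symbol('x_' + s) for s in pmc}
    val = lambda t: 1 if t == target else x.get(t, 0)
    eqs = [sp.Eq(x[s], sum(pr * val(t) for t, pr in succ.items()))
           for s, succ in pmc.items()]
    sol = sp.solve(eqs, list(x.values()), dict=True)[0]
    return sp.simplify(sol[x['s0']])

pr2 = reach(pmc, '2')                                          # p**2*(1 - q)/(1 - p*q)
print(pr2.subs({p: sp.Rational(1, 2), q: sp.Rational(1, 2)}))  # 1/6: fair coins, fair die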
In order to define a probability measure and expected cost on MDPs, nondeterministic choices are resolved by so-called schedulers.
For practical reasons we restrict ourselves to memoryless schedulers; details can be found in <cit.>.
A (randomized) scheduler for an MDP ℳ is a function σ: S→Distr(Act) such that σ(s)(α)>0 implies α∈ Act(s).
The set of all schedulers over ℳ is denoted by Sched^ℳ.
Applying a scheduler to an MDP yields a so-called induced Markov chain.
Let ℳ be an MDP and σ∈Sched^ℳ a scheduler. The MC induced by ℳ and σ is ℳ^σ=(S, s_I, V, 𝒫^σ), where for all s,s'∈ S,
𝒫^σ(s,s')=∑_α∈ Act(s) σ(s)(α)·𝒫(s,α,s').
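In code, inducing the MC amounts to a weighted sum over actions; a dict-based Python sketch of ours:

def induce_mc(mdp, scheduler):
    # mdp[s][a][t] = P(s, a, t); scheduler[s][a] = sigma(s)(a)
    mc = {}
    for s, actions in mdp.items():
        mc[s] = {}
        for a, succs in actions.items():
            w = scheduler[s].get(a, 0.0)
            for t, pr in succs.items():
                mc[s][t] = mc[s].get(t, 0.0) + w * pr
    return mc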
We consider reachability properties and expected cost properties.
For an MC 𝒟 with states S, let Pr_s(◇T) denote the probability of reaching a set of target states T⊆ S from state s∈ S; Pr(◇T) denotes this probability for the initial state s_I.
We use the standard probability measure as in <cit.>.
For a threshold λ∈[0,1], the reachability property asserting that a target state is to be reached with probability at most λ is denoted φ=P_≤λ(◇T).
The property is satisfied by 𝒟, written 𝒟⊨φ, iff Pr(◇T)≤λ.
The cost of a path through MC 𝒟 until a set of goal states G⊆ S is the sum of the action costs visited along the path. The expected cost of a finite path is the product of its probability and its cost.
If G is reached with probability 1, the expected cost of reaching G is the sum of the expected costs of all paths leading to G.
An expected cost property ψ=EC_≤κ(◇G) is satisfied if the expected cost of reaching G is bounded by a threshold κ∈ℝ.
Formal definitions are given in, e.g., <cit.>.
If multiple specifications φ_1,…,φ_q are given, which are either reachability properties or expected cost properties of the aforementioned forms, we write 𝒟⊨φ_1∧⋯∧φ_q for the satisfaction of all specifications φ_1,…,φ_q by an MC 𝒟.
An MDP ℳ satisfies the specifications φ_1,…,φ_q iff for all schedulers σ∈Sched^ℳ it holds that ℳ^σ⊨φ_1∧…∧φ_q. The verification of multiple specifications is also referred to as multi-objective model checking <cit.>.
We are also interested in the so-called scheduler synthesis problem, where the aim is to find a scheduler σ such that the specifications are satisfied (although other schedulers may not satisfy them).
§ NONLINEAR PROGRAMMING FOR PMDPS
In this section we formally state a general pMDP parameter synthesis problem and describe how
it can be formulated using nonlinear programming.
§.§ Formal problem statement
Given a pMDP ℳ, specifications
φ_1,…,φ_q that are either probabilistic reachability
properties or expected cost properties, and an objective function
f: ℝ^V→ℝ over the variables V, compute a well-defined
valuation u∈ℝ^V for ℳ and a (randomized) scheduler σ∈Sched^ℳ
such that the following conditions hold:
* Feasibility:
the Markov chain ℳ^σ[u] induced by scheduler σ and
instantiated by valuation u satisfies the specifications, i.e., ℳ^σ[u]⊨φ_1 ∧…∧φ_q.
* Optimality:
the objective f is minimized.
Intuitively, we wish to compute a parameter valuation and a scheduler such
that all specifications are satisfied, and the objective is globally minimized.
We refer to a valuation–scheduler pair (u,) that satisfies
condition (<ref>), , only guarantees satisfaction of the
specifications but does not necessarily minimize the objective f, as a
feasible solution to the pMDP synthesis problem. If both
(<ref>) and (<ref>) are satisfied, the pair
is an optimal solution to the pMDP synthesis problem.
§.§ Nonlinear encoding
We now provide an NLP encoding of Problem <ref>. A general NLP over
a set of real-valued variables 𝒱 can be written as
minimize f
subject to
∀ i. 1≤ i≤ m g_i ≤ 0,
∀ j. 1≤ j≤ p h_j = 0,
where f, g_i, and h_j are arbitrary functions over 𝒱, and m and p are the
numbers of inequality and equality constraints of the program, respectively. Tools like
IPOPT <cit.> solve small instances of such problems.
Consider a
pMDP ℳ with specifications φ_1=P_≤λ(◇T) and φ_2=EC_≤κ(◇G). We will discuss how additional specifications of either type can be encoded.
The set 𝒱=V∪ W of variables of the NLP consists of
the variables V that occur in the pMDP as well as a set W of additional variables:
* {σ^s,α | s∈ S, α∈ Act(s)},
which define the randomized scheduler σ by σ(s)(α)=σ^s,α,
* {p_s | s∈ S},
where p_s is the probability of reaching the target set
T⊆ S from state s under scheduler σ, and
* {c_s | s∈ S}, where c_s is the expected cost to reach G⊆ S from s under σ.
A valuation over 𝒱 consists of a valuation u∈ℝ^V over the
pMDP variables and a valuation w∈ℝ^W over the additional variables.
minimize f
subject to
p_s_I ≤ λ,
c_s_I ≤ κ,
∀ s∈ S. ∑_α∈ Act(s) σ^s,α = 1,
∀ s∈ S ∀α∈ Act(s). 0 ≤ σ^s,α ≤ 1,
∀ s∈ S ∀α∈ Act(s). ∑_s'∈ S 𝒫(s,α,s') = 1,
∀ s,s'∈ S ∀α∈ Act(s). 0 ≤ 𝒫(s,α,s') ≤ 1,
∀ s∈ T. p_s=1,
∀ s∈ S∖ T. p_s=∑_α∈ Act(s) σ^s,α·∑_s'∈ S 𝒫(s,α,s')· p_s',
∀ s∈ G. c_s=0,
∀ s∈ S∖ G. c_s= ∑_α∈ Act(s) σ^s,α·(c(s,α) + ∑_s'∈ S 𝒫(s,α,s') · c_s').
The NLP (<ref>)–(<ref>) encodes Problem <ref> in the following way.
The objective function f in (<ref>) is any real-valued function over the variables .
The constraints (<ref>) and (<ref>) encode the
specifications φ_1 and φ_2, respectively.
The constraints (<ref>)–(<ref>)
ensure that the scheduler obtained is well-defined by requiring that the
scheduler variables at each state sum to unity.
Similarly, the constraints
(<ref>)–(<ref>) ensure that
for all states, parameters from are instantiated such that
probabilities sum up to one.
(These constraints are included if not all probabilities at a state are constant.)
The probability of reaching the target for all states in the target set is
set to one using (<ref>).
The reachability probabilities in each state
depend on the reachability of the successor states and the transition
probabilities to those states through (<ref>).
Analogously to the reachability probabilities, the cost for each goal state G⊆ S
must be zero, thereby precluding the collection of infinite cost at
absorbing states, as enforced by (<ref>).
Finally, the expected cost for all states except target states is given by
the equation (<ref>), where according to the
strategy the cost of each action is added to the expected cost of
the successors.
We can readily extend the NLP to include more specifications. If
another reachability property φ'=P_≤λ'(◇T') is given, we add the set of probability variables {p'_s |
s ∈ S} to W and duplicate the
constraints (<ref>)–(<ref>) accordingly.
To ensure satisfaction of φ', we also add the constraint
p'_s_I ≤ λ'.
The procedure is similar for additional expected cost properties.
By construction, we have the following result relating the NLP encoding and Problem <ref>.
The NLP (<ref>)–(<ref>) is sound and complete with respect to Problem <ref>.
We refer to soundness in the sense that each variable assignment that satisfies
the constraints induces a scheduler and a valuation of parameters such that a
feasible solution of the problem is induced. Moreover, any optimal solution to
the NLP induces an optimal solution of the problem. Completeness means that all
possible solutions of the problem can be encoded by this NLP; while
unsatisfiability means that no such solution exists, making the problem
infeasible.
Signomial programs. By Def. <ref> and <ref>, all constraints in the NLP consist of signomial functions.
A special class of NLPs known as signomial programs (SGPs) is of the form (<ref>)–(<ref>) where f, g_i, and h_j are signomials over 𝒱, see Def. <ref>. Therefore, we observe that the NLP (<ref>)–(<ref>) is an SGP. We will refer to the NLP as an SGP in what follows.
SGPs with equality constraints consisting of functions that are not affine are not convex in general.
In particular, the SGP (<ref>)–(<ref>) is not necessarily convex. Consider a simple pMC only having transition probabilities of the form p and 1-p, as in Example <ref>. The function in the equality
constraint (<ref>) of the corresponding SGP encoding is not affine in
parameter p and the probability variable p_s for some state s∈ S.
More generally, the equality constraints
(<ref>),
(<ref>), and
(<ref>)
involving 𝒫 are not necessarily affine, and thus the SGP may not be a convex program <cit.>.
Whereas for convex programs global optimal solutions can be
found efficiently <cit.>, such guarantees are
not given for SGPs.
However, we can efficiently obtain local optimal solutions for SGPs in our setting, as shown in the following sections.
§ CONVEXIFICATION
We investigate how to transform the SGP (<ref>)–(<ref>) into a convex program by relaxing equality constraints and a lifting of variables of the SGP.
A certain subclass of SGPs called geometric programs (GPs) can be transformed into convex programs <cit.> and solved efficiently.
A GP is an SGP of the form (<ref>)–(<ref>) where f,g_i∈Pos[𝒱] and h_j∈Mon[𝒱].
We will refer to a constraint with posynomial or monomial function as a posynomial or monomial constraint, respectively.
§.§ Transformation and relaxation of equality constraints
As discussed before, the SGP (<ref>)–(<ref>) is not convex because of the presence of non-affine equality constraints.
First observe the following transformation <cit.>:
f≤ h⟺f/h≤ 1,
for f∈Pos[𝒱] and h∈Mon[𝒱]. Note that monomials are strictly positive (Def. <ref>). This (division-)transformation of f≤ h yields a posynomial inequality constraint.
We relax all equality constraints of SGP (<ref>)–(<ref>) that are not monomials to inequalities, then we apply the division-transformation wherever possible. Constraints (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>) are transformed to
p_s_I/λ ≤ 1,
c_s_I/κ ≤ 1,
∀ s∈ S. ∑_α∈ Act(s) σ^s,α ≤ 1,
∀ s∈ S ∀α∈ Act(s). ∑_s'∈ S 𝒫(s,α,s') ≤ 1,
∀ s∈ S∖ T. (∑_α∈ Act(s) σ^s,α·∑_s'∈ S 𝒫(s,α,s')· p_s')/p_s ≤ 1,
∀ s∈ S∖ G. (∑_α∈ Act(s) σ^s,α·(c(s,α) + ∑_s'∈ S 𝒫(s,α,s') · c_s'))/c_s ≤ 1.
These constraints are not necessarily posynomial inequality constraints because (as in Def. <ref>) we allow signomial expressions in the transition probability function 𝒫. Therefore, replacing (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), and (<ref>) in the SGP with (<ref>)–(<ref>) does not by itself convert the SGP to a GP.
§.§ Convexification by lifting
The relaxed equality constraints (<ref>)–(<ref>) involving 𝒫 are signomial, rather than posynomial, because the parameters enter Problem <ref> in signomial form. Specifically, consider the relaxed equality constraint (<ref>) at s_0 in Example <ref>,
(p· p_s_1 + (1-p)· p_s_2)/p_s_0 ≤ 1.
The term (1-p)· p_s_2 is signomial in p and p_s_2. We lift by introducing a new variable p̅ = 1-p, and rewrite (<ref>) as a posynomial inequality constraint and an equality constraint in the lifted variables:
(p· p_s_1 + p̅· p_s_2)/p_s_0 ≤ 1,
p̅ = 1-p.
We relax the (non-monomial) equality constraint to p + p̅≤ 1.
More generally, we restrict the way parameters occur in 𝒫 as follows. Refer to Fig. <ref>. For every state s∈ S and every action α∈ Act(s) we require that there exists at most one state s̅∈ S such that 𝒫(s,α,s̅)∈Sig[V] and 𝒫(s,α,s')∈Pos[V] for all s'∈ S∖{s̅}. In particular, we require that
𝒫(s,α,s̅) = 1 - ∑_s'∈ S∖{s̅} 𝒫(s,α,s') .
This requirement is met by all benchmarks available at the PARAM–webpage <cit.>.
In general, we lift by introducing a new variable p̅_s,α,s̅=𝒫(s,α,s̅) for each such state s∈ S; refer to Fig. <ref>.
We denote this set of lifting variables by L. Lifting as explained above then creates a new transition probability function 𝒫' where for every s,s'∈ S and α∈ Act we have 𝒫'(s,α,s')∈Pos[V∪ L].
We call the set of constraints obtained through transformation, relaxation, and lifting of every constraint of the SGP (<ref>)–(<ref>) as shown above the convexified constraints.
Any posynomial objective subject to the convexified constraints forms by construction a GP over the pMDP parameters V, the SGP additional variables W, and the lifting variables L.
§.§ Tightening the constraints
A solution of the GP as obtained in the previous section does not have a direct relation to the original SGP (<ref>)–(<ref>).
In particular, a solution to the GP may not have the relaxed constraints satisfied with equality.
For (<ref>) and (<ref>), the induced parameter valuation and the scheduler are then not well-defined, i.e., the probabilities may not sum to one; in particular, this would make it trivial to fulfill the GP by assigning values close to zero to all scheduler variables.
We need to relate the relaxed and lifted GP to Problem <ref>. By defining a regularization function F over all parameter and scheduler variables, we ensure that the constraints are satisfied with equality, enforcing well-defined probability distributions.
F = ∑_p∈ V 1/p + ∑_p̅∈ L 1/p̅ + ∑_s∈ S,α∈ Act(s) 1/σ^s,α .
The function F is monotone in all its variables. We discard the original objective f in (<ref>) and form a GP with the regularization objective F (<ref>):
minimize F
subject to
p_s_I/λ ≤ 1,
c_s_I/κ ≤ 1,
∀ s∈ S. ∑_α∈ Act(s) σ^s,α ≤ 1,
∀ s∈ S ∀α∈ Act(s). σ^s,α ≤ 1,
∀ s∈ S ∀α∈ Act(s). ∑_s'∈ S 𝒫'(s,α,s') ≤ 1,
∀ s,s'∈ S ∀α∈ Act(s). 𝒫'(s,α,s') ≤ 1,
∀ s∈ T. p_s=1,
∀ s∈ S∖ T. (∑_α∈ Act(s) σ^s,α·∑_s'∈ S 𝒫'(s,α,s')· p_s')/p_s ≤ 1,
∀ s∈ S∖ G. (∑_α∈ Act(s) σ^s,α·(c(s,α) + ∑_s'∈ S 𝒫'(s,α,s')· c_s'))/c_s ≤ 1.
Since the objective F (<ref>) and the inequality constraints (<ref>) and (<ref>) are monotone in V, L, and the scheduler variables, each optimal solution for a feasible problem satisfies them with equality. We obtain a well-defined scheduler σ and a valuation u as in Problem 1. Note that the variables from (<ref>) are explicitly excluded from the GP by treating them as constants.
The reachability probability constraints (<ref>) and cost constraints (<ref>) need not be satisfied with equality. However, (<ref>) is equivalent to
p_s ≥ ∑_α∈ Act(s) σ^s,α·∑_s'∈ S 𝒫'(s,α,s')· p_s'
for all s∈ S∖ T.
The probability variables p_s are assigned upper bounds on the actual probability to reach the target states T under scheduler and valuation u. Put differently, the p_s variables cannot be assigned values that are lower than the actual probability; ensuring that and u induce satisfaction of the specification given by (<ref>) if the problem is feasible and and u are well-defined. An analogous reasoning applies to the expected cost computation (<ref>).
A solution consisting of a scheduler or valuation that are not well-defined occurs only if Problem <ref> itself is infeasible. Identifying that such a solution has been obtained is easy.
These facts allow us to state the main result of this section.
A solution of the GP (<ref>)–(<ref>) inducing a well-defined scheduler σ and a well-defined valuation u is a feasible solution to Problem 1.
Note that the actual probabilities induced by σ and u for the given pMDP are given by the MC ℳ^σ[u] induced by σ and instantiated by u.
Since all variables are implicitly positive in a GP, no transition probability function will be instantiated to probability zero.
The case of a scheduler variable being zero to induce the optimum can be excluded by a previous graph analysis.
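Such feasibility GPs are easy to prototype with off-the-shelf tools. The following sketch is ours, not the paper's MOSEK implementation; it uses cvxpy's geometric-programming mode on a toy pMC with transitions s_0 → target (probability p), s_0 → s_1 (probability 1-p), s_1 → target (probability q), s_1 → sink (probability 1-q), and specification threshold λ=0.8. All names below are our own:

import cvxpy as cp

lam = 0.8
p, pbar = cp.Variable(pos=True), cp.Variable(pos=True)   # p and lifted 1-p
q, qbar = cp.Variable(pos=True), cp.Variable(pos=True)   # q and lifted 1-q
ps0, ps1 = cp.Variable(pos=True), cp.Variable(pos=True)  # reachability bounds

constraints = [
    p + pbar <= 1,          # relaxed well-definedness, tight at the optimum
    q + qbar <= 1,
    q <= ps1,               # ps1 >= q (target reached with probability 1)
    p + pbar * ps1 <= ps0,  # ps0 >= p + (1-p) * ps1
    ps0 <= lam,             # reachability specification
]
# Regularization objective F: monotone, so the relaxed constraints become tight.
prob = cp.Problem(cp.Minimize(1/p + 1/pbar + 1/q + 1/qbar), constraints)
prob.solve(gp=True)
print(p.value, q.value, ps0.value)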
§ SEQUENTIAL GEOMETRIC PROGRAMMING
We showed how to efficiently obtain a feasible solution
for Problem <ref> by solving GP (<ref>)–(<ref>).
We propose a sequential convex programming trust-region method to compute a local optimum of the SGP (<ref>)–(<ref>), following <cit.>, solving a sequence of GPs. We obtain each GP by replacing signomial functions in equality constraints of the SGP (<ref>)–(<ref>) with monomial approximations of the functions.
Given a posynomial f∈Pos[V], variables V={x_1,…,x_n}, and a valuation u∈ℝ^V, a monomial approximation f̂∈Mon[V] for f near u is
f̂ = f[u]·∏_i=1^n (x_i/u(x_i))^a_i ,
where
a_i = (u(x_i)/f[u])·(∂ f/∂ x_i)[u] for all 1 ≤ i ≤ n.
Intuitively, we compute a linearization f̂ of f∈Pos[V] around a fixed valuation u.
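For instance (a worked example of ours): for f = x + y near u=(1,1), we get f[u]=2 and a_x=a_y=1/2, hence f̂ = 2· x^1/2· y^1/2, which agrees with f at u in both value and gradient.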
We enforce the fidelity of the monomial approximation f̂ of f∈Pos[V] by restricting valuations to remain within a set known as a trust region. We define the following constraints on the variables, with t>1 determining the size of the trust region:
∀ i. 1 ≤ i ≤ n (1/t)· u(x_i) ≤ x_i ≤ t· u(x_i).
For a given valuation u, we approximate the SGP (<ref>)–(<ref>) to obtain a local GP as follows.
First, we apply a lifting procedure (Section <ref>) to the SGP ensuring that all constraints consist of posynomial functions. The thus obtained posynomial inequality constraints are included in the local GP.
After replacing posynomials in every equality constraint by their monomial approximations near u, the resulting monomial equality constraints are also included.
Finally, we add trust region constraints (<ref>) for scheduler and parameter variables. The objective function is the same as for the SGP.
The optimal solution of the local GP is not necessarily a feasible solution to the SGP.
Therefore, we first normalize the scheduler and parameter values to obtain well-defined probability distributions. These normalized values are used to compute precise probabilities and expected cost using PRISM. The steps above provide a feasible solution of the SGP.
We use such approximations to obtain a sequence of feasible solutions to the SGP approaching a local optimum of the SGP.
First, we compute a feasible solution u^(0) for Problem <ref> (Section <ref>),
forming the initial point of a sequence of solutions u^(0),…, u^(N), N ∈.
The solution u^(k) for 0≤ k≤ N is obtained from a local GP defined using u^(k-1) as explained above.
The parameter t for each iteration k is determined based on its value for the previous iteration, and the ratio of f[u^(k-1)] to f[u^(k-2)], where f is the objective function in (<ref>). The iterations are stopped when | f[u^(k)] - f[u^(k-1)] | < ϵ. Intuitively, ϵ defines the required improvement on the objective value for each iteration; once there is not enough improvement the process terminates.
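A numerical sketch of this loop (ours; build_local_gp, solve_gp, normalize, and update_t are placeholders for the constructions described above, and the gradient is estimated by finite differences):

import numpy as np

def monomial_approx(f, u, h=1e-7):
    # Monomial approximation of a posynomial f near u > 0 (componentwise).
    fu, a = f(u), np.zeros_like(u)
    for i in range(len(u)):
        d = np.zeros_like(u); d[i] = h
        a[i] = u[i] / fu * (f(u + d) - f(u - d)) / (2 * h)
    return lambda x: fu * np.prod((x / u) ** a)

def scp(u0, obj, eps=1e-3, t=4.0):
    u = np.asarray(u0, dtype=float)       # feasible point from the GP above
    while True:
        u_new = normalize(solve_gp(build_local_gp(u, t)))  # placeholders
        if abs(obj(u_new) - obj(u)) < eps:
            return u_new
        t = update_t(t, obj(u_new) / obj(u))               # placeholder
        u = u_new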
§ APPLICATIONS
We discuss two applications and their restrictions for the general SGP (<ref>)–(<ref>).
Model repair.
For an MC 𝒟 and a specification φ with 𝒟⊭φ, the model repair problem <cit.> is to transform 𝒟 into 𝒟' such that 𝒟'⊨φ. The transformation involves a change of transition probabilities. Additionally, a cost function measures the change of probabilities. The natural underlying model is a pMC where parameters are added to probabilities. The cost function is minimized subject to constraints that induce satisfaction of φ. In <cit.>, the problem is given as an NLP. Heuristic <cit.> and simulation-based <cit.> methods (the latter for MDPs) were presented.
Leveraging our results, one can readily encode model repair problems for MDPs, multiple objectives, and restrictions on probability or cost changes directly as NLPs. The encoding as in <cit.> is handled by our method in Section <ref> as it involves signomial constraints. We now propose a more efficient approach, which encodes the change of probabilities using monomial functions.
Consider an MDP ℳ and specifications φ_1,…,φ_q with ℳ⊭φ_1∧…∧φ_q. For each probability 𝒫(s,α,s')=a∈ℝ that may be changed, introduce a parameter p, forming the parameter set V. We define a parametric transition probability function by 𝒫'(s,α,s')=p· a∈Mon[V]. The quadratic cost function is, for instance, f=∑_p∈ V p^2∈Pos[V].
By minimizing the sum of squares of the parameters (with some regularization), the change of probabilities is minimized.
By incorporating these modifications into SGP (<ref>)–(<ref>), our approach is directly applicable. Either we restrict the cost function f to an upper bound, and efficiently solve a feasibility problem (Section <ref>), or we compute a local minimum of the cost function (Section <ref>).
In contrast to <cit.>, our approach works for MDPs and has an efficient solution. While <cit.> uses fast simulation techniques, we can directly incorporate multiple objectives and restrictions on the results while offering an efficient numerical solution of the problem.
Parameter space partitioning.
For pMDPs,
tools like PRISM <cit.> or PROPhESY <cit.> aim at partitioning the parameter space into regions with respect to a specification.
A parameter region is given by a convex polytope defined by linear inequalities over the parameters, restricting valuations to a region. Now, for a pMDP ℳ, a region is safe regarding a specification φ if there is no valuation u inside this region and no scheduler σ with ℳ^σ[u]⊭φ. Vice versa, a region is unsafe if there is no valuation and scheduler such that the specification is satisfied.
In <cit.>, this certification is performed using SMT solving.
More efficiency is achieved by using an approximation method <cit.>.
Certifying regions to be unsafe is directly possible using our approach.
Assume pMDP , specifications φ_1,…,φ_q, and a region candidate defined by a set of linear inequalities.
We incorporate the inequalities in the NLP (<ref>)–(<ref>). If the feasibility problem (Section <ref>) has no solution, the region is unsafe. This yields the first efficient numerical method for this problem of which we are aware.
Proving that a region is safe is more involved. Given one specification φ=, we maximize the probability to reach T. If this probability is at most λ, the region is safe. For using our method from Section <ref>, one needs domain specific knowledge to show that a local optimum is a global optimum.
§ EXPERIMENTS
We implemented a prototype using the Python interfaces of the probabilistic model checker STORM <cit.> and the optimization solver MOSEK <cit.>.
All experiments were run on a 2.6 GHz machine with 32 GB RAM.
We used PRISM <cit.> to correct approximation errors as explained before.
We evaluated our approaches using mainly examples from the PARAM–webpage <cit.> and from PRISM <cit.>.
We considered several parametric instances of the Bounded Retransmission Protocol (BRP) <cit.>, NAND Multiplexing <cit.>, and the Consensus protocol (CONS) <cit.>. For BRP, we have a pMC and a pMDP version, NAND is a pMC, and CONS is a pMDP.
For obtaining feasibility solutions, we compare to the SMT solver Z3 <cit.>. For additional optimality criteria, there is no comparison to another tool possible as IPOPT <cit.> already fails for the smallest instances we consider.
Fig. <ref> states for each benchmark instance the number of states (#states) and the number of parameters (#par). We defined two specifications, consisting of an expected cost property and a reachability property. For some benchmarks, we also maximized the probability to reach a set of “good states” (*). We list the times taken by MOSEK; for optimality problems we also list the times PRISM took to compute precise probabilities or costs (Section <ref>). For feasibility problems we list the times of Z3. The timeout (TO) is 90 minutes.
We observe that both for feasibility and for additional optimality criteria we can handle most benchmarks of up to 10^5 states within the timeout, while we ran into a timeout for CONS. The number of iterations N in the sequential convex programming is less than 12 for all benchmarks with ϵ=10^-3.
As expected, simply solving feasibility problems is faster by at least one order of magnitude. Raising the number of parameters from 2 to 4 for BRP does not cause a major performance hit, contrary to existing tools. For all benchmarks except NAND, Z3 only delivered results for the smallest instances within the timeout.
To demonstrate the insensitivity of our approach to the number of parameters, we considered a pMC of rolling multiple Knuth–Yao dice with 156 states, 522 transitions and considered instances with up to 8 different parameters. The timeout is 100 seconds.
In Fig. <ref> we compare our encoding in MOSEK for this benchmark to the mere computation of a rational function using PROPhESY <cit.> and again to Z3. PROPhESY already runs into a timeout for 4 parameters[Due to the costly computation of greatest common divisors.].
Z3 needs around 15 seconds for most of the tests. Using GPs with MOSEK proves far more efficient as it needs less than one second for all instances.
In addition, we test model repair (Section <ref>) on a BRP instance with 17415 states for φ=ℙ_≤0.9(◊ T). The initial parameter instantiation violates φ. We performed model repair towards satisfaction of φ. The probability of reaching T results in 0.79 and the associated cost is 0.013. The computation time is 21.93 seconds. We compare our result to an implementation of <cit.>, where the probability of reaching T is 0.58 and the associated cost is 0.064. However, the time for the simulation-based method is only 2.4 seconds, highlighting the expected trade-off between optimality and computation times for the two methods.
Finally, we encode model repair for the small pMC from Example <ref> in IPOPT; see <cit.>. For ψ=ℙ_≤0.125(◊ T), where T represents the outcome of the die being 2, the initial instantiation induces probability 1/6. With our method, the probability of reaching T is 0.1248 and the cost is 0.0128. With IPOPT, the probability is 0.125 with cost 0.1025, showing that our result is nearly optimal.
§ CONCLUSION AND FUTURE WORK
We presented a way to use convex optimization in the field of parameter synthesis for parametric Markov models. Using our results, many NLP encodings of related problems now have a direct and efficient solution.
Future work will concern the integration of these methods into mature tools like PRISM or PROPhESY to enable large-scale benchmarking by state space reduction techniques and advanced data structures. Moreover, we will explore extensions to richer models like continuous-time Markov chains <cit.>.
|
http://arxiv.org/abs/1701.07863v2 | 20170126201352 | Antiferromagnetic textures and dynamics on the surface of a heavy metal | [
"Ricardo Zarzuela",
"Yaroslav Tserkovnyak"
] | cond-mat.mes-hall | [
"cond-mat.mes-hall"
] |
Department of Physics and Astronomy, University of California, Los Angeles, California 90095, USA
We investigate the formation and dynamics of spin textures in antiferromagnetic insulators adjacent to a heavy-metal substrate with strong spin-orbit interactions. Exchange coupling to conduction electrons engenders an effective anisotropy, Dzyaloshinskii-Moriya interactions, and a magnetoelectric effect for the Néel order, which can conspire to produce nontrivial antiferromagnetic textures. Current-driven spin transfer enabled by the heavy metal, furthermore, triggers ultrafast (THz) oscillations of the Néel order for dc currents exceeding a critical threshold, opening up the possibility of Terahertz spin-torque self-oscillators. For a commonly invoked antidamping-torque geometry, however, the instability current scales with the energy gap of the antiferromagnetic insulator and, therefore, may be challenging to reach experimentally. We propose an alternative generic geometry for inducing ultrafast autonomous antiferromagnetic dynamics.
Antiferromagnetic textures and dynamics on the surface of a heavy metal
Ricardo Zarzuela and Yaroslav Tserkovnyak
December 30, 2023
=======================================================================
Introduction.|Antiferromagnetic spin textures produce minimal stray fields, are robust against electromagnetic perturbations, and display ultrafast spin dynamics, three features that make them attractive as potential active elements in next-generation spin-transport and memory-storage devices.<cit.> Recent years have witnessed a growing interest in the inherently antiferromagnetic (spin) transport properties.<cit.> However, the Néel order is relatively hidden from electromagnetic fields and, therefore, generally not easy to drive or read out. In this regard, spin-transfer torques are well suited to trigger antiferromagnetic excitations<cit.> and may be as effective for this purpose as in ferromagnets.<cit.> It appears particularly attractive to manipulate the staggered order parameter through the spin Hall effect. In the usual antidamping-torque geometry,<cit.> however, the effective spin accumulation induced by the spin Hall effect must overcome the large gap in the antiferromagnetic spectrum (typically in the range of THz for common materials), translating into prohibitively large charge currents. Therefore, further insights concerning the antiferromagnetic equilibrium states and their spin Hall induced dynamics, in the presence of strong spin-orbit interactions, are desired for further progress.
In this Rapid Communication, we construct a phenomenological theory for antiferromagnetic insulators subjected to spin exchange and spin-orbit coupling with an adjacent heavy metal. We focus on energy terms that favor spin textures, with an eye on nontrivial topologies. Furthermore, we study the Néel order driven out of equilibrium by spin-transfer torques and find the thresholds for current-driven magnetic instabilities for several scenarios, classifying the ensuing nonlinear dynamics. Our primary interest here is in ultrafast self-oscillations of a uniform staggered order, which can be sustained by feasible charge currents (as in, e.g., the ferromagnetic counterparts).
Effective theory.|We regard the heterostructure as a quasi-two-dimensional (2D) system along the xy plane, which we take to be isotropic at the coarse-grained level, see Fig. <ref>. The reflection symmetry along the z axis is structurally broken and the time-reversal symmetry is also broken due to the existence of the Néel phase. We focus on bipartite antiferromagnetic insulators, where the two spin sublattices can be transformed into each other by a space-group symmetry of the crystal, and we restrict ourselves to smooth and slowly-varying spin textures. An effective long-wavelength theory for this class of antiferromagnets can be developed in terms of two continuum coarse-grained fields: the (staggered) Néel field l and the normalized spin density m.<cit.> These fields satisfy the nonlinear local constraints l^2≡1, l·m≡0, and the presence of a well-developed Néel order implies (at reasonable fields) |m|≪1. The corresponding (2D) Lagrangian density in the continuum limit becomes
ℒ_AFM[t;l,m] =ℒ_kin[t;l,m]-ℱ_AFM[l,m],
ℱ_AFM[l,m] =m^2/2χ-m·ℬ+ℱ_stag[l],
to quadratic order in both l and m, where ℱ_AFM[l,m] denotes the free-energy density of the antiferromagnet, χ is the (transverse) spin susceptibility, ℬ=γ sB represents the normalized magnetic field, and s is the saturated (2D) spin density.<cit.> The kinetic (Berry-phase) Lagrangian
ℒ_kin[t;l,m]=s m·(l×∂_tl)
establishes the canonical conjugacy between l and sm×l. The functional ℱ_stag[l] stands for the exchange and anisotropy contributions to the energy of the antiferromagnet. In the case of isotropic exchange and uniaxial anisotropy, we have ℱ_stag[l]=A/2∑_μ=1,2(∂_x_μl)^2-1/2K l_z^2, where A, K are the stiffness and anisotropy constants, respectively. K<0 (K>0) describes easy (hard) xy plane. Both A and χ^-1 are proportional to JS^2, with J being the microscopic exchange energy.
Phenomenologically, the exchange coupling of the Néel order to conduction electrons of the heavy-metal substrate yields the following contributions to the effective energy of the combined system:
ℱ_int[l]=-K^'/2l_z^2-L_1 l·E+L_2/2[l·∇ l_z-l_z∇·l ],
where K^', L_1, and L_2 are material-dependent phenomenological coefficients and E is the static (in-plane) electric field acting on electrons (here, in equilibrium). Notice that, according to the time-reversal symmetry, L_1 (L_2) must be an odd (even) function of the out-of-plane component l_z of the Néel order. The first two terms in Eq. (<ref>) account for an effective axial anisotropy and a magnetoelectric effect for the Néel order, respectively. The last term can arise due to structural reflection-symmetry breaking at the interface,<cit.> and describes an inhomogeneous Dzyaloshinskii-Moriya interaction.<cit.>
In the absence of the electromagnetic fields, E, B→0, the above free energy for the Néel order reproduces that of a ferromagnetic film with broken reflection symmetry with respect to the basal plane.<cit.> In particular, a spiral ground state would arise for values of the parameter L_2/√(AK_eff) exceeding the critical value 4/π, where K_eff≡ K+K^' is the effective anisotropy constant.<cit.> As a specific illustrative example, in the Supplemental Material<cit.> we complement our effective theory with microscopic results for the case of a strong three-dimensional topological insulator (TI) as a heavy (semi)metal.
Nonequilibrium dynamics.|Undamped Landau-Lifshitz dynamics of the insulating antiferromagnet are described by the Euler-Lagrange equations for the total Lagrangian ℒ_AFM-ℱ_int, subject to the local constraints l ^2≡1 and l·m≡0.<cit.> A phenomenological approach well suited to incorporate dissipation into these equations considers a Gilbert-Rayleigh function,<cit.> whose dominant term is given by 1/2sα_ijl̇_il̇_j in the low-frequency (compared to the microscopic exchange J) regime,<cit.> where α̂ denotes the Gilbert-damping tensor.<cit.> The resulting Landau-Lifshitz-Gilbert-type equations read
sl̇ =χ^-1m×l+l×ℬ+τ_l,
s(ṁ+l×α̂l̇) =δ_lℱ_eff×l+m×ℬ+τ_m,
where ℱ_eff≡ℱ_stag+ℱ_int is the effective energy and the spin-transfer torques τ_l,τ_m account for the additional, nonequilibrium electric current-induced spin transport across the interface.
As usual, we can obtain a dynamical equation for the Néel order alone by solving for m according to Eq. (<ref>) and substituting it in Eq. (<ref>). Notice that the effect of the torque τ_l is in general reduced relative to τ_m by the small parameter ħω/J (with ω denoting the characteristic frequency of the antiferromagnetic excitations), which, again,<cit.> is rooted in the smallness of the susceptibility χ∝ J^-1. In the spirit of our low-frequency long-wavelength treatment, we therefore disregard this spin-transfer torque (i.e., τ_l) in what follows.
The spin torque τ_m has two (dissipative) components: the first, so-called spin-orbit torque, is rooted phenomenologically in the spin-Hall effect.<cit.> The second is the texture-induced spin-transfer torque,<cit.> which originates in the spin mistracking of conduction electrons (of the heavy metal) propagating in proximity to the Néel texture.<cit.> According to the structural symmetries of the heterostructure, they have the form:<cit.>
τ_m=ϑ_2l×(ê_z×j)×l+ϑ_3l×(j·∇)l,
where the coupling constants ϑ_2 and ϑ_3 depend on the interplay of spin-orbit and spin-transfer physics at the interface.
Another contribution to the net transfer of spin angular momentum onto the antiferromagnetic texture is provided by the spin-pumping mechanism,<cit.> which can be absorbed into the (effective) damping tensor as an interface term, α̂_eff≡α̂+ϑ_1/s.<cit.> Here, ϑ_1 is the (dissipative) spin-pumping parameter (taken, for simplicity, to be isotropic) related to the spin-mixing conductance of the interface. The combination of Eqs. (<ref>)-(<ref>) yields the following second-order differential equation for the Néel order:<cit.>
l ×[s^2χl̈+sα̂_effl̇+δ_lℱ_eff+χ(l·ℬ)ℬ-sχl×ℬ̇
-ϑ_2(ê_z×j)×l-ϑ_3(j·∇)l ]-2sχ(l·ℬ)l̇=0,
which is the central equation and one of the main results of this Rapid Communication. It is worth mentioning that, in order to integrate this equation, it needs to be complemented with the trivial vector identity l·l̈=(1/2)(d/dt)^2l^2-l̇^2=-l̇^2, since l^2≡1.
Current-driven monodomain dynamics.|Magnetic fields may not optimally be suited to manipulate antiferromagnetic textures, as the staggered order suppresses the coarse-grained magnetization in the Néel phase. In this regard, spin-transfer torques offer an attractive alternative to trigger and control fast antiferromagnetic dynamics, of particular interest being current-induced magnetic instabilities and switching. Starting with the simplest out-of-equilibrium scenario, we consider a uniform state and, therefore, disregard the magnetic torques resulting from the magnetoelectric and inhomogeneous Dzyaloshinskii-Moriya terms in Eq. (<ref>), and the texture-induced spin-transfer torque in Eq. (<ref>). Furthermore, we suppose an easy-z-axis anisotropy (i.e., K_ eff>0), absence of an applied magnetic field, and a uniform dc charge-current density injected (without loss of generality) along the x direction, j=jê_x. Consequently, Eq. (<ref>) becomes
s^2χl̈+sα_effl̇ +(ϑ_2 j l_x-K_effl_z)ê_z-ϑ_2 j l_zê_x
+[s^2χl̇^2+K_effl_z^2]l=0,
where, for simplicity, we have taken the full damping tensor to be isotropic.
Stability of the above dynamical system can be analyzed in terms of the parameter λ=ϑ_2j/K_eff: in equilibrium (λ=0), the fixed points (FPs) are l_FP,1=±ê_z along with any xy-plane orientation of the Néel order. See Fig. <ref>(a). From a simple stability analysis,<cit.> we conclude that l_FP,1 are the only stable FPs, and, therefore, any slight out-of-plane perturbation would turn the staggered order from any initial xy-plane configuration to a normal direction. See Fig. <ref>(d). When the current is ramped up within the range 0<λ≤1/2, the set of FPs becomes discrete and reads
l_FP,i =±(√(2)λ/√(1-p_i√(1-4λ^2)),0,√(1-p_i√(1-4λ^2))/√(2)),
l_FP,3 =±ê_y,
where p_i=(-1)^i and i=1,2. Stability theory applied to this case indicates that the l_FP,1 are stable FPs whereas l_FP,2(3) are unstable. See Fig. <ref>(b). Therefore, any slight perturbation acting on l_FP,3 will drive the staggered field into one of the fixed points l_FP,1. See Fig. <ref>(e). The limiting orientation of the Néel order depends on the signs of the x and z components of the perturbation. For λ>1/2, the only FPs of Eq. (<ref>) are l_FP,3, and they are unstable. This leads to the formation of an attractive limit cycle in the xz plane. See Figs. <ref>(c),(f). We thus conclude that the instability threshold of our dynamical system towards self-oscillations is determined by the critical current j_c=K_eff/(2ϑ_2). The frequency corresponding to this limit cycle is in the range of ω=(1/2s)√(K_eff/χ), which agrees with the values obtained from the numerical solution of the full Eq. (<ref>).<cit.> It is also instructive to consider a different geometry, in which an in-plane easy-axis anisotropy K is oriented along the y axis (i.e., perpendicular to the direction of the injected current). This is a typical antidamping-torque geometry.<cit.> Neglecting K', a precessional instability arises at the critical current j_c^⋆=(α_eff/ϑ_2)√(K/χ), and the corresponding antiferromagnetic dynamics have a characteristic frequency of ω^⋆≃(1/s)√(K/(2χ)).
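The onset of the limit cycle can be exhibited numerically. In the sketch below (Python), Eq. (<ref>) is integrated in dimensionless form: measuring time in units of s√(χ/K_eff) reduces it to l̈+α̃l̇+(λ l_x-l_z)ê_z-λ l_zê_x+(l̇^2+l_z^2)l=0 with α̃=α_eff/√(χ K_eff); the numerical values of α̃ and λ are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

alpha_t, lam = 0.1, 0.6          # dimensionless damping and drive (lam > 1/2)
ex, ez = np.eye(3)[0], np.eye(3)[2]

def rhs(t, y):
    l, dl = y[:3], y[3:]
    ddl = -(alpha_t * dl + (lam * l[0] - l[2]) * ez - lam * l[2] * ex
            + (dl @ dl + l[2]**2) * l)   # constraint-preserving term
    return np.concatenate([dl, ddl])

# start near the unstable fixed point +e_y with a small tilt, l**2 = 1
l0 = np.array([0.01, 1.0, 0.01]); l0 /= np.linalg.norm(l0)
sol = solve_ivp(rhs, (0.0, 300.0), np.concatenate([l0, np.zeros(3)]),
                max_step=0.05)
# for lam > 1/2 the trajectory (sol.y[0], sol.y[2]) settles onto the
# attractive limit cycle in the xz plane; for lam < 1/2 it flows to l_FP,1
```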
The instability thresholds j_c and j_c^⋆ scale qualitatively differently with the system parameters, but both appear substantially higher than the typical ferromagnetic instability threshold<cit.> of ∼(α_eff/ϑ_2)K if α_eff≪1. Regarding j_c^⋆, we need to recognize that the quantity √(K/χ)∼√(KJ) setting the antiferromagnetic resonance frequency is typically much larger than the ferromagnetic resonance frequency, which is governed by K and unaffected by J, as the exchange (which is nonrelativistic) is generally stronger than the anisotropy (which is relativistic). Comparing j_c and j_c^⋆, we thus see that the former scales with the anisotropy, but is not reduced by the damping, as in the ferromagnetic case, while the latter scales with the exchange-enhanced resonance frequency. Note that the scaling of j_c^⋆ with the energy gap of the antiferromagnet is in agreement with the entropic argument given in Ref. <cit.> (according to which, the effective spin accumulation induced by the spin Hall effect must overcome the magnon gap). The ratio j_c/j_c^⋆=√(Kχ)/(2α_eff) is governed by two small parameters: √(K/J) and α_eff. In the desirable limit of strong spin-orbit effects, and thus large α_eff and ϑ_2, as is the case, for example, in a magnetically-doped TI,<cit.> we may have j_c<j_c^⋆.
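A minimal numerical illustration of this comparison, using only the threshold relations above and order-of-magnitude inputs that are assumptions rather than material data, reads:

```python
import numpy as np

def jc_ratio(K_over_J, alpha_eff):
    """j_c / j_c_star = sqrt(K*chi)/(2*alpha_eff); with chi ~ 1/J this is
    ~ sqrt(K/J)/(2*alpha_eff).  Both inputs are illustrative."""
    return np.sqrt(K_over_J) / (2.0 * alpha_eff)

# weak (relativistic) anisotropy K/J ~ 1e-4, strong interfacial damping ~0.05:
print(jc_ratio(1e-4, 0.05))   # ~0.1, i.e. j_c < j_c_star in this regime
```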
The phenomenological parameters of our effective theory can be evaluated in a simple diffusive model with weak spin-orbit interactions<cit.> as ϑ_1=(ħ^2/2a_M) σ g^↑↓/[hσ+2λ e^2g^↑↓coth(a_N/λ)] and ϑ_2=(θ_sħ e/a_M) λ g^↑↓tanh(a_N/2λ)/[hσ+2λ e^2g^↑↓coth(a_N/λ)], where a_M is the thickness of the antiferromagnetic layer, g^↑↓ is the spin-mixing conductance (per unit area) of the interface, and a_N, σ, λ, θ_s denote the thickness, conductivity, spin-diffusion length, and the bulk spin Hall angle of the heavy metal, respectively. These expressions coincide with the ferromagnetic case, subject to a generalized understanding of the spin-mixing conductance.<cit.> An appropriate engineering of the heterostructure (with strong spin-orbit coupling and a thin magnetic layer), together with optimizing the switching geometry, is necessary to produce feasible values of the critical currents.
Discussion and outlook.|Dzyaloshinskii-Moriya interactions in this paper are induced in the antiferromagnetic insulator by the interface.<cit.> We have already discussed how our effective theory incorporates an inhomogeneous Dzyaloshinskii-Moriya coupling in response to the proximity of a heavy-metal substrate, giving rise to magnetic superstructures.<cit.> Another possible manifestation of spin-orbit coupling in antiferromagnets is a weak ferromagnetism.<cit.> Whether it is compatible with the sublattice symmetry, {l→-l,m→m}, depends on the crystallographic structure of the antiferromagnet and its surface. In the Supplemental Material<cit.> we illustrate two examples of quasi-2D crystal lattices for which a homogeneous Dzyaloshinskii-Moriya term ℱ_DM[l,m]=d·(l×m) is allowed, where d=dê_z is the Dzyaloshinskii vector along the normal to the interface. Addition of this term to the effective energy ℱ_eff results in a redefinition of the normalized magnetic field, ℬ→ℬ+l×d, in Eq. (<ref>) for the free-energy density. It can be shown, however, that its effect on the antiferromagnetic dynamics at the level of Eq. (<ref>) can be absorbed by a small shift in the anisotropy constant: K^'→ K^'-χ d^2.
Self-oscillations, in the form of limit cycles, are sustained above the critical current j_c in the case of the easy-z-axis anisotropy. For the easy-y-axis geometry, the nature of the autonomous dynamics beyond the threshold j_c^⋆ was shown<cit.> to be sensitive to the details of the Gilbert-damping tensor, which can acquire, in particular, an anisotropic form ∝l×(l_z^2l̇+l̇_zê_z). The resultant self-oscillation frequencies belong to the THz range for typical insulating antiferromagnets. In order to realize spin-transfer THz auto-oscillators, however, appropriate materials and geometries need to be identified to yield feasible bias currents.
This work has been supported by NSF-funded MRSEC under Grant No. DMR-1420451. R.Z. thanks Fundación Ramón Areces for support through a postdoctoral fellowship within the XXVII Convocatoria de Becas para Ampliación de Estudios en el Extranjero en Ciencias de la Vida y de la Materia.
§ SUPPLEMENTAL MATERIAL
§ A THREE-DIMENSIONAL TOPOLOGICAL INSULATOR AS A HEAVY (SEMI)METAL
Strong three-dimensional topological insulators (TIs) represent an extreme case of strong spin-orbit interactions at the interface,<cit.> which relatively easily yields microscopic (model) expressions for the phenomenological parameters of the effective theory. It is worth mentioning that recent advances in electrical gating of magnetically-doped TIs<cit.> promise great tunability of these coupling coefficients (that govern the magnetic configuration).
We regard the conducting TI surface as a uniform 2D gas of electronic excitations with linearly-dispersing bands,<cit.> which couple to the antiferromagnetic texture through a local axially-symmetric (single-particle) exchange. This coupling can introduce the following time-reversal symmetry-breaking term into the effective Hamiltonian:<cit.>
ℋ̂_int=J_∥(l_xσ̂_x+l_yσ̂_y)+J_⊥l_zσ̂_z,
where J_∥,J_⊥ are the corresponding exchange constants. This term can be absorbed into the Dirac-like description of the surface states,<cit.> whose integration out<cit.> yields the following expressions for the phenomenological coefficients:
L_1=J_∥eζ_1+χ_1/4πħ v, L_2=J_∥J_⊥ζ_2+χ_2/4πħ v,
where ζ_1,ζ_2 are offsets reflecting valence-band physics far away from the Dirac point, and
χ_1(l_z,β,μ)=sinh(β J_⊥l_z)/[cosh(β J_⊥l_z)+cosh(βμ)], χ_2(l_z,β,μ)=sinh(βμ)/[cosh(β J_⊥l_z)+cosh(βμ)].
Here β, μ are the thermal factor 1/k_BT and the chemical potential of Dirac electrons, respectively, and v is the electron speed. According to the time-reversal symmetry considerations, the function ζ_1 must be odd in l_z, whereas ζ_2 is an even function (taken constant for simplicity).
§ NUMERICAL APPROACH
Solutions of Eq. (8) are calculated numerically by the following method: we first recast this second-order differential equation as a first-order dynamical system, Ẋ=F[X], with X^⊤=(l,l̇ ). Secondly, the trajectories of the Néel order in the phase space are obtained by integrating this dynamical system with the appropriate initial conditions and subject to the nonlinear constraint l^2≡1. These numerical trajectories are consistent with the fixed points, the limit cycles and the (attractive/repulsive) nearby dynamics predicted by the stability theory. The frequency ω characterizes the time evolution of the order parameter in a close vicinity of the two points l_FP,3, since its analytical expression is obtained from the linearization of the dynamical system at these (unstable) fixed points. It gives, however, a good approximation of the frequency of the self-oscillations of the Néel order beyond the current threshold j_c, which is extracted from the long-term dynamics of the numerical solutions.
§ HOMOGENEOUS DZYALOSHINSKII-MORIYA INTERACTION IN FILMS AND BILAYERS
In Fig. <ref>, we show two examples of quasi-2D crystal lattices for which a homogeneous Dzyaloshinskii-Moriya term ℱ_DM[l,m]=d·(l×m) is allowed, where d=dê_z is the Dzyaloshinskii vector along the normal to the interface. In the first example, (a), the magnitude d may be amplified by the presence of the heavy-metal substrate. In the second example, (b), no such interaction would exist without the inversion-symmetry-breaking substrate.
99
AFM-spintronics J. Basset, A. Sharma, Z. Wei, J. Bass and M. Tsoi, Proc. SPIE 7036, 703605 (2008); A. H. MacDonald and M. Tsoi, Philos. Trans. R. Soc., A 369, 3098 (2011); X. He et al., Nat. Mater. 9, 579 (2010); D. Sando, A. Barthélémy and M. Bibes, J. Phys. Condens. Matter 26, 473201 (2014).
AHE R. Shindou and N. Nagaosa, Phys. Rev. Lett. 87, 116801 (2001); H. Chen, Q. Niu and A.H. MacDonald, ibid. 112, 017205 (2014).
SSE S. Seki et al., Phys. Rev. Lett. 115, 266601 (2015); S. M. Wu et al., ibid. 116, 097204 (2016).
AFM-superfluidity E. B. Sonin, Solid State Comm. 25, 253 (1978); E. B. Sonin, Sov. Phys. JETP 47, 1091 (1978).
AFM-T S. Takei, B. I. Halperin, A. Yacoby and Y. Tserkovnyak, Phys. Rev. B 90, 094408 (2014).
STT1 A. S. Núñez, R. A. Duine, P. M. Haney and A. H. MacDonald, Phys. Rev. B 73, 214426 (2006); E. V. Gomonay and V. M. Loktev, Low Temp. Phys. 40, 17 (2014), and references therein.
STT2 T. Moriyama et al., Appl. Phys. Lett. 106, 162406 (2015); Y.-W. Oh et al., Nat. Nanotechnol. 11, 878–884 (2016).
Cheng-PRL2014 R. Cheng, J. Xiao, Q. Niu and A. Brataas, Phys. Rev. Lett. 113, 057601 (2014).
Cheng-PRL2016a R. Cheng, D. Xiao and A. Brataas, Phys. Rev. Lett. 116, 207603 (2016).
AFM A. Auerbach. Interacting Electrons and Quantum Magnetism, (Springer-Verlag, New York, 1994); S. Sachdev, Quantum Phase Transitions (Cambridge University Press, Cambridge, 1999).
Comm1
It reads s=ħ S L_t/𝒱, where L_t is the thickness of the antiferromagnetic film and S, 𝒱 are the (dimensionless) spin and volume per site, respectively.
Fert-PRL1980
A. Fert and P. M. Levy, Phys. Rev. Lett. 44, 1538 (1980).
DMI I. E. Dzyaloshinskii, Sov. Phys. JETP 5, 1259 (1957); T. Moriya, Phys. Rev. 120, 91 (1960).
Dzyaloshinskii-JETP1964 I. E. Dzyaloshinskii, Sov. Phys. JETP 19, 960 (1964).
Bogdanov-JMMM1994 A. Bogdanov and A. Hubert, J. Magn. Magn. Mater. 138, 255 (1994).
yt1
Note that constructing additional contributions to the free energy in terms of the normalized spin density m would only affect the Néel-order energetics at the subleading order in J^-1(∝χ), once m is integrated out.
SUP See Supplemental Material for a microscopic calculation of the phenomenological coefficients in the case of a TI substrate, for a brief account of the numerical methods, and for an illustration of two quasi-2D crystal lattices compatible with a homogeneous Dzyaloshinskii-Moriya interaction.
LoM E. M. Lifshitz and L. P. Pitaevskii, Statistical Physics, Part 2, 3rd ed., Course of Theoretical Physics, Vol. 9 (Pergamon, Oxford, 1980); E. M. Chudnovsky and J. Tejada. Lectures on Magnetism, (Rinton Press, New Jersey, 2006).
Gilbert-1955 T. L. Gilbert, IEEE Trans. Magn. 40(6), 3443-3449 (2004).
kimPRB15 S. K. Kim, O. Tchernyshyov, and Y. Tserkovnyak, Phys. Rev. B 92, 020402(R) (2015).
Comm4
The Gilbert tensor may generally be l-dependent and anisotropic in spin space. Its dependence on the (normalized) spin density m may, however, be disregarded, in the presence of a well-formed Néel order.
SHE M. I. Dyakonov and V. I. Perel, JETP Lett. 13, 467 (1971); J. E. Hirsch, Phys. Rev. Lett. 83, 1834 (1999).
Hals-PRL2011 K. M. D. Hals, Y. Tserkovnyak, and A. Brataas, Phys. Rev. Lett. 106, 107206 (2011).
STextT S. Zhang and Z. Li, Phys. Rev. Lett. 93, 127204 (2004); A. Thiaville, Y. Nakatani, J. Miltat, and Y. Suzuki, Europhys. Lett., 69 (6), 990 (2005); Y. Tserkovnyak, A. Brataas, and G. E. W. Bauer, J. Magn. Magn. Mater. 320, 1282 (2008).
Tserkovnyak-PRB2014 Y. Tserkovnyak and S. A. Bender, Phys. Rev. B 90, 014428 (2014).
Comm5 Reactive (i.e., nondissipative) torques of the form ∝(ê_z×j)×l and ∝(j·∇)l are forbidden due to breaking of the space-group symmetry {l→-l}. Spin-transfer torques dependent on the spin density m are disregarded due to its smallness, |m|∝ J^-1. The (symmetry-allowed) spin-orbit torque ∝(l·j)(ê_z×l) is omitted from our treatment but may in principle be present.
Tserkovnyak-RMP2005 Y. Tserkovnyak, A. Brataas and G. E. W. Bauer, Phys. Rev. Lett. 88, 117601 (2002); Y. Tserkovnyak, A. Brataas, G. E. W. Bauer, and B. I. Halperin, Rev. Mod. Phys. 77, 1375 (2005).
Comm6
The nonequilibrium description of the heterostructure would generally be complemented with the equation of motion for the charge-current density:<cit.> L∂_tj+ρ̂j=E+ϵ. Here, L is the self-inductance of the surface, ρ̂ is the 2×2 resistivity tensor, and ϵ is the motive force induced by the antiferromagnetic dynamics, which is the Onsager-reciprocal of Eq. (<ref>). Since our immediate interest in this paper is the dc current-driven dynamics of the insulating antiferromagnet, however, in what follows, we will assume that a certain (fixed) current is injected and maintained within the heavy-metal substrate.
Hirch M. W. Hirsch, S. Smale and R. L. Devaney. Differential Equations, Dynamical Systems, and an Introduction to Chaos, (Academic Press, London, 2004).
Bender-PRL2012 S. A. Bender, R. A. Duine and Y. Tserkovnyak, Phys. Rev. Lett. 108, 246601 (2012), see the Supplemental Material.
TI-SOC Y. Fan et al., Nat. Mater. 13, 699-704 (2014); A. R. Mellnik et al., Nature 511, 449-451 (2014).
Cheng-PRL2016b R. Cheng, J.-G. Zhu and D. Xiao, Phys. Rev. Lett. 117, 097202 (2016).
Tserkovnyak-PRB2002 Y. Tserkovnyak, A. Brataas, and G. E. W. Bauer, Phys. Rev. B 66, 224403 (2002).
Comm7
We focus here on the class of insulating antiferromagnets with no intrinsic Dzyaloshinskii-Moriya couplings. Bulk contributions arise, for instance, in α-Fe_2O_3 (hematite), which exhibits weak ferromagnetism above the Morin temperature T_M=263 K (with the Dzyaloshinskii vector d pointing along the trigonal axis) but no magnetic (texture) superstructure|the space group D_3d^6 is centrosymmetric. On the contrary, FeGe and MnSi have an inhomogeneous Dzyaloshinskii-Moriya coupling (the space group T^4 is noncentrosymmetric) and can naturally exhibit helicoidal<cit.> and skyrmion-lattice phases.<cit.> In these cases, interface-induced Dzyaloshinskii-Moriya interactions may be of a lesser importance.
Fan-NatNano2016 Y. Fan et al., Nat. Nanotechnol. 11 352-359 (2016).
Comm3
This description of the surface states requires the Fermi level to lie in the vicinity of the Dirac point, with a possibility of being tuned across it.
SFN1
In the same spirit of our phenomenological constructions, we focus on the coupling directly to the Néel order (whose existence is of course subject to structural symmetries or the relevance of mesoscopic effects). Similar coupling to the net spin density m, albeit more generic, results in subleading effects for the Néel order.
ExHam
I. Garate and M. Franz, Phys. Rev. Lett. 104, 146802 (2010); K. Nomura and N. Nagaosa, Phys. Rev. B 82, 161401(R) (2010); Y. Tserkovnyak and D. Loss, Phys. Rev. Lett. 108, 187201 (2012).
Tserkovnyak-PRBRC2015 Y. Tserkovnyak, D. A. Pesin, and D. Loss, Phys. Rev. B 91, 041121(R) (2015).
|
http://arxiv.org/abs/1701.07503v1 | 20170125215817 | Angular distribution of single photon superradiance in a dilute and cold atomic ensemble | [
"A. S. Kuraptsev",
"I. M. Sokolov",
"M. D. Havey"
] | quant-ph | [
"quant-ph",
"physics.atom-ph"
] |
On the basis of a quantum microscopic approach we study the dynamics of the afterglow of a dilute Gaussian atomic ensemble excited by pulsed radiation. Taking into account the
vector nature of the electromagnetic field we analyze in detail the angular and polarization distribution of single-photon superradiance of a such ensemble. The dependence of
the angular distribution of superradiance on the length of the pulse and its carrier frequency as well as on the size and the shape of the atomic clouds is studied. We show that
there is substantial dependence of the superradiant emission on the polarization and the direction of fluorescence. We observe essential peculiarities of superradiance in the region of
the forward diffraction zone and in the area of the coherent backscattering cone. We demonstrate that there are directions for which the rate of fluorescence is several times greater than the decay rate of the timed-Dicke state. We also show that single-photon superradiance can be excited by incoherent excitation, when the atomic polarization in the ensemble
is absent. Besides a quantum microscopic approach, we analyze single-photon superradiance on the basis of the theory of incoherent multiple scattering in optically thick media
(random walk theory). In the case of very short resonant and long nonresonant pulses we derive simple analytical expressions for the decay rate of single-photon superradiance
for incoherent fluorescence in an arbitrary direction.
34.50.Rk, 34.80.Qb, 42.50.Ct, 03.67.Mn
Angular distribution of single photon superradiance in a dilute and cold atomic ensemble
A. S. Kuraptsev^1 and I. M. Sokolov^1,2
^1Department of Theoretical Physics, Peter the Great St. Petersburg Polytechnic University, 195251, St.-Petersburg, Russia
^2Institute for Analytical Instrumentation, Russian Academy of Sciences, 198095, St.-Petersburg, Russia
M. D. Havey
Department of Physics, Old Dominion University, Norfolk, VA 23529
==================================================================================================================================================================================================================================================================================================================================================================
§ INTRODUCTION
Since the original work by Dicke <cit.>, the problem of superradiance, and it counterpart subradiance, have attracted great interest. By the end of the 1980's many aspects of
the physics of the superradiance problem had been studied in detail (see <cit.> and references therein). However, in the past decade, theoretical and experimental advances
have led to a rejuvenation of this field. In particular, theoretical predictions <cit.> and experimental studies have been made of single photon superradiance and the
collective Lamb shift in the X-ray regime <cit.>, in cold atoms <cit.>, and in quantum dots <cit.>.
Contrary to the "traditional" superradiance predicted by Dicke for polyatomic ensembles with sizes much smaller than the radiation wavelength, the single-photon superradiance
is observed for large-sized systems. It also differs from the well-studied superfluorescence of long and extended ensembles with a large number of initially excited atoms.
Single-photon superradiance is a linear-optics effect and it can take place for dilute atomic systems under excitation by a weak pulse of radiation. By now the main features of
these effects have been studied both theoretically and experimentally <cit.> (and references therein).
Due to the effect of coherent forward scattering, the main part of the radiation pulse absorbed by the extended atomic ensemble is scattered into directions close to that of the
exciting light pulse <cit.>. In this connection, the main attention has been given to the properties of superradiance emitted in the forward direction. In particular, it was shown that one way to obtain strong coherent emission from the ensembles is to prepare a timed-Dicke state <cit.>. However, as was shown in <cit.> and studied in a sophisticated
experiment <cit.> superradiance can be observed in directions outside the bounds of the main diffraction cone. Moreover, the fluorescence decay rate in these directions can
exceed the decay rate of the timed-Dicke state responsible for forward radiation. In several works <cit.> the time dependence of fluorescence in certain, fixed directions
was studied. The main goal of the present paper is to study theoretically the angular distribution of superradiance in more detail. Among other things, we will show that
superradiance is characterized by a strong and essentially nonmonotonic angular dependence.
In the analysis of the angular dependence of single-photon superradiance, the polarization properties of the fluorescence may play an important role. In the experiments <cit.> the total intensity of the scattered light was measured; the polarization dependence was not analyzed in detail. For radiation coherently scattered into the forward direction, the polarization of the superradiance should coincide with that of the incident light. In <cit.> this was experimentally confirmed in the case of linear polarization of the excitation pulse, so the polarization did not need to be studied separately there. For sideways scattering this is not the case: the fluorescence decays, in general, at different rates in different polarization channels. For this reason, in studying single-photon superradiance, we will additionally analyze its polarization properties.
Another goal of the present work is to study the influence of the type of excitation on single-photon superradiance. For scattering in the main diffraction lobe this influence has been studied at great length. In particular, it has been discussed how the nature of superradiance changes if the initial state differs from a timed-Dicke state. In this work
we will consider the influence of the pulse duration on the angular distribution of superradiance. Broadly speaking, single photon excitation is not necessary for superradiance.
In the paper <cit.> it was shown that the initial spatially extended atomic coherence is important. In real optical experiments excitation of the atomic ensemble is performed
by means of a light pulse. In the experiment of <cit.> it was a short pulse, with a length less than the lifetime of the excited states of the free atom. In <cit.> the pulse duration substantially exceeded the lifetime. For a large detuning of the carrier frequency of the pulse the ensemble is optically thin, and all atoms were excited with the same probability as in a timed-Dicke state. At the same time, the scattering at large angles is incoherent, and this raises the question of whether coherent excitation is necessary for the observation of single-photon superradiance beyond the main diffraction lobe. In this work we consider not only the dependence of the angular distribution of the superradiance on the length of the exciting pulse but also the possibility of observing superradiance in the case of incoherent excitation. We will show that it is indeed
Finally, we will study the dependence of the angular distribution of the decay rate on the shape of the atomic ensemble. We will consider how this distribution is modified by
the change of the aspect ratio of an elliptical sample with a Gaussian density distribution.
§ BASIC ASSUMPTIONS AND APPROACH
In our calculations of time-dependent fluorescence we will follow the theoretical approach developed previously in <cit.>. In the framework of this approach we solve the
nonstationary Schrodinger equation for the wave function ψ of the joint system consisting of all atoms and a weak electromagnetic field. A vacuum reservoir is also included
in our considerations.
We consider a disordered atomic cloud of N two-level atoms. All atoms have a ground state |g⟩ with the total angular momentum J_g = 0, an excited state |e⟩
with J_e = 1, a transition frequency ω_a, and a natural lifetime of the excited state τ_0 = 1/γ. Taking into account the experimentally relevant situation of
a cold atomic cloud we assume atoms to be motionless and located at random positions 𝐫_i, (i = 1, . . . ,N). Possible atomic displacement caused by residual atomic
motion is taken into account by averaging of calculated quantities over random spatial distribution of the atoms.
We seek the wave function ψ as an expansion in a set of eigenfunctions of the Hamiltonian H_0 of the noninteracting atoms and field. The key simplification of the
approach employed is in the restriction of the total number of states taken into account. Assuming that the exciting radiation is weak, which is typical in experiments
<cit.>, we take into account only states with no more than one photon in the field. As it was shown in <cit.>-<cit.>, this approximation allows us to describe
collective effects under scattering of weak radiation, including pulsed radiation.
Knowledge of the wave function gives us information about the properties of the atomic ensemble as well as the properties of the secondary radiation. In particular, the
intensity I_α (Ω,t ) of the light polarization component α that the atoms scatter in a unit solid angle around an arbitrary direction given by
radius-vector 𝐫 (Ω=θ,φ) can be determined as follows
I_α (Ω,t )=c/4π⟨ψ| E^(-)_α(𝐫) E^(+)_α(𝐫) |ψ⟩ r^2.
Here E^(±)_α(𝐫) are the positive and negative frequency parts of the electric field operator.
In the case of pulsed excitation, the mean value in this expression depends on time. The corresponding dependence can be found as the inverse Fourier transform (for more details
see <cit.>)
⟨ψ| E^(-)_α(𝐫) E^(+)_α(𝐫) |ψ⟩ = |∫_-∞^∞ħexp(-iω t)dω/2π∑_e,e^'Σ_α e(ω )R_ee^'(ω )Λ_e^'(ω ) |^2.
Here the vector Λ_e(ω ) describes excitation of different states of different atoms by external pulsed radiation
Λ_e(ω)=-𝐝_e;g𝐄(ω)/ħ=
-𝐮𝐝_e;g/ħE_0(ω)exp(i𝐤𝐫_𝐞),
In this equation 𝐝_e;g is the dipole matrix element for the transition from the ground g to the excited e state of the atom, E_0(ω) is a Fourier
amplitude of the probe radiation, which we assume to be a plane wave, 𝐤 and 𝐮 are its wave vector and unit polarization vector, 𝐫_e is the
radius-vector of the atom e.
The matrix R_ee^'(ω ) is the resolvent of the considered system projected onto the states with a single atomic excitation
R_ee^'(ω )=[ (ω -ω _e)δ _ee^'-Σ _ee^'(ω)] ^-1.
In this work we determine it numerically on the basis of the known expression for the matrix Σ _ee^'(ω). Matrix elements Σ _ee^'(ω )
for e and e^' corresponding to different atoms describe excitation exchange between these atoms
Σ _ee^'(ω )=∑_μ ,ν𝐝_e_a;g_a^μ𝐝_g_b;e_b^ν/ħ r^3[ δ _μν( 1-iω _ar/c-( ω _ar/c) ^2) -𝐫_μ𝐫_ν/r^2( 3-3iω _ar/c-( ω _ar/c) ^2) ]exp( iω _ar/c).
This expression is written assuming that atoms b and a are excited in the states ψ _e^' and ψ _e, respectively; we also used the pole approximation (Σ _ee^'(ω )=Σ _ee^'(ω_a ), see <cit.>). In (<ref>), 𝐫_μ are the projections of the vector 𝐫=𝐫_a-𝐫_b on the axes of the chosen reference frame and r=|𝐫| is the spacing between atoms a and b.
If e and e^' correspond to excited states of the same atom, then Σ _ee^'(ω ) differs from zero only for e=e^' (i.e., m=m^', where m is the magnetic quantum number of the atomic excited state). In this case
Σ _ee(ω )=-iγ/2.
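For completeness, a minimal numerical sketch of this construction is given below (Python; γ=1, k=ω_a/c=1, distances in units of λ̄), using a Cartesian basis for the J_e=1 sublevels and assuming γ=4d^2k^3/3ħ for the relation between the dipole matrix element and the natural width.

```python
import numpy as np

def sigma_matrix(positions, gamma=1.0, k=1.0):
    """3N x 3N matrix Sigma_{ee'} in a Cartesian basis for the
    excited-state sublevels (row/column index 3*a + mu)."""
    N = len(positions)
    S = np.zeros((3 * N, 3 * N), dtype=complex)
    for a in range(N):
        S[3*a:3*a+3, 3*a:3*a+3] = -0.5j * gamma * np.eye(3)  # single atom
        for b in range(a + 1, N):
            rv = positions[a] - positions[b]
            r = np.linalg.norm(rv)
            n = np.outer(rv, rv) / r**2        # dyadic r_mu r_nu / r^2
            x = k * r
            blk = (0.75 * gamma * np.exp(1j * x) / x**3
                   * (np.eye(3) * (1.0 - 1j*x - x**2)
                      - n * (3.0 - 3.0j*x - x**2)))
            S[3*a:3*a+3, 3*b:3*b+3] = blk
            S[3*b:3*b+3, 3*a:3*a+3] = blk      # symmetric under a <-> b
    return S

def resolvent(S, delta):
    """Resolvent matrix at detuning delta = omega - omega_a."""
    return np.linalg.inv(delta * np.eye(S.shape[0]) - S)
```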
The matrix Σ_α e(ω ) in (<ref>) describes light propagation from an atom excited in the state e to the photodetector. In the rotating wave
approximation it is (see <cit.>)
Σ_α e(ω)=-𝐮'_α^∗𝐝_g;e/ħ r ( ω/c) ^2exp( iω|𝐫-𝐫_e|/c) ≈ -𝐮'_α^∗𝐝_g;e/ħ r ( ω/c) ^2exp( iω r /c-i𝐤'𝐫_e).
Here 𝐮'_α^∗ is a unit polarization vector of the scattered wave and 𝐤' is its wave vector.
Substituting (<ref>) and (<ref>) into (<ref>), after some simplifications we have
I_α(Ω,t )=c/4πħ^2|∫_-∞^∞E_0(ω)k^2exp(-iω t)dω/2π∑_e,e^'(𝐮^'∗𝐝_g;e) R_ee^'(ω ) (𝐮𝐝_e^';g) exp( i(𝐤𝐫_e^'-𝐤^'𝐫_e)) |^2.
The total intensity I(Ω,t) can be obtained as a sum of (<ref>) over two orthogonal polarizations α.
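As an illustration of how this expression can be evaluated, the sketch below computes the (unaveraged) intensity for a single random configuration, reusing sigma_matrix() from the previous fragment; for a rectangular pulse occupying t∈[-τ_L,0] with carrier frequency ω_L, the Fourier amplitude is E_0(ω)∝τ_L exp[-i(ω-ω_L)τ_L/2] sin[(ω-ω_L)τ_L/2]/[(ω-ω_L)τ_L/2]. The frequency grid, cloud size, and atom number are illustrative choices, not the parameters of the figures below.

```python
import numpy as np
rng = np.random.default_rng(1)

# Gaussian cloud, R = L = 10 (units of lambda-bar), gamma = k = 1
N, R = 50, 10.0
pos = rng.normal(scale=R, size=(N, 3))
S = sigma_matrix(pos)                          # from the previous sketch

k_in = np.array([0.0, 0.0, 1.0])               # incident wave along z
u_in = np.array([1.0, 1j, 0.0]) / np.sqrt(2)   # circular polarization
k_out, u_out = k_in, u_in                      # forward, same helicity

tau_L, w_L = 0.1, 0.0                          # short resonant pulse
grid = np.linspace(-40.0, 40.0, 2001)          # detunings (units of gamma)
dw = grid[1] - grid[0]

v_in = np.kron(np.exp(1j * pos @ k_in), u_in)             # exp(i k.r_e')
v_out = np.kron(np.exp(-1j * pos @ k_out), np.conj(u_out))  # detection side

A = np.empty(grid.size, dtype=complex)
for j, w in enumerate(grid):
    E0 = tau_L * np.exp(-0.5j * (w - w_L) * tau_L) \
         * np.sinc((w - w_L) * tau_L / (2.0 * np.pi))
    A[j] = E0 * (v_out @ np.linalg.solve(w * np.eye(3*N) - S, v_in))

times = np.linspace(0.0, 5.0, 200)
I = np.array([abs(np.sum(A * np.exp(-1j * grid * t)) * dw / (2*np.pi))**2
              for t in times])                 # intensity, arbitrary units
```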
Note that approaches similar to those described in this paper are used to analyze the atomic decay or dynamics of fluorescence in several works <cit.>-<cit.>. In the
main part of the mentioned references, the scalar approximation was used. It is known that for the dilute clouds we are interested in here this approach is quite appropriate for the description of a whole series of properties <cit.>-<cit.> (if one takes into account that it underestimates the optical thickness; see below). However, in the present work we are going to study the polarization dependence of the fluorescence, so we have to avoid this simplification.
In the next section, we will use relation (<ref>) to analyze temporal, polarization and angular properties of the scattered light.
§ RESULTS AND DISCUSSION
In the framework of our approach we can analyze atomic ensembles with arbitrary shape and spatial distribution of atoms. In the present work we will consider axially symmetric
Gaussian clouds having an average density distribution given by
n(𝐫)=n_0 exp(-z^2/2L^2-x^2+y^2/2R^2).
The incident light is a plane wave propagating in the z direction, except for the case when we consider incoherent excitation; in the latter case we consider quasi-isotropic irradiation from all directions. For illustrative purposes, we will restrict our consideration to temporally rectangular pulses having a central frequency ω_L. The length
of the pulse is τ_L. We will assume that the zero-time reference t = 0 corresponds to the end of the exciting pulse. In all calculations the incident light is left-handed
circularly polarized.
In the following we will use Eq. (<ref>) averaged over the ensemble of possible atomic configurations to study the average intensity of the time dependent fluorescence
⟨ I_α(Ω,t )⟩. Collective effects not only accelerate the fluorescence in some directions but also modify the functional form of ⟨
I_α(Ω,t )⟩ making it nonexponential. To analyze the peculiarities of the time dependence we introduce a current decay rate as follows
Γ_α(Ω,t)=-∂ ln⟨ I_α(Ω,t )⟩/∂ t.
This rate is a function of time t. For the total fluorescence without polarization analysis we will use a similar relation but with summed intensity ⟨
I(Ω,t )⟩ =∑_α⟨ I_α(Ω,t )⟩.
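In a numerical implementation, once the configuration-averaged intensity has been sampled on a time grid (e.g., by averaging the output of the sketch of the previous section over many random configurations, here stored in a hypothetical array I_avg), this rate can be extracted by logarithmic differentiation:

```python
# times, I_avg: time grid and configuration-averaged intensity samples
Gamma = -np.gradient(np.log(I_avg), times)   # current decay rate defined above
```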
§.§ Angular distribution of scattered light
In Fig. 1 we show the angular distribution of light scattered in different polarization channels. The calculation is performed for a spherically symmetric atomic ensemble. The
radius of the Gaussian distribution is R=L=25. Hereafter in this paper we use as a unit of length, where = λ/2π. The peak
density is n_0=0.005. Fig. 1a corresponds to a time equal to t=τ_0 after the exciting pulse is switched off. It demonstrates very different behaviors for different
polarizations of the scattered light. In the helicity preserving channel (HH) we see a typical diffraction picture. There is large main diffraction peak. Two higher order
peaks are also well distinguished, although because of a smooth decrease in the concentration at the edges of the Gaussian cloud, these peaks are not as distinct as for
diffraction from objects with sharp boundaries. The scattering into the back half-sphere is suppressed in this polarization channel. For the helicity-nonpreserving (H H)
polarization channel the main part of the radiation is scattered into the backward direction. The intensity is mainly determined by single scattering from the boundary region.
For the HH channel single scattering in the exact backward direction is absent because of selection rules for atomic electric dipole transitions.
The angular distribution of fluorescence changes with time. In Fig. 1b this effect is shown for t=5τ_0. For this time, the main contribution to the fluorescence is
determined by multiply scattered light. The difference between polarization channels becomes less evident. Further, the angular distribution becomes more spherically symmetric
in each channel. However even at this time interval we see some traces of a diffraction picture. In both channels we see also the cone-shaped feature associated with coherent
backscattering (see for example reviews <cit.> and references therein). The enhancement factor for the helicity preserving channel is close to two which is typical for a
0-1 transition. For another channel it is much less because of the single scattering contribution to the background <cit.>.
We calculated angular dependencies like those shown in Fig. 1 for different instants of time and thus determined the current decay rate (<ref>) for fluorescence in any direction and for any polarization channel. Consider first, however, the decay rate for the total intensity, as was done in the experiments <cit.>. In Fig. 2 we show the angular dependence of Γ(Ω,t) averaged over several time intervals Δ t. For a clearer demonstration we displaced the graphs along the abscissa.
Fig. 2 demonstrates a strong angular dependence, especially near the forward and backward directions. Such a dependence takes place for all considered time intervals. For short times after the excitation pulse is switched off, superradiance is observed for radiation emitted in any direction (solid line). At the very beginning of the fluorescence, the sideways scattering is characterized by a faster decay than the forward one. The maximal decay rate corresponds to an angle which depends on the size of the system (see below). After a certain time, the decay rate changes sign within definite angular intervals. This means that for the corresponding time and angular intervals the intensity of the fluorescence increases. Here we see a manifestation of oscillations in the afterglow of the atomic ensemble connected with quantum beats caused by the interference of light scattered through different collective states (see <cit.>).
With time, the spatial distribution of excited atoms in the cloud changes, and the transverse distribution of the emittance of the cloud changes as well. This causes a modification of the diffraction image. In Fig. 2b we show the angular dependence of the decay rate in the region of the diffraction pattern on an enlarged scale. One can see that the diffraction picture transforms with time. In particular, the separation between pairs of diffraction peaks changes. In our view, the transformation of the diffraction pattern is responsible for the unusual angular dependence shown in Fig. 2. The intensity in a given direction changes not only because of the decay of collective states but also because of the alteration in the direction of emission. The maximal decay rate is observed in the directions of the diffraction minima.
The difference in the decay rates (<ref>) for different directions of fluorescence is also shown in Fig. 3. Just after the end of the exciting pulse (t=0), Γ(Ω,0)>γ for any direction. As time passes, Γ(Ω,t) changes. The afterglow in the forward direction maintains a high value for the longest period of time. For t up to t=4τ_0 its value practically coincides with Γ(0,0). It means that for this period of time the contribution of the timed-Dicke state to the forward emission is dominant. The decay rate of radiation into the backward half-sphere (θ>π/2) decreases monotonically and, under the considered conditions, it loses its superradiant character for t≥τ_0. Fluorescence into the diffraction minima and higher-order maxima demonstrates nonmonotonic, oscillatory behavior. The two curves for θ=0.065 and θ=0.075 show that the decay rate can increase several times during the afterglow as well as drop to relatively large negative values.
The dynamics of the fluorescence in different polarization channels (see Fig. 4) is even more complicated than that of the total light intensity. This is connected with the absence of single scattering in these channels in certain specific directions. That is why the light intensity increases just after the pulse ends in the forward direction for the H⊥H channel and in the backward direction for H∥H. In these directions the decay rate (<ref>) is negative and relatively large in absolute value. With time, the contribution of higher-order scattering increases and we observe the usual decrease of the decay rate.
§.§ Dependence on type of excitation
§.§.§ Dependence on the excitation duration
In many theoretical papers devoted to single-photon superradiance, the decay of the timed-Dicke state is discussed. However, the excitation of a real physical system into such a state is a separate and difficult problem. The authors of <cit.> found an original way to do so. In more traditional experiments with cold gases the excitation is performed by means of pulsed radiation. The length of the pulse strongly influences the type of prepared atomic states and the subsequent fluorescence. In the case of a short pulse, many different collective states in a wide spectral region are excited, and the type of decay may differ essentially from the decay of a timed-Dicke state. In the case of a very long pulse we have a quasi-steady-state distribution of atomic excitation which differs from that of the Dicke state. For resonant excitation this is connected with absorption, and for nonresonant excitation with dispersion <cit.>. The spatial distribution of the phases of the atomic oscillators is determined not by the wave number of the exciting light but by the wavelength of light in the medium.
The dependence of the nature of the decay on the type of excitation for forward scattering has been discussed earlier (see, for example, <cit.>). We focus our attention on its influence on the angular distribution of single-photon superradiance. The corresponding dependence for the most interesting angular region is shown in Fig. 5. We demonstrate the angular dependence of the decay rate both for very short and very long pulses. For the exact forward direction we see the weakest changes when the length of the pulse varies. This is connected with the fact that the short-lived collective states responsible for forward scattering have a larger width and are effectively excited independently of the pulse length (the spectral width of the pulse). At the same time, as is seen from Fig. 6, the length of the pulse influences the duration of the process of superradiance. As τ_L increases, this duration grows shorter. This arises through the increasing role of subradiant collective states, which are practically unshifted in frequency. These states have a relatively small width and are weakly excited by a short pulse.
For the directions in which we observe a sharp angular dependence, the length of the pulse influences the very beginning of the fluorescence. Because of the spatial inhomogeneity of the Gaussian clouds, the length of the resonant pulse affects not only the longitudinal (along the light propagation) but also the transverse distribution of the excited atoms. This, in turn, causes a transformation of the diffraction pattern, as illustrated in Fig. 5. Besides changes in the width of the diffraction pattern, we see a qualitative difference in the angular dependence. For τ_L=0.1τ_0 there are only relatively small maxima, whereas already for τ_L=5τ_0 the maxima become sharper, their amplitudes decrease essentially with angle, and in some regions the decay rate changes sign. For further increase of τ_L we observe only small quantitative changes; a saturation-type effect takes place. Increasing τ_L from τ_L=5τ_0 up to τ_L=100τ_0 practically changes neither the intensity of the fluorescence nor its rate. This means that for the considered cloud, already for τ_L=5τ_0, a quasi-steady-state regime is realized.
The influence of the excitation duration on the fluorescence is also demonstrated in Fig. 7. Here we show the time dependence of the decay rate of the fluorescence in different directions for two pulse lengths, τ_L=0.1τ_0 and τ_L=100τ_0. The qualitative difference between short (a) and long (b) excitation is that for a long pulse there are directions for which the intensity begins to increase immediately after the end of the pulse. Fig. 7 also shows that the excitation duration changes the nature of the quantum beats.
§.§.§ Single-photon superradiance for incoherent excitation
The features of the decay of a timed-Dicke state are essentially connected with the phase matching of the different atomic oscillators in the ensemble. Superradiance beyond the diffraction zone is caused by incoherent scattering. In this connection the question arises whether it is necessary to use coherent excitation to observe sideward superradiance, or whether it is possible to observe this superradiance under incoherent excitation, in the complete absence of phase correlations. We performed a calculation of the atomic fluorescence assuming that different atoms in the Gaussian cloud are excited independently. In such a case the phases of different atoms are random and the average atomic polarization is absent.
Analyzing the time dependence of the fluorescence we calculated the current decay rate for an exciting pulse of different lengths. The results are shown in Fig. 8.
The calculation was made for the case when the carrier frequency of the pulse coincides with the resonant frequency of the free atoms. It is seen that for short pulses we observe a decay with a rate that exceeds the decay rate of the free atom, i.e., we observe superradiance. For long resonant pulses the superradiance is absent. However, in the paper <cit.> it was shown that long coherent nonresonant excitation causes superradiance of the fluorescence in sideways directions. For this reason we studied how the decay rate depends on the carrier frequency of the radiation in the case of incoherent excitation.
In the following figure we show the corresponding dependence for two pulse lengths. The calculation is made for a spherically symmetric Gaussian cloud with radius R=25
and peak density n=0.005. Because of the (average) spherical symmetry of the cloud and of the excitation, the secondary radiation of the atomic ensemble is spherically symmetric.
For the short pulse, τ_L=0.1τ_0, because of its large spectral width the decay rate practically does not depend on the carrier frequency. On the contrary, for the long pulse
τ_L=100τ_0 we see an essential dependence: as in the case of coherent excitation <cit.>, increasing the detuning increases the decay rate up to some
constant magnitude, which depends on the size and density of the cloud and which exceeds γ. The typical spectral region over which the rate changes appreciably is about the
natural width of the free-atom transition. In this spectral region the optical depth of the cloud is large, and for long pulses a quasi-steady-state regime of atomic excitation
is established. The atomic excitation under such conditions is determined not only by the external radiation but also by trapped light <cit.>. By the end of the exciting pulse,
the fluorescence is determined by photons scattered a different number of times inside the medium; for optically dense media, high-order scattering plays an important role.
After the end of the pulse we still see contributions of different scattering orders, but only single scattering is responsible for superradiance. The contributions of higher
orders decay much more slowly. That is why we do not see superradiance for long resonant pulse excitation. For nonresonant light the optical thickness is small, single
scattering in sideward directions gives the main contribution, and superradiance can be seen.
§.§ Single-photon superradiance for ensembles of different sizes
In this section we consider the dependence of the angular distribution of superradiance on the size of the atomic ensemble. We analyze the dependence on both the longitudinal
and the transverse sizes.
Let us first consider how the angular distribution of the decay rate changes with the length L of the Gaussian cloud for a fixed transverse radius R. Results of the
corresponding calculations for several L and R=20 are shown in Fig. 10. The density in the center of the atomic ensemble is n=0.002. The decay rate increases
with L for all directions, but the specific dependence is different for different angles θ.
For the forward direction we have
Γ(θ=0,t→ 0)=γ(1+b_0z/8),
where b_0z=√(2π)σ_0n_0L is the maximal resonant optical thickness of the Gaussian cloud along the z direction. This expression coincides with results obtained earlier
in <cit.> if we take into account that the authors of <cit.> considered a scalar model of the radiation, which underestimates the optical depth of the cloud by a factor
of 1.5.
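As a rough numerical illustration of this formula, one may evaluate it directly (a minimal sketch in Python, assuming the dimensionless units used throughout: lengths in units of k^-1, densities in units of k^3, so that σ_0=6π for the J=0↔J=1 transition; the values of n_0 and L are purely illustrative):

import numpy as np

gamma = 1.0                                 # natural decay rate (our time unit)
sigma0 = 6 * np.pi                          # resonant cross section, J=0 <-> J=1, k=1
n0, L = 0.002, 60.0                         # illustrative peak density and cloud length
b0z = np.sqrt(2 * np.pi) * sigma0 * n0 * L  # maximal resonant thickness along z
print(gamma * (1 + b0z / 8))                # initial decay rate in the forward direction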
Scattering in the near-backward direction requires special attention: we see that for oblong clouds the decay rate in this direction (θ=π) even exceeds that of the
timed-Dicke state (θ=0). Note that the decay rate of the fluorescence at θ=π/2 also increases, in spite of the fact that the transverse size is fixed.
Such an angular dependence, as well as many important regularities of single-photon superradiance, can be understood in the framework of a random walk approach, without the
analysis of the collective states of a polyatomic ensemble usually employed in this context. The random walk approach is very effective for the description of incoherent
multiple scattering in optically thick but dilute media (see, for example, <cit.>-<cit.>). Light transport in a dilute medium is generally a diffusion-type process, which in a
semiclassical picture can be visualized as a forwardly propagating wave randomly scattered by medium inhomogeneities. In an optically dense sample this process generates a
zigzag-type path consisting of either macroscopically or mesoscopically scaled segments of forwardly propagating waves. The forwardly propagating incoming, secondary and multiply
scattered waves can be expressed via a retarded-type Green's propagation function, which is completely described by the macroscopic susceptibility tensor. The incoherent
scattering events, which happen randomly in the medium, can be simulated probabilistically and properly described with the scattering-theory formalism.
In the practically important cases when sideward superradiance is observed, i.e., for short resonant or long nonresonant pulses, the time dependence of the incoherent
fluorescence just after the end of the exciting pulse can be described by taking into account only single incoherent scattering. The single-scattering approximation is valid
for a not too large average optical depth b_0 of the cloud: b_0τ_0/τ_L≲ 1 (for short pulses) or b_0τ_0Δ_L≲ 1 (for a long
nonresonant pulse). In such cases the intensity I^s_α(Ω,t) can be calculated as follows
I^s_α(Ω,t) = ∫ c n(𝐫_a)/(4πħ^2) d^3r_a | ∫_-∞^∞ E_0(ω) exp(-iω t) (dω/2π) k^2 χ(𝐫,𝐫_a,ω) ∑_e (𝐮^'∗·𝐝_g;e)(𝐮·𝐝_e;g)/(ω-ω_a+iγ/2) χ(𝐫_a,𝐫_0,ω) |^2.
Here the function χ(𝐫_a,𝐫_0,ω) describes propagation of light from the source to the point 𝐫_a where a single incoherent scattering event
takes place. The function χ(𝐫,𝐫_a,ω) describes propagation of a secondary photon toward the photodetector. In the isotropic medium these functions
are determined as follows
χ(𝐫_2,𝐫_1,ω) = exp( -(i b_0(𝐫_2,𝐫_1)/2) (γ/2)/(ω-ω_a+iγ/2) ) ,
where the resonant optical thickness of the inhomogeneous cloud between the points 𝐫_1 and 𝐫_2, for the considered case of a J=0↔ J=1 transition, is
b_0(𝐫_2,𝐫_1)=(6π/k^2)∫_𝐫_1^𝐫_2n(𝐫)ds.
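For a Gaussian cloud this optical thickness can be evaluated by direct quadrature along the ray; a small sketch (same dimensionless conventions; the density profile and the endpoints are illustrative, not taken from the figures) could look like:

import numpy as np

def b0(r1, r2, n, npts=400):
    # resonant optical thickness (6*pi/k^2) * integral of n along r1 -> r2, with k=1
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    seg = r2 - r1
    ts = (np.arange(npts) + 0.5) / npts     # midpoint rule on the segment
    vals = [n(r1 + t * seg) for t in ts]
    return 6 * np.pi * np.linalg.norm(seg) * np.mean(vals)

R, n_peak = 25.0, 0.005                     # cf. the spherical cloud considered above
n = lambda p: n_peak * np.exp(-np.dot(p, p) / (2 * R**2))
print(b0([-100.0, 0.0, 0.0], [100.0, 0.0, 0.0], n))   # depth through the center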
Expression (<ref>) can be used to calculate the decay rate in all directions except in the zones of backward and forward scattering. For forward scattering the main
contribution comes from the coherent component of the scattered light, while for the backward direction one of the polarization components is absent for single scattering, so
scattering of higher order should be taken into account. Equation (<ref>) is also not valid for clouds with a large aspect ratio; in such a case diffraction effects play an
essential role <cit.> and the propagation function χ cannot be described by Eq. (<ref>).
The integral over the frequency ω in (<ref>) can be calculated by means of the residue theorem. Restricting ourselves to the typical experimental situation without
polarization analysis, and taking into account that for a rectangular pulse and t>τ_L only the pole ω=ω_a-iγ/2 is important, we have
Γ(Ω,t→ 0)=γ(1+⟨b_0(𝐫)⟩/2)=
γ(1+ (b_0z/8)(1+ √(2)R/√(R^2+L^2+(R^2-L^2)cos(2θ)))).
Here b_0(𝐫) is the total optical depth along the path of a resonant light ray coming from the source and incoherently scattered at the point 𝐫 toward a detector
located along the direction θ; ⟨b_0(𝐫)⟩ is its value averaged over all atoms in the cloud.
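To give a feel for the geometry dependence of this closed-form rate, one can tabulate it for several aspect ratios (a sketch; the values of b_0z, R and L below are placeholders rather than the parameters of the figures):

import numpy as np

gamma, b0z, R = 1.0, 2.0, 20.0              # illustrative parameters
theta = np.linspace(0.1, np.pi - 0.1, 7)    # keep away from the forward/backward zones
for L in (20.0, 40.0, 80.0):
    denom = np.sqrt(R**2 + L**2 + (R**2 - L**2) * np.cos(2 * theta))
    rate = gamma * (1 + (b0z / 8) * (1 + np.sqrt(2) * R / denom))
    print(f"L={L:5.1f}:", np.round(rate, 3))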
In Fig. 11 we demonstrate the applicability of Eq. (<ref>) for the description of the angular distribution of the decay rate of single-photon superradiance. In this figure we
compare the results of the quantum microscopic approach with the approximate calculation of Γ(Ω,t→ 0) on the basis of Eq. (<ref>). It is clear that for a
very small time interval, Δ t=(0-0.01)τ_0, we have good quantitative agreement. For bigger intervals, Δ t=(0-0.1)τ_0, some quantitative discrepancy caused by
higher-order scattering appears, but the angular dependence of Γ(Ω,t→ 0) is still reproduced by Eq. (<ref>) qualitatively quite well.
Note that in much the same way as we get Eq. (<ref>), we can obtain expression (<ref>) for forward scattering if we consider only the coherent component of transmitted
light.
The good agreement between the microscopic approach and the single-scattering random walk approximation allows us to give a simple explanation of sideward superradiance in the
considered cases. The calculation based on Eq. (<ref>) shows that, independently of the carrier frequency of the pulse, the properties of the secondary radiation after single
scattering are determined by the optical depth of the cloud for quasi-resonant radiation. Physically this is connected with the well-known fact that, without external driving,
any vibrating system oscillates at its eigenfrequencies; for dilute media these frequencies are close to the free-atom resonant frequency. The propagation of such quasi-resonant
radiation is accompanied by a spectral transformation. For a not very large optical depth this propagation leads to a broadening of the spectrum and consequently to an
acceleration of the fluorescence. For large optical depths the distortion of the spectrum can be more substantial; in such a case higher-order scattering should be taken into
consideration.
§ CONCLUSIONS
In this paper we analyzed the time-dependent fluorescence of dilute Gaussian clouds of cold atoms excited by a weak quasi-resonant light pulse. The calculation was performed on the basis of the quantum microscopic approach. Solving the nonstationary Schrödinger equation for the joint system consisting of atoms and a weak electromagnetic field, we
calculated the angular distribution and polarization properties of the fluorescence.
We focused our attention on the initial stage of the fluorescence, where superradiance was expected. Calculating the transformation of the angular distribution of the afterglow of the ensemble with time, we observed that for the total emission without polarization analysis superradiance takes place in any direction if the length of the pulse is less than or comparable with the
natural lifetime of the atomic excited states. Besides that, there is a substantial dependence of the superradiance on the direction of fluorescence, especially in the region of the diffraction pattern and in the angular area of the coherent backscattering cone. The maximal decay rate is observed not in the forward direction but at some angle determined
by the transverse size of the cloud; this maximal value is several times larger than the decay rate of the timed-Dicke state. The time-dependent fluorescence in separate polarization channels is more complicated. For a short exciting pulse there are directions where the corresponding polarization component does not decrease but initially increases:
for the helicity-preserving channel this takes place for directions close to the backward direction, while for the non-preserving channels we see an increase of intensity in the forward direction.
For long coherent pulses the nature of the fluorescence decay depends essentially on the frequency, as was predicted in <cit.>. For resonant radiation superradiance is observed only in the forward direction. Moreover, there are directions near the main diffraction maximum where the total intensity, summed over two orthogonal polarizations, increases just
after the end of the exciting pulse. As the carrier frequency shifts from the exact atomic resonance, the decay rate in sideward directions increases, and for some detunings superradiance takes place (see also <cit.>).
We repeated the analysis of single-photon superradiance for incoherent excitation and found that superradiance can be observed in this case as well, i.e., when the atomic polarization in the ensemble is absent. It can be excited either by a short pulse or by a long nonresonant one.
We studied the dependence of the angular distribution of superradiance on the size and shape of the atomic ensemble. Besides the sharp features connected with the diffraction pattern and the coherent backscattering cone, we observed a noticeable transformation of this dependence caused by changes in the aspect ratio of the cloud. The decay rate is
determined by the average optical depth of the cloud for a singly scattered photon.
Besides the quantum microscopic approach, we analyzed single-photon superradiance on the basis of random walk theory. We showed that for a not very large optical depth of the cloud, and in the case of very short resonant and long nonresonant pulses, the time dependence of the incoherent fluorescence just after the end of the pulse can be described by
taking into account only single incoherent scattering. In this case we derived a simple analytical expression for the decay rate of single-photon superradiance in an arbitrary direction.
This expression was obtained under the assumption of motionless atoms and for a very simple atomic level structure. However, it is quite clear that the random walk approach can be applied to more general cases, for example to moving atoms with a hyperfine level structure; scattering of any order can also be taken into account. In our opinion this
approach would also be useful in the presence of a control field. Such a field changes the spectral properties of the medium essentially, which strongly influences the incoherent scattering under conditions of electromagnetically induced transparency <cit.> and may modify single-photon superradiance in cold and dilute atomic gases.
We appreciate financial support by the Russian Foundation for Basic Research (Grant No. RFBR-15-02-01013). A.S.K. also thanks RFBR-16-32-00587 and the Council for Grants of the
President of the Russian Federation. We also acknowledge financial support by the National Science Foundation (Grant No. NSF-PHY-1606743).
1 R. H. Dicke, Phys. Rev. 93, 99 (1954).
2 M. Gross and S. Haroche, Phys. Rep. 93, 301 (1982).
3 M. O. Scully, E. S. Fry, C. H. Raymond Ooi, and K. Wódkiewicz, Phys. Rev. Lett. 96, 010501 (2006).
4 Marlan O. Scully, Laser Physics 17, 635 (2007).
5 R. Röhlsberger, K. Schlage, B. Sahoo, S. Couet, and R. Rüffer, Science 238, 1248 (2010).
6 S. J. Roof, K. J. Kemp, M. D. Havey, and I. M. Sokolov, Phys. Rev. Lett. 117, 073003 (2016).
7 M. O. Araujo, I. Kresic, R. Kaiser, and W. Guerin, Phys. Rev. Lett. 117, 073002 (2016).
7a Rafael A. de Oliveira, Milrian S. Mendes, Weliton S. Martins, Pablo L. Saldanha, Jose W. R.
Tabosa, Daniel Felinto, Phys. Rev. A 90, 023848 (2014).
7b Petru Tighineanu, Raphaël S. Daveau, Tau B. Lehmann, Harvey E. Beere, David A. Ritchie, Peter Lodahl, and Søren Stobbe, Phys. Rev. Lett. 116, 163604 (2016).
8 S. L. Bromley, B. Zhu, M. Bishof, X. Zhang, T. Bothwell, J. Schachenmayer, T. L. Nicholson, R. Kaiser, S. F. Yelin, M. D. Lukin, A. M. Rey, and J. Ye, Nat. Commun. 7, 11039 (2016).
9 M. O. Scully, Phys. Rev. Lett. 102, 143601 (2009).
10 S. E. Skipetrov, I. M. Sokolov and M. D. Havey Phys. Rev. A 94, 013825 (2016).
11 S. Prasad and R. J. Glauber, Phys. Rev. A 82, 063805 (2010).
12 I. M. Sokolov, D. V. Kupriyanov, and M. D. Havey, J. Exp. Theor. Phys. 112, 246 (2011).
13 I. M. Sokolov, A. S. Kuraptsev, D. V. Kupriyanov, M.D. Havey, S. Balik, J. Mod. Opt. 60, 50 (2013).
14 Ya. A. Fofanov, A. S. Kuraptsev, I. M. Sokolov and M. D. Havey, Phys. Rev. A 84, 053811 (2011).
15 Ya. A. Fofanov, A. S. Kuraptsev, I. M. Sokolov and M. D. Havey, Phys. Rev. A 87, 063839 (2013).
15a A. S. Kuraptsev and I. M. Sokolov, Phys. Rev. A 91, 053822 (2015).
16 S. E. Skipetrov and I. M. Sokolov Phys. Rev. Lett. 114, 053902 (2015).
17 S. Balik, A. L. Win, M. D. Havey, I. M. Sokolov and D. V. Kupriyanov, Phys. Rev. A 87, 053817 (2013).
18 P. W. Milonni and P. L. Knight, Phys. Rev. A 10, 1096 (1974).
17a M. J. Stephen, J. Chem. Phys. 40, 669 (1964).
17b D. A. Hutchinson and H. F. Hameka, J. Chem. Phys. 41, 2006 (1964).
17c R. H. Lehmberg, Phys. Rev. A 2, 883 (1970).
17d O. Morice, Y. Castin, and J. Dalibard, Phys. Rev. A 51, 3896 (1995).
17e J. Ruostekoski and J. Javanainen, Phys. Rev. A 56, 2056 (1997).
17f M. Rusek, A. Orlowski, and J. Mostowski, Phys. Rev. E 53, 4122 (1996).
17g H. Fu and P. R. Berman, Phys. Rev. A 72, 022104 (2005).
17g0 I. E. Mazets and G. Kurizki, J. Phys. B: At. Mol. Opt. Phys. 40, F105 (2007).
17g1 A. A. Svidzinsky, J.-T. Chang, Phys. Rev. A. 77, 043833 (2008).
17g2 P. W. Courteille, S. Bux, E. Lucioni, K. Lauber, T. Bienaimé, R. Kaiser, and N. Piovella, Eur. Phys. J. D. 58, 69 (2010).
17g3 R. Friedberg and J. T. Manassah, Phys. Lett. A 374, 1648 (2010).
17g4 A. S. Kuraptsev and I. M. Sokolov, Phys. Rev. A 90, 012511 (2014).
17h S. D. Jenkins, J. Ruostekoski, J. Javanainen, R. Bourgain, S. Jennewein, Y. R. P. Sortais, A. Browaeys, Phys. Rev. Lett. 116, 183601 (2016).
17j S. D. Jenkins, J. Ruostekoski, J. Javanainen, S. Jennewein, R. Bourgain, J. Pellegrino, Y. R. P. Sortais, A. Browaeys, Phys. Rev. A 94, 023842 (2016).
18a I. M. Sokolov, D. V. Kupriyanov, R. G. Olave and M. D. Havey, J. Mod. Opt. 57, 1833 (2010).
18b S. E. Skipetrov and I. M. Sokolov, Phys. Rev. Lett. 112, 023905 (2014).
18c L. Bellando, A. Gero, E. Akkermans, and R. Kaiser, Phys. Rev. A 90, 063822 (2014).
19 D. V. Kupriyanov, I. M. Sokolov, C. I. Sukenik, and M. D. Havey, Laser Phys. Lett. 3, 223 (2006).
20 G. Labeyrie, Mod. Phys. Lett. B 22, 73 (2008).
21 R. Friedberg and J. T. Manassah, Laser Phys. Lett. 4, 900 (2007).
22 T. Bienaimé, M. Petruzzo, D. Bigerni, N. Piovella, and R. Kaiser, J. Mod. Opt. 58, 1942 (2011).
23 D. V. Kupriyanov, I. M. Sokolov, P. Kulatunga, C. I. Sukenik, and M. D. Havey, Phys. Rev. A 67, 013814 (2003).
24 G. Labeyrie, D. Delande, C. A. Müller, C. Miniatura, R. Kaiser, Phys. Rev. A 67, 033814 (2003).
25 V. M. Datsyuk and I. M. Sokolov, J. Exp. Theor. Phys. 102, 724 (2006).
26 J. Chabé, M.-T. Rouabah, L. Bellando, T. Bienaimé, N. Piovella, R. Bachelard, and R. Kaiser, Phys. Rev. A 89, 043833 (2014).
27 S. Roof, Kasie Kemp, M. Havey, I. M. Sokolov, and D. V. Kupriyanov, Optics Letters 40, 1137 (2015).
28 R. T. Sutherland and F. Robicheaux, Phys. Rev. A 93, 023407 (2016).
29 V. M. Datsyuk, I. M. Sokolov, D. V. Kupriyanov, and M. D. Havey, Phys. Rev. A 74, 043812 (2006).
30 V. M. Datsyuk, I. M. Sokolov, D. V. Kupriyanov, and M. D. Havey, Phys. Rev. A 77, 033823 (2008).
Traveling waves for degenerate diffusive equations on networks
Andrea Corli
Department of Mathematics and Computer Science
University of Ferrara, I-44121 Italy
Lorenzo di Ruvo
Department of Sciences and Methods for Engineering
University of Modena and Reggio Emilia, I-42122 Italy
Luisa Malaguti
Department of Sciences and Methods for Engineering
University of Modena and Reggio Emilia, I-42122 Italy
Massimiliano D. Rosini
Department of Mathematics
Maria Curie-Skłodowska-University, PL-20031 Poland
In this paper we consider a scalar parabolic equation on a star graph; the model is quite general but what we have in mind is the description of traffic flows at a crossroad. In particular, we do not necessarily require the continuity of the unknown function at the node of the graph and, moreover, the diffusivity can be degenerate. Our main result concerns a necessary and sufficient algebraic condition for the existence of traveling waves in the graph. We also study in great detail some examples corresponding to quadratic and logarithmic flux functions, for different diffusivities, to which our results apply.
AMS Subject Classification: 35K65; 35C07, 35K55, 35K57
Keywords: Parabolic equations, wavefront solutions, traffic flow on networks.
§ INTRODUCTION
Partial differential equations on networks have been considered in recent years by several authors, in particular in the parabolic case; we quote for instance <cit.>. According to the modeling in consideration and to the type of equations on the edges of the underlying graph, different conditions at the nodes are imposed. In most of the cases, precise results of existence of solutions are given, even for rather complicated networks.
In this paper, the main example we have in mind comes from traffic modeling, where the network is constituted by a crossroad connecting m incoming roads with n outgoing roads; the traffic in each road is modeled by the scalar diffusive equation
ρ_h,t + f_h(ρ_h)_x = (D_h(ρ_h)ρ_h,x)_x, h = 1,…,m+n,
where t denotes time and x the position along the road.
In this case ρ_h is a vehicle density; about the diffusivity D_h(ρ_h)≥0 we do not exclude that it may vanish at some points. System (<ref>) is completed by a condition of flux conservation at the crossroad, which implies the conservation of the total number of cars. Such a model is derived from the famous Lighthill-Whitham-Richards equation <cit.>. We refer to <cit.> for several motivations about the introduction of (possibly degenerate) diffusion in traffic flows and in the close field of crowds dynamics. We also refer to the recent books <cit.> for more information on the related hyperbolic modeling.
We focus on a special class of solutions to (<ref>), namely, traveling waves. In the case of a single road, traveling waves are considered, for instance, in <cit.>; in the case of a second-order model without diffusion but including a relaxation term, we refer to <cit.>; for a possibly degenerate diffusion function and in presence of a source term, detailed results are given in <cit.>. In the case of a network, the papers dealing with this subject, to the best of our knowledge, are limited to <cit.> for the semilinear diffusive case and to <cit.> for the case of a dispersive equation. In these papers, as in most modeling of diffusive or dispersive partial differential equations on networks, both the continuity of the unknown functions and the Kirchhoff condition (or variants of it) are imposed at the nodes. We emphasize that while the classical Kirchhoff condition implies the conservation of the flow and then that of the mass, some variants of this condition are dissipative and, then, imply none of the conservations above. While these assumptions are natural when dealing with heat or fluid flows, they are much less justified in the case of traffic modeling, where the density must be allowed to jump at the node while the conservation of the mass must always hold. Moreover, they impose rather strong conditions on the existence of the profiles, which often amount to proportionality assumptions on the parameters in play.
In this paper we only require the conservation of the (parabolic) flux at the node, as in <cit.>; differently from that paper and the other ones quoted above, we do not impose the continuity condition. A strong motivation for dropping this condition comes from the hyperbolic modeling <cit.>; nevertheless, we show how our results simplify when such a condition is required. In particular, in Sections <ref> and <ref> we provide explicit conditions for traveling wave solutions which do not satisfy the continuity condition; in some other cases, such a condition is indeed always satisfied. Our main results are essentially of algebraic nature and concern conditions about the end states, flux functions, diffusivities and other parameters which give rise to a traveling wave moving in the network.
Here follows a plan of the paper. In Section <ref> we introduce the model and give some basic definitions; for simplicity we only focus on the case of a star graph. Section <ref> deals with a general existence result in the case of a single equation; its proof is provided in Appendix <ref>. Section <ref> contains our main theoretical results about traveling waves in a network. In that section we characterize both stationary/non-stationary and degenerate/non-degenerate waves; in particular, Theorem <ref> contains an important necessary and sufficient condition that we exploit in the following sections. Section <ref> focus on the continuity condition; in this case the conditions for the existence of traveling wave solutions are much stricter than in the previous case. Detailed applications of these results are provided in Sections <ref> for quadratic fluxes and in Section <ref> for logarithmic fluxes; in particular, in subsection <ref> and in the whole Section <ref> the diffusivity is as in <cit.>. For simplicity, we only deal there with the case of a single ingoing road but we consider both constant and degenerate diffusivities.
§ THE MODEL
In terms of graph theory, we consider a semi-infinite star-graph with m incoming and n outgoing edges; this means that the incidence vector d∈ℝ^m+n has components d_i=1 for i∈𝖨≐{ 1, …, m } and d_j=-1 for j∈𝖩≐{ m+1, …, m+n }. We also denote 𝖧≐{ 1,…, m+n } and refer to Figure <ref>. For simplicity, having in mind the example in the Introduction, we always refer to the graph as the network, to the node as the crossroad and to the edges as the roads. Then, incoming roads are parametrized by x ∈ℝ_-≐(-∞,0] and numbered by the index i, outgoing roads by x ∈ℝ_+≐[0,∞) and j; the crossroad is located at x=0 for both parameterizations. We denote the generic road by Ω_h for h ∈𝖧; then Ω_i ≐ℝ_- for i ∈𝖨 and Ω_j ≐ℝ_+ for j ∈𝖩. The network is defined as 𝒩≐∏_h∈𝖧Ω_h.
Following the above analogy, we understand the unknown functions ρ_h as vehicular densities in the roads Ω_h, h∈𝖧; ρ_h ranges in [0,ρ̄_h], where ρ̄_h is the maximal density in the road Ω_h. Without loss of generality we assume that ρ̄_h=1 for every h∈𝖧; the general case is easily recovered by a change of variables and modifying (<ref>)-(<ref>) below for a multiplicative constant. With a slight abuse of notation we denote ρ≐ (ρ_1, …, ρ_m+n): ℝ×𝒩→ [0,1]^m+n understanding that ρ(t,x_1,…, x_m+n)=(ρ_1(t,x_1),…,ρ_m+n(t,x_m+n)).
For each road we assign the functions f_h, the hyperbolic flux, and D_h, the diffusivity; we assume for every h ∈𝖧
(f) f_h∈𝐂^1([0,1];ℝ_+) is strictly concave with f_h(0)=f_h(1)=0;
(D) D_h∈𝐂^1([0,1];ℝ_+) and D_h(ρ)>0 for any ρ∈ (0,1).
We emphasize that in (D) we can possibly have either D_h(0)=0 or D_h(1)=0, or even both possibilities at the same time. The evolution of the flow is described by the equations
ρ_h,t + f_h(ρ_h)_x = (D_h(ρ_h)ρ_h,x)_x, (t,x) ∈ℝ×Ω_h, h ∈𝖧.
Assumption (f) is standard when dealing with traffic flows <cit.>. More precisely, in that case f_h(ρ_h)=ρ_h v_h(ρ_h), where v_h is the velocity.
Then, assumption (f) is satisfied if, for instance, v_h∈𝐂^2([0,1];ℝ_+) is either linear or strictly concave, decreasing and satisfying v_h(1) = 0, see <cit.>.
The prototype of such a velocity satisfying (f) is v_h(ρ) = V_h (1-ρ) with V_h > 0, which was introduced in <cit.>; another example is given in <cit.>.
The simplest model for the diffusivity is then D_h(ρ_h) = -δ_hρ_h v_h'(ρ_h), where δ_h is an anticipation length <cit.>.
The coupling among the differential equations in (<ref>) occurs by means of suitable conditions at the crossroad. In this paper, having in mind the previous example, we impose a condition on the conservation of the total flow at the crossroad, see <cit.>; in turn, this implies the conservation of the mass. More precisely, we define the parabolic flux by
F_h(ρ_h,ρ_h,x) ≐ f_h(ρ_h) - D_h(ρ_h) ρ_h,x
and require
F_j(ρ_j(t,0^+), ρ_j,x(t,0^+))
= ∑_i ∈𝖨α_i,j F_i(ρ_i(t,0^-), ρ_i,x(t,0^-))
for a.e. t∈ℝ, j ∈𝖩,
for given constant coefficients α_i,j∈ (0, 1] satisfying
∑_j ∈𝖩α_i,j = 1, i ∈𝖨.
Conditions (<ref>) and (<ref>) imply
∑_j ∈𝖩 F_j(ρ_j(t,0^+), ρ_j,x(t,0^+))
=
∑_i ∈𝖨 F_i(ρ_i(t,0^-), ρ_i,x(t,0^-))
for a.e. t∈ℝ,
which is the conservation of the total flow at the crossroad. Conditions (<ref>) and (<ref>) deserve some comments. First, by no means do they imply
ρ_i(t,0^-) = ρ_j(t,0^+), t ∈ℝ, (i,j) ∈𝖨×𝖩.
Condition (<ref>) is largely used, together with some Kirchhoff conditions, when dealing with parabolic equations in networks and takes the name of continuity condition. Second, above we assumed α_i,j>0 for every i and j. The case when α_i,j=0 for some i and j would take into account the possibility that some outgoing roads j are not accessible to vehicles coming from some incoming roads i; this could be the case, for instance, if only trucks are allowed in road i but only cars are allowed in road j. For simplicity, we do not consider this possibility. Third, we notice that assumption (<ref>) destroys the symmetry of condition (<ref>); indeed, with reference to the example of traffic flow, the loss of symmetry is due to the fact that all velocities v_h are positive.
Then, we are faced with the system of equations (<ref>) that are coupled through (<ref>), with the α_i,j satisfying (<ref>). Solutions to (<ref>)-(<ref>) are meant in the weak sense, namely ρ_h∈𝐂^1(ℝ×Ω_h;[0,1]) a.e.; see also <cit.> for an analogous definition in the hyperbolic case. We do not impose any initial condition because we only consider traveling waves, which are introduced in the next sections.
§ TRAVELING WAVES FOR A SINGLE EQUATION
In this section we briefly recall some definitions and results about traveling waves <cit.> for the single equation
ρ_h,t + f_h(ρ_h)_x = (D_h(ρ_h)ρ_h,x)_x, (t,x) ∈ℝ×Ω_h,
where we keep for future reference the index h. Equation (<ref>) has no source terms (differently from <cit.>) and then any constant is a solution; for simplicity we discard constant solutions in the following analysis.
A weak solution ρ_h(t,x) to (<ref>) is a traveling-wave solution of (<ref>) if ρ_h(t,x) = ϕ_h(x-c_h t) for (t,x) ∈ℝ×Ω_h, for a non-constant profile ϕ_h:ℝ→[0,1] and speed c_h ∈ℝ.
This definition coincides with that given in <cit.> because we are considering non-constant profiles. The profile must satisfy the equation
[F_h(ϕ_h,ϕ_h')- c_h ϕ_h ]'=0,
namely,
(D_h(ϕ_h)ϕ_h')' - g_h'(ϕ_h) ϕ_h'=0,
in the weak sense, where
g_h(ρ) ≐ f_h(ρ) - c_h ρ
is the reduced flux, see <ref>.
This means that ϕ_h ∈𝐂^0(ℝ; [0,1]), D_h(ϕ_h) ϕ_h'∈𝐋^1_loc(ℝ;ℝ) and
∫_ℝ[D_h(ϕ_h(ξ))ϕ_h'(ξ) - g_h(ϕ_h(ξ)) ] ψ'(ξ) dξ =0,
for every ψ∈𝐂^∞_c(ℝ;ℝ). Equation (<ref>) is coupled with the limit conditions
ϕ_h(±∞) = ℓ_h^±,
for ℓ_h^±∈[0,1]. Clearly, solutions to (<ref>)-(<ref>) are determined up to a shift. We define
I_h ≐{ξ∈ℝ : ℓ_h^-<ϕ_h(ξ)<ℓ_h^+}.
The existence of profiles is a well-established result <cit.>; nevertheless, we state for completeness the following theorem, where we point out the qualitative properties of these fronts. The proof is deferred to Appendix <ref>.
Assume (f) and (D). Equation (<ref>) admits a traveling-wave solution ρ_h with profile ϕ_h satisfying (<ref>) if and only if
0≤ℓ_h^- < ℓ_h^+≤1 and c_h = (f_h(ℓ_h^+) - f_h(ℓ_h^-))/(ℓ_h^+ - ℓ_h^-).
We have that ϕ_h ∈𝐂^2(I_h; (ℓ_h^-, ℓ_h^+)) is unique (up to shifts) and ϕ_h'(ξ)>0 for ξ∈ I_h; moreover, the following holds true.
(i) D_h(0)=0=ℓ_h^- if and only if there exists ν_h^- ∈ℝ such that I_h ⊆ (ν_h^-,∞) and ϕ_h(ξ)=0 for ξ≤ν_h^-.
In this case
lim_ξ↓ν_h^-ϕ_h'(ξ)= (ℓ_h^+f_h'(0)-f_h(ℓ_h^+))/(ℓ_h^+D_h'(0)) if D_h'(0)>0,
lim_ξ↓ν_h^-ϕ_h'(ξ)= ∞ if D_h'(0)=0,
lim_ξ↓ν_h^-D_h(ϕ_h(ξ))ϕ_h'(ξ) = 0.
(ii) D_h(1)=0=1-ℓ_h^+ if and only if there exists ν_h^+ ∈ℝ such that I_h ⊆ (-∞,ν_h^+) and ϕ_h(ξ)=1 for ξ≥ν_h^+.
In this case
lim_ξ↑ν_h^+ϕ_h'(ξ)= ((1-ℓ_h^-)f_h'(1)+f_h(ℓ_h^-))/((1-ℓ_h^-) D_h'(1)) if D_h'(1)<0,
lim_ξ↑ν_h^+ϕ_h'(ξ)= ∞ if D_h'(1)=0,
lim_ξ↑ν_h^+D_h(ϕ_h(ξ))ϕ_h'(ξ) = 0.
(iii) In all the other cases I_h=ℝ and
lim_ξ→±∞ϕ_h'(ξ) = 0.
We observe that for c_h given by (<ref>), we deduce by (f) that g_h(ρ) ≥ 0 for all ρ∈[ℓ_h^-,ℓ^+_h], see <ref>. Moreover, we have
g_h(ℓ_h^+) = g_h(ℓ_h^-) =
-(f_h(ℓ_h^+) ℓ_h^- - f_h(ℓ_h^-) ℓ_h^+)/(ℓ_h^+ - ℓ_h^-)
and no ρ≠ℓ_h^± makes g_h(ρ) equal to that value.
Theorem <ref> motivates the following definition.
A traveling-wave solution ρ_h is stationary if c_h=0.
It is degenerate if at least one of conditions (i) or (ii) of Theorem <ref> holds.
A consequence of assumption (f) is that if ρ_h is degenerate, then the profile ϕ_h is singular either at ν_h^- in case (i) or at ν_h^+ in case (ii), in the sense that ϕ_h' cannot be extended to the whole of ℝ as a continuous function.
If case (i) (respectively, (ii)) of Theorem <ref> does not hold, we define ν_h^- ≐ -∞ (respectively, ν_h^+ ≐∞). In this way the interval (ν_h^-,ν_h^+) is always defined and coincides with the interval I_h defined in (<ref>):
I_h = (ν_h^-,ν_h^+).
The interval I_h is bounded if and only if both (i) and (ii) hold; in this case ρ_h is both degenerate and stationary. As a consequence, if ρ_h is non-stationary then I_h is unbounded and coincides either with a half line (if ρ_h is degenerate) or with (if ρ_h is non-degenerate).
At last, ρ_h is degenerate if and only if either ν_h^- or ν_h^+ is finite.
In the case of non-stationary traveling-wave solutions ρ_h we use the notation
ω_h ≐min{c_h^-1ν_h^-,c_h^-1ν_h^+}.
Let ρ_h be a traveling-wave solution of (<ref>); then we have the following.
(a)
If ρ_h is stationary, then it is degenerate if and only if D_h(0)D_h(1)=0 and ℓ_h^- = 0 (hence ℓ_h^+=1).
(b)
If ρ_h is non-stationary, then it is degenerate if and only if one of the following equivalent statements hold:
* either D_h(0) = 0 = ℓ_h^- or D_h(1) = 0 = 1-ℓ_h^+, but not both;
* ω_h is finite.
In this case the function ξ↦ϕ_h'(c_hξ) is singular at ξ = ω_h and 𝐂^1 elsewhere.
We recall that ρ_h is degenerate if and only if either D_h(0) = 0 = ℓ_h^- or D_h(1) = 0 = 1-ℓ_h^+. This means that at least one of the end states must be 0 or 1, say 0; but then c_h=0 if and only if the other end state is 1. This proves (a) and the first part of (b).
Now, we prove the second part of (b).
Since c_h 0, exactly one between (i) and (ii) of Theorem <ref> occurs, namely, exactly one between ν_h^- and ν_h^+ is finite.
If ν_h^- is finite and ν_h^+=∞, then c_h=f_h(ℓ_h^+)/ℓ_h^+>0 and ω_h = c_h^-1ν_h^- is finite.
By Remark <ref>, we know that ξ↦ϕ_h'(ξ) is singular at ξ=ν_h^- and 𝐂^1 elsewhere, whence the regularity of ξ↦ϕ_h'(c_hξ).
Analogously, if ν_h^+ is finite and ν_h^-=-∞, then c_h=-f_h(ℓ_h^-)/(1-ℓ_h^-)<0 and ω_h = c_h^-1ν_h^+ is finite. The statement about the smoothness of ξ↦ϕ_h'(c_hξ) is proved as above.
Finally, the converse is straightforward. In fact, if ω_h is finite, then either ω_h = c_h^-1ν_h^- and ν_h^- is finite, or ω_h = c_h^-1ν_h^+ and ν_h^+ is finite; in both cases ρ_h is degenerate.
Because of the smoothness properties of the profile proved in Theorem <ref>, we can integrate equation (<ref>) in (ξ_-,ξ)⊂ I_h and we obtain
c_h ϕ_h(ξ) - F_h(ϕ_h(ξ), ϕ_h'(ξ))
=
c_h ϕ_h(ξ_-) - F_h(ϕ_h(ξ_-), ϕ_h'(ξ_-)).
If ξ_-↓ν_h^- in the previous expression, by applying (<ref>) or (<ref>) we deduce
F_h(ϕ_h(ξ), ϕ_h'(ξ)) = c_h ϕ_h(ξ) + g_h(ℓ_h^±), ξ∈ I_h.
We observe that (<ref>) is trivially satisfied in case (i) when ξ < ν_h^- and in case (ii) when ξ > ν_h^+; moreover, by a continuity argument, we deduce from (<ref>) and (<ref>) that (<ref>) is satisfied in case (i) at ξ = ν_h^- and in case (ii) at ξ = ν_h^+, respectively. In conclusion, we have that (<ref>) holds in the whole , namely
D_h(ϕ_h(ξ)) ϕ_h'(ξ) = g_h(ϕ_h(ξ)) - g_h(ℓ_h^±), ξ∈ℝ.
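Although the analysis below is analytic, the last identity is also convenient for computing profiles numerically. A minimal sketch (the flux, the degenerate diffusivity and the end states are illustrative choices, anticipating the quadratic flux of Section 6):

import numpy as np
from scipy.integrate import solve_ivp

f = lambda u: u * (1 - u)                   # illustrative quadratic flux
D = lambda u: u                             # degenerate diffusivity, D(0)=0
lm, lp = 0.2, 0.7                           # end states l^- < l^+
c = (f(lp) - f(lm)) / (lp - lm)             # speed (f(l^+)-f(l^-))/(l^+-l^-)
g = lambda u: f(u) - c * u                  # reduced flux

rhs = lambda xi, u: [(g(u[0]) - g(lm)) / D(u[0])]     # profile equation
sol = solve_ivp(rhs, (0.0, 40.0), [0.5 * (lm + lp)], rtol=1e-8)
print(sol.y[0][-1])                         # approaches l^+ = 0.7 as xi grows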
§ TRAVELING WAVES IN A NETWORK
In this section we consider the traveling-wave solutions of problem (<ref>)-(<ref>) in the network 𝒩.
We first introduce the definition of traveling-wave solution in 𝒩.
For any h ∈𝖧, let ρ_h be a traveling-wave solution of (<ref>)_h in the sense of Definition <ref> and set ρ≐ (ρ_1, …, ρ_m+n).
With reference to Definition <ref>, we say that:
* ρ is stationary if each component ρ_h is stationary;
* ρ is completely non-stationary if none of its components is stationary;
* ρ is degenerate if at least one component ρ_h is degenerate;
* ρ is completely degenerate if each of its components is degenerate.
Finally, we say that ρ is a traveling-wave solution of problem (<ref>)-(<ref>) in the network 𝒩 if (<ref>) holds.
For brevity, from now on we simply write traveling wave for traveling-wave solution. In analogy to the notation above, we say that ϕ≐ (ϕ_1, …, ϕ_m+n) is a profile for ρ if ϕ_h is a profile corresponding to ρ_h for every h ∈𝖧.
For clarity of exposition, we collect our general results for stationary and non-stationary traveling waves in the following subsections.
§.§ General results
In this subsection, as well as in the following ones, we always assume (f) and (D) without explicitly mentioning it. Moreover, by Definition <ref> and Theorem <ref>, the end states and the speeds of the profiles must satisfy (<ref>) for every h∈𝖧; both conditions in (<ref>) are tacitly assumed as well.
The function ϕ is the profile of a traveling wave if and only if ϕ_h is a solution to (<ref>)-(<ref>) for any h ∈𝖧 and
c_j ϕ_j(c_j t) + g_j(ℓ_j^±)
=
∑_i ∈𝖨α_i,j[ c_i ϕ_i(c_i t) + g_i(ℓ_i^±) ], t ∈ℝ, j ∈𝖩.
In (<ref>) any combination of the signs ± is allowed.
By plugging ρ_h(t,x)=ϕ_h(x-c_ht) in (<ref>) and recalling that by Theorem <ref> the profiles are continuous in , we obtain
F_j(ϕ_j(- c_j t), ϕ_j'(- c_j t))
=
∑_i ∈𝖨α_i,j F_i(ϕ_i(- c_i t), ϕ_i'(- c_i t)), t∈ℝ, j∈𝖩,
which is equivalent to (<ref>) by (<ref>).
At last, we can clearly choose any combination of signs in (<ref>) because of (<ref>).
Differently from what specified in Proposition <ref>, in the following the choice of the signs “±” follows the usual rules, i.e., top with top and bottom with bottom.
Assume that problem (<ref>)-(<ref>) admits a traveling wave. Then for any j∈𝖩 we have
max{f_j(ℓ_j^-), f_j(ℓ_j^+)} = ∑_i ∈𝖨α_i,j max{f_i(ℓ_i^-), f_i(ℓ_i^+)},
min{f_j(ℓ_j^-), f_j(ℓ_j^+)} = ∑_i ∈𝖨α_i,j min{f_i(ℓ_i^-), f_i(ℓ_i^+)}.
Fix j∈𝖩.
We notice that (<ref>) is equivalent to
Υ_j(t) =
∑_i ∈𝖨α_i,jΥ_i(t), t ∈ℝ, j ∈𝖩,
where the map t ↦Υ_h(t) ≐ c_h ϕ_h(c_ht) + g_h(ℓ_h^-) is non-decreasing because the profiles are so, by Theorem <ref>. Since we can write Υ_h(t) = f_h(ℓ_h^-) + c_h [ϕ_h(c_ht) - ℓ_h^-], we see that Υ_h ranges between f_h(ℓ_h^-) and f_h(ℓ_h^+) because of (<ref>) and the fact that ξ↦ϕ_h(ξ) takes values in [ℓ_h^-,ℓ_h^+]. As a consequence,
lim_t→∞Υ_h(t) = max{f_h(ℓ_h^-),f_h(ℓ_h^+)
}, lim_t→-∞Υ_h(t) = min{f_h(ℓ_h^-),f_h(ℓ_h^+)
}.
Hence, by passing to the limit for t→±∞ in (<ref>) we obtain (<ref>) and (<ref>), respectively.
Assume that problem (<ref>)-(<ref>) admits a traveling wave. The traveling wave is stationary if and only if one of the following equivalent statements hold:
(i)
there exists j∈𝖩 such that c_ j = 0;
(ii)
c_i = 0 for all i ∈𝖨;
(iii)
c_j = 0 for all j ∈𝖩.
By subtracting (<ref>) to (<ref>) we obtain
|f_j(ℓ_j^+) - f_j(ℓ_j^-)| = ∑_i ∈𝖨α_i,j |f_i(ℓ_i^+) - f_i(ℓ_i^-)|.
Since c_h=0 if and only if f_h(ℓ_h^-) = f_h(ℓ_h^+), from the above equation we immediately deduce that (i), (ii) and (iii) are equivalent. By the equivalence of (ii) and (iii), a traveling wave is stationary if and only if one of the statements above holds.
Lemma <ref> shows that either a traveling wave is stationary, and then c_h=0 for every h∈𝖧, or it is non-stationary, and then
there exists i∈𝖨 such that c_i≠0 and c_j≠0 for every j∈𝖩.
Of course, by Lemma <ref>, c_i≠0 for some i∈𝖨 if and only if c_j≠0 for every j∈𝖩.
Fix ℓ_i^±∈ [0,1] with ℓ_i^- < ℓ_i^+, i ∈𝖨.
Then for any j∈𝖩 there exist ℓ_j^±∈ [0,1] with ℓ_j^- < ℓ_j^+ and satisfying (<ref>)-(<ref>) if and only if
max_[0,1]f_j > ∑_i ∈𝖨α_i,j max{f_i(ℓ_i^-), f_i(ℓ_i^+)} if c_1=…=c_m=0,
max_[0,1]f_j ≥∑_i ∈𝖨α_i,j max{f_i(ℓ_i^-), f_i(ℓ_i^+)} otherwise.
In this case, the end states ℓ_j^± are uniquely determined if and only if c_i=0 for every i ∈𝖨.
Assume that there exist ℓ_j^±∈ [0,1], with ℓ_j^- < ℓ_j^+, which satisfy (<ref>)-(<ref>). Then clearly we have max_[0,1]f_j ≥∑_i ∈𝖨α_i,j max{f_i(ℓ_i^-), f_i(ℓ_i^+)}. If c_i=0 for every i∈𝖨, then we have c_j=0 for every j∈𝖩 by Lemma <ref>; the equality max_[0,1]f_j= f_j(ℓ_j^-)=f_j(ℓ_j^+) would imply ℓ_j^-=ℓ_j^+ because of (f), a contradiction, and then max_[0,1]f_j> f_j(ℓ_j^-)=f_j(ℓ_j^+). This proves (<ref>).
Conversely, assume (<ref>). If c_i=0 for every i∈𝖨, then ℓ_j^-<ℓ_j^+ are uniquely determined because of the strict concavity of f_j. Assume, on the contrary, that c_i≠0 for some i∈𝖨; then c_j≠0 by Lemma <ref>, i.e., f_j(ℓ_j^-)≠f_j(ℓ_j^+). Thus (<ref>)-(<ref>) determine exactly four possible choices of end states ℓ_j^± with ℓ_j^- < ℓ_j^+, see Figure <ref>.
By Proposition <ref> and Lemma <ref> we deduce that the end states ℓ_j^± are uniquely determined in terms of the end states ℓ_i^± if and only if the traveling wave is stationary and the first condition in (<ref>) holds.
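For a concrete feeling of this non-uniqueness, the four pairs can be listed explicitly when f_j is quadratic (anticipating Section 6); in the sketch below the two flux levels y_min < y_max stand for the right-hand sides of the max/min conditions above and are purely illustrative:

import numpy as np

vj = 1.0                                    # outgoing velocity (illustrative)
def roots(y):                               # the two solutions of v_j*u*(1-u) = y
    d = np.sqrt(1 - 4 * y / vj)
    return (1 - d) / 2, (1 + d) / 2

y_min, y_max = 0.09, 0.16                   # placeholder flux levels, y_max < v_j/4
a1, a2 = roots(y_min)                       # a1 < a2
b1, b2 = roots(y_max)                       # a1 < b1 <= b2 < a2
for lm, lp in [(a1, b1), (a1, b2), (b1, a2), (b2, a2)]:   # the four pairs l^- < l^+
    c = (vj * lp * (1 - lp) - vj * lm * (1 - lm)) / (lp - lm)
    print(f"l^- = {lm:.2f}, l^+ = {lp:.2f}, c = {c:+.2f}")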
We now give an algebraic result about determining the end states of the outgoing profiles in terms of the end states of the ingoing ones. We introduce
L_i,j^±≐ℓ_i^± if c_i c_j ≥ 0,
ℓ_i^∓ if c_i c_j <0.
Assume that problem (<ref>)-(<ref>) admits a traveling wave. Then for any j∈𝖩 we have
f_j(ℓ_j^±) =
∑_i ∈𝖨α_i,j f_i(L_i,j^±).
Moreover, (<ref>) is equivalent to (<ref>)-(<ref>).
Fix j∈𝖩.
By Lemma <ref> it is sufficient to prove that (<ref>) is equivalent to (<ref>)-(<ref>).
If c_j>0, and then f_j(ℓ_j^+) > f_j(ℓ_j^-), by (<ref>) we have
max{f_i(ℓ_i^-), f_i(ℓ_i^+)} =
f_i(ℓ_i^+) if c_i≥0
f_i(ℓ_i^-) if c_i<0
=f_i(L_i,j^+),
min{f_i(ℓ_i^-), f_i(ℓ_i^+)} =
f_i(ℓ_i^-) if c_i≥0
f_i(ℓ_i^+) if c_i<0
=f_i(L_i,j^-),
and therefore (<ref>) is equivalent to (<ref>)-(<ref>).
The case c_j<0 is analogous.
If c_j=0, then f_j(ℓ_j^+) = f_j(ℓ_j^-) and by Lemma <ref> we have f_i(ℓ_i^+) = f_i(ℓ_i^-) for any i ∈𝖨. In this case formulas (<ref>)-(<ref>) reduce to a single equation, which coincides with (<ref>).
§.§ The stationary case
In this short subsection we briefly consider stationary traveling waves.
Problem (<ref>)-(<ref>) admits infinitely many stationary traveling waves; such waves are characterized by the conditions on the end states
f_h(ℓ_h^+) = f_h(ℓ_h^-), f_j(ℓ_j^-) = ∑_i ∈𝖨α_i,j f_i(ℓ_i^-) for h∈𝖧, j∈𝖩.
Clearly, (<ref>) is trivially satisfied if ℓ_h^-=0 and ℓ_h^+ = 1 for all h ∈𝖧. We claim that there exist infinitely many choices of ℓ_1^±,…,ℓ_m+n^± satisfying (<ref>). To prove the claim, we choose ℓ_i^±∈ [0,1], with ℓ_i^- < ℓ_i^+, such that f_i(ℓ_i^-) = f_i(ℓ_i^+) are sufficiently small to satisfy the first condition in (<ref>) for all j ∈𝖩.
Then, by a continuity argument, we can choose ℓ_j^±∈ [0,1] so that ℓ_j^- < ℓ_j^+ and f_j(ℓ_j^-) = f_j(ℓ_j^+) = ∑_i ∈𝖨α_i,j f_i(ℓ_i^-). This proves the claim.
With this choice of the end states, by Theorem <ref> we deduce the existence of a stationary traveling wave in each road satisfying (<ref>). At last we notice that, in the stationary case, condition (<ref>) is equivalent to the latter condition in (<ref>).
Clearly, if both D_h(0)≠0 and D_h(1)≠0 for every h∈𝖧, then problem (<ref>)-(<ref>) admits no degenerate traveling wave. However, even in the general case, the proof of Theorem <ref> shows that (<ref>)-(<ref>) admits infinitely many non-degenerate stationary traveling waves: just choose 0≠ℓ_h^-<ℓ_h^+≠1 satisfying (<ref>). Moreover, if there exists h∈𝖧 such that either D_h(0)=0 or D_h(1)=0, then (<ref>)-(<ref>) admits also infinitely many degenerate stationary traveling waves: just choose ℓ_h^-=0 = 1 - ℓ_h^+ and determine the other end states by (<ref>).
§.§ The non-stationary case
In this subsection we consider non-stationary traveling waves.
By Lemma <ref> this is equivalent to considering the scenario in (<ref>): there exists i∈𝖨 such that f_i(ℓ_i^-)≠f_i(ℓ_i^+), and f_j(ℓ_j^-)≠f_j(ℓ_j^+) for every j∈𝖩.
We can therefore introduce the following notation:
c_i,j≐c_i/c_j, A_i,j≐α_i,j c_i,j,
k_j ≐∑_i∈𝖨_1[ A_i,j L_i,j^±] - ℓ_j^±, κ_j
≐ c_j k_j,
where L_i,j is defined in (<ref>) and
𝖨_0≐{i∈𝖨:c_i=0} = {i∈𝖨:f_i(ℓ_i^-) = f_i(ℓ_i^+)}, 𝖨_1≐𝖨∖𝖨_0.
We notice that 𝖨_1≠∅ by (<ref>) and that both 𝖨_0 and 𝖨_1 depend on the end states ℓ_i^±, i∈𝖨, indeed.
Moreover, k_j is well defined because by (<ref>)
∑_i∈𝖨_1 A_i,j[ L_i,j^+ - L_i,j^- ] =
∑_i∈𝖨_1α_i,j c_j^-1[ f_i(L_i,j^+) - f_i(L_i,j^-) ] =
c_j^-1[ f_j(ℓ_j^+) - f_j(ℓ_j^-) ] = ℓ_j^+-ℓ_j^-.
Finally, by (f) we deduce that
for no j ∈𝖩 we have both ℓ_j^-=0=1-ℓ_j^+.
The function ϕ is the profile of a non-stationary traveling wave if and only if ϕ_h is a solution to (<ref>)-(<ref>) for any h ∈𝖧 and
ϕ_j(ξ) =
∑_i∈𝖨_1[ A_i,j ϕ_i(c_i,j ξ) ]
- k_j, ξ∈ℝ, j∈𝖩.
By Proposition <ref> it is sufficient to prove that by (<ref>) condition (<ref>) is equivalent to (<ref>).
By (<ref>) we have g_i(ℓ_i^+) = g_i(ℓ_i^-) = g_i(L_i,j^+) = g_i(L_i,j^-) and then by (<ref>) we have κ_j = g_j(ℓ_j^±) - ∑_i ∈𝖨α_i,j g_i(L_i,j^±).
Hence, by (<ref>), with the change of variable ξ = c_jt, condition (<ref>) is
c_j ϕ_j(ξ) =
- g_j(ℓ_j^±) + ∑_i ∈𝖨α_i,j [ c_i ϕ_i(c_i,jξ) + g_i(L_i,j^±) ]
=
∑_i ∈𝖨α_i,j c_i ϕ_i(c_i,jξ) - κ_j,
that is equivalent to (<ref>).
We observe that k_j and (<ref>) can be written in a slightly more explicit form, avoiding the use of L_i,j^±, as follows:
k_j = ∑_i∈𝖨_1 A_i,j (ℓ_i^- + ℓ_i^+)/2 - (ℓ_j^- + ℓ_j^+)/2,
ϕ_j(ξ)
=
(ℓ_j^- + ℓ_j^+)/2
+
∑_i∈𝖨_1 A_i,j[ ϕ_i(c_i,j ξ) - (ℓ_i^- + ℓ_i^+)/2].
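In computational terms, the last formula is just an affine combination of rescaled ingoing profiles; a direct transcription (a sketch: the ingoing profiles are assumed to be available as callables, restricted to the non-stationary roads, and all names are placeholders) reads:

def outgoing_profile(phi_in, c_in, c_j, alpha, l_in, l_j):
    # phi_in: ingoing profiles (callables); c_in, alpha: speeds c_i and weights alpha_{i,j}
    # l_in: pairs (l_i^-, l_i^+); l_j: pair (l_j^-, l_j^+) of outgoing end states
    A = [a * ci / c_j for a, ci in zip(alpha, c_in)]                # A_{i,j}
    k = sum(Ai * (lm + lp) / 2 for Ai, (lm, lp) in zip(A, l_in)) \
        - (l_j[0] + l_j[1]) / 2                                     # k_j
    return lambda xi: sum(Ai * phi(ci / c_j * xi)
                          for Ai, phi, ci in zip(A, phi_in, c_in)) - k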
Proposition <ref> shows how each outgoing profile ϕ_j can be expressed by (<ref>) in terms of the ingoing profiles ϕ_i, i∈𝖨.
We know a priori that ϕ_j is increasing and its end states are contained in the interval [0,1].
Now, we prove a sort of converse implication, which shows that these properties of the profile ϕ_j are enjoined by the function defined by the right-hand side of (<ref>).
Let ϕ_i, for i∈𝖨, be the profiles provided by Theorem <ref> and assume that 𝖨_1≠∅; fix j∈𝖩 and consider any l_j^±∈[0,1] satisfying (<ref>) and such that, for the corresponding c_j, it holds c_j≠0.
Then l_j^- < l_j^+. Moreover, denote by ℓ_j(ξ) the right-hand side of (<ref>); then ξ↦ℓ_j(ξ) is non-decreasing and ℓ_j(±∞)=l_j^±.
Since by Theorem <ref> we know that ℓ_i^-<ℓ_i^+, then by (<ref>)
l_j^+-l_j^- =
c_j^-1[ f_j(l_j^+) - f_j(l_j^-) ] =
∑_i ∈𝖨α_i,j c_j^-1[ f_i(L_i,j^+) - f_i(L_i,j^-) ]
=
∑_i ∈𝖨α_i,j c_i,j[L_i,j^+- L_i,j^-] =
∑_i∈𝖨_1α_i,j |c_i,j| [ℓ_i^+ - ℓ_i^-] > 0.
By definition of ℓ_j we have
ℓ_j'(ξ) =
∑_i ∈α_i,j c_i,j^2 ϕ_i'(c_i,j ξ)
for a.e. ξ∈, hence ξ↦ℓ_j(ξ) is non-decreasing since all profiles ϕ_i do.
Moreover, ℓ_j(±∞) = l_j^± because by the definitions of ℓ_j and κ_j we have
c_j ℓ_j(±∞) =
∑_i∈𝖨_1[ α_i,j c_i L_i,j^±]
- κ_j =
c_j l_j^±.
We notice that Proposition <ref> exploits condition (<ref>) through its expression (<ref>) for the profiles; the diffusivities D_h are not involved in (<ref>). Indeed, Proposition <ref> imposes strong necessary conditions on the diffusivities, as we now discuss in preparation for (<ref>).
We notice that if both ν_h^- and ν_h^+ are finite, then ℓ_h^-=0=1-ℓ_h^+ and consequently c_h=0; therefore either ν_h^- or ν_h^+ (possibly both) is infinite for any h∈𝖨_1∪𝖩.
The following result is similar to Lemma <ref>.
Problem (<ref>)-(<ref>) admits a degenerate non-stationary traveling wave ρ if and only if at least one of the following conditions holds:
(A) for some i ∈𝖨_0 we have D_i(0) D_i(1)= 0 and ℓ_i^- = 0 (hence ℓ_i^+=1);
(B) for every h ∈𝖨_1∪𝖩 we have either D_h(0) = 0 = ℓ_h^- or D_h(1) = 0 = ℓ_h^+-1, but not both. In this case we have
ω_i = ω_j≐ω, i∈𝖨_1, j ∈𝖩.
Let us introduce the following conditions:
(B)' for some i ∈𝖨_1 we have either D_i(0) = 0 = ℓ_i^- or D_i(1) = 0 = ℓ_i^+-1, but not both;
(B)” for some j ∈𝖩 we have either D_j(0) = 0 = ℓ_j^- or D_j(1) = 0 = ℓ_j^+-1, but not both.
Clearly (B) implies both (B)' and (B)”.
Moreover, by Lemma <ref> and (<ref>), problem (<ref>)-(<ref>) admits a degenerate non-stationary traveling wave ρ if and only if at least one of the conditions (A), (B)' and (B)” holds.
To complete the proof it is therefore sufficient to show that (B), (B)' and (B)” are equivalent.
By Lemma <ref>(b) and (<ref>), the conditions (B), (B)' and (B)” are respectively equivalent to
(I) ω_h is finite for every h ∈𝖨_1∪𝖩,
(II) for some i∈𝖨_1 we have that ω_i is finite,
(III) for some j∈𝖩 we have that ω_j is finite,
where ω_h is defined in (<ref>).
Differentiating (<ref>) gives
ϕ_j'(c_jξ) =
∑_i∈𝖨_1α_i,j c_i,j^2 ϕ_i'(c_iξ) for a.e. ξ∈ℝ, j ∈𝖩.
More precisely, by Lemma <ref>, formula (<ref>) holds for ξ∈ℝ∖({ω_j}∪⋃_i∈𝖨_1{ω_i}); moreover, by the same lemma we know that ξ↦ϕ_h'(c_hξ) is singular at ξ = ω_h and 𝐂^1 elsewhere, for h∈𝖨_1∪𝖩.
Hence, (<ref>) implies (<ref>). By (<ref>) we have that the above statements (I), (II) and (III) are equivalent and then also (B), (B)' and (B)” are so.
As for Lemma <ref>, we notice that Lemma <ref> implies that a non-stationary traveling wave ρ is either non-degenerate, and then ρ_h is non-degenerate for every h ∈𝖧, or ρ is degenerate, and then either there exists i∈𝖨_0 such that ρ_i is degenerate, or ρ_h is degenerate for all h ∈𝖨_1∪𝖩.
In both cases a non-stationary traveling wave ρ satisfies (<ref>).
When modeling traffic flows it is natural to use different diffusivities, which however share some common properties.
For instance, this led to consider in <cit.> the following subcase of (D):
(D1) D_h satisfies (D) and D_h(0)=0, D_h(1)>0, for every h∈𝖧.
The proof of the following result is an immediate consequence of Lemma <ref> and, hence, omitted.
Assume that problem (<ref>)-(<ref>) has a non-stationary traveling wave ρ and (D1) holds. Then ρ is degenerate if and only if at least one of the following conditions holds:
(A) for some i ∈𝖨_0 we have ℓ_i^- = 0 (hence ℓ_i^+ = 1);
(B) for every h ∈𝖨_1∪𝖩 we have ℓ_h^- = 0 (hence ℓ_h^+ ≠ 1).
The case when D_h satisfies (D) and D_h(0)=0=D_h(1) for every h∈𝖧, see <cit.>, can be dealt with analogously.
The next result is the most important of this paper; there, we give necessary and sufficient conditions for the existence of non-stationary traveling waves in a network. About its statement, let us recall Theorem <ref>: we have ϕ_h'(ξ)=0 in case (i) if ξ<ν_h^- or in case (ii) if ξ>ν_h^+. Since ϕ_h satisfies equation (<ref>), we are led to extend the quotient ℓ↦(g_h(ℓ)-g_h(ℓ_h^-))/D_h(ℓ) to the whole of [0,1] by defining
γ_h(ℓ) ≐ (g_h(ℓ)-g_h(ℓ_h^-))/D_h(ℓ) if D_h(ℓ)≠0,
γ_h(ℓ) ≐ 0 if D_h(ℓ)=0.
In fact, when ℓ is replaced by ϕ_h(ξ), then γ_h(ℓ)=ϕ_h'(ξ) for ξ∈ℝ∖{ν_h^-, ν_h^+}. We remark that the condition D_h(ℓ)=0 occurs at most when either ℓ=0 or ℓ=1. To avoid the introduction of the new notation (<ref>), in the following we simply keep on writing (g_h(ℓ)-g_h(ℓ_h^-))/D_h(ℓ) for γ_h(ℓ). As a consequence, any non-stationary traveling wave of problem (<ref>)-(<ref>) satisfies
ϕ_h'(ξ)
=
(g_h(ϕ_h(ξ)) - g_h(ℓ_h^-))/D_h(ϕ_h(ξ)), ξ∈ℝ∖{ν_h^-, ν_h^+}, h ∈𝖧.
Assume conditions (f) and (D).
Problem (<ref>)-(<ref>) admits a non-stationary traveling wave if and only if the following condition holds.
(𝒯) There exist ℓ_1^±,…,ℓ_m^±∈ [0,1] with ℓ_i^- < ℓ_i^+, i∈𝖨, such that:
(i) 𝖨_1≠∅;
(ii) for any j ∈𝖩 there exist ℓ_j^±∈[0,1] satisfying (<ref>) and such that f_j(ℓ_j^-)≠f_j(ℓ_j^+);
(iii) for any j ∈𝖩 we have
(g_j(ℓ_j(c_j ξ)) - g_j(ℓ_j^-))/D_j(ℓ_j(c_j ξ))
=
∑_i∈𝖨_1 A_i,j c_i,j (g_i(ϕ_i( c_i ξ)) - g_i(ℓ_i^-))/D_i(ϕ_i( c_i ξ)) for a.e. ξ∈ℝ,
where ϕ_1,…,ϕ_m are solutions to (<ref>)-(<ref>) and, for k_j as in (<ref>),
ℓ_j(ξ) ≐∑_i∈𝖨_1[ A_i,j ϕ_i(c_i,j ξ) ]
- k_j, ξ∈ℝ.
First, assume that problem (<ref>)-(<ref>) admits a non-stationary traveling wave ρ with profiles ϕ_h, end states ℓ_h^± and speeds c_h, for h∈𝖧. By Theorem <ref> we have that ℓ_h^± and c_h satisfy (<ref>).
By Proposition <ref> the profiles ϕ_h satisfy (<ref>)-(<ref>) and (<ref>).
The end states ℓ_j^±, j∈𝖩, satisfy (<ref>) by Proposition <ref>.
Since ρ is non-stationary we are in the scenario given by (<ref>): 𝖨_1≠∅ and f_j(ℓ_j^-)≠f_j(ℓ_j^+) for all j ∈𝖩.
By (<ref>) with h = j we have
ϕ_j'(c_jξ)
=
(g_j(ϕ_j(c_jξ)) - g_j(ℓ_j^-))/D_j(ϕ_j(c_jξ))
for ξ∈ℝ in the non-degenerate case and for ξ∈ℝ∖{ω} with ω given by (<ref>) in the degenerate case.
On the other hand, by differentiating (<ref>) and applying (<ref>) with h = i we deduce
ϕ_j'(ξ) =
∑_i∈𝖨_1 A_i,j c_i,j ϕ_i'( c_i,jξ)
=
∑_i∈𝖨_1 A_i,j c_i,j (g_i(ϕ_i( c_i,jξ)) - g_i(ℓ_i^-))/D_i(ϕ_i( c_i,jξ))
for ξ∈ℝ in the non-degenerate case and for ξ∈ℝ∖{ω} with ω given by (<ref>) in the degenerate case.
Identity (<ref>) follows because ℓ_j ≡ϕ_j by (<ref>) and by comparing (<ref>), (<ref>).
Conversely, assume that condition (𝒯) holds.
We remark that the existence of ϕ_i, i ∈𝖨, is assured by Theorem <ref>.
Fix j∈𝖩.
By defining ϕ_j ≐ℓ_j we obtain (<ref>).
We know by assumption that 𝖨_1≠∅, ℓ_j^±∈ [0,1] satisfy (<ref>) and c_j≠0; we can apply therefore Lemma <ref> and deduce that ℓ_j^-<ℓ_j^+, ϕ_j is non-decreasing and satisfies (<ref>) with h=j.
By Proposition <ref>, what remains to prove is that ϕ_j satisfies (<ref>).
But by (<ref>) we deduce (<ref>) for a.e. ξ∈ℝ, because ϕ_1,…,ϕ_m satisfy (<ref>) and hence, recalling the extension (<ref>), also (<ref>); then by (<ref>) we conclude that ϕ_j satisfies (<ref>) for a.e. ξ∈ℝ and then (<ref>) for a.e. ξ∈ℝ.
Finally, (<ref>) holds by the regularity ensured by Theorem <ref> for the profiles.
Fix ℓ_i^±∈ [0,1], i ∈𝖨, so that ℓ_i^- < ℓ_i^+ and (<ref>) holds. We know by Proposition <ref> that for every j∈𝖩 there exists (ℓ_j^-,ℓ_j^+) ∈ [0,1]^2, with ℓ_j^- < ℓ_j^+, that satisfies (<ref>), but it is not unique. If besides (<ref>) we impose also (<ref>), then we may have three possible scenarios: such (ℓ_j^-,ℓ_j^+) either does not exist, or it exists and is unique, or else it exists but is not unique. We refer to Subsections <ref> and <ref> for further discussion.
§ THE CONTINUITY CONDITION
In this section we discuss the case when solutions to (<ref>)-(<ref>) are also required to satisfy the continuity condition (<ref>); this makes the analysis much easier because (<ref>) implies several strong conditions.
First, we provide the main results about traveling waves satisfying condition (<ref>). We point out that some of the consequences below have already been pointed out in <cit.> in the case that some Kirchhoff conditions replace the conservation of the total flow (<ref>). In order to emphasize the consequences of the continuity condition (<ref>), the first two parts of the following lemma do not assume that also condition (<ref>) holds.
For any h ∈𝖧, let ρ_h be a traveling wave of (<ref>)_h in the sense of Definition <ref> and set ρ≐ (ρ_1, …, ρ_m+n); then the following holds for every (i,j) ∈𝖨×𝖩 and h∈𝖧.
(i) ρ satisfies (<ref>) if and only if
ϕ_j(c_jt) = ϕ_i(c_i t) ≐Φ(t), t∈ℝ.
(ii) If ρ satisfies (<ref>), then either it is stationary (hence (<ref>) reduces to ϕ_j(0) = ϕ_i(0)), or it is completely non-stationary and the speeds c_h have the same sign (hence c_i,j>0).
In the latter case, ρ is either non-degenerate or completely degenerate; moreover
(c_j^-1I_j) = (c_i^-1I_i) ≐ℐ,
ℓ_j^± =ℓ_i^±=L_i,j^±≐ℓ^±,
c_j (g_j(ℓ)-g_j(ℓ^±))/D_j(ℓ) = c_i (g_i(ℓ)-g_i(ℓ^±))/D_i(ℓ), ℓ∈(ℓ^-,ℓ^+).
(iii)
If ρ is non-stationary and satisfies both (<ref>) and (<ref>), then
c_j =∑_i∈𝖨α_i,jc_i, ∑_j∈𝖩c_j =∑_i∈𝖨c_i, κ_j=0, ∑_i ∈𝖨 A_i,j = 1.
We split the proof according to the items in the statement.
* Conditions (<ref>) and (<ref>) are clearly equivalent.
* Since we are discarding constant profiles, by (<ref>) we have that either c_h=0 for all h∈𝖧 or c_h≠0 for all h∈𝖧. The stationary case is trivial; therefore we consider below only the non-stationary case and assume that c_h≠0 for all h∈𝖧.
By differentiating (<ref>) with respect to t we deduce
c_jϕ_j'(c_jt) = c_iϕ_i'(c_i t) for a.e. t∈ℝ.
Then (<ref>) implies that either ρ is non-degenerate or it is completely degenerate.
Moreover (<ref>) implies (<ref>) because, by Lemma <ref>, we have that ρ_h is degenerate if and only if the map ξ↦ϕ_h'(c_h ξ) is singular at ξ = ω_h ∈ℝ and 𝐂^1 elsewhere.
By taking t ∈ℐ in (<ref>) we deduce that c_i and c_j have the same sign. As a consequence we have L_i,j^±=ℓ_i^± and then ℓ_i^± = ℓ_j^± by letting t→±∞ in (<ref>).
By (<ref>), (<ref>) and (<ref>) we have D_h(Φ(ξ)) Φ'(ξ) = c_h [g_h(Φ(ξ)) - g_h(ℓ^±)] for all h∈𝖧, whence (<ref>) by the extension (<ref>).
* To deduce (<ref>)_1, we differentiate (<ref>) and then exploit (<ref>). Formula (<ref>)_2 follows by summing (<ref>)_1 with respect to j and by (<ref>).
By (<ref>)_1 we have ∑_i ∈𝖨 A_i,j = ∑_i ∈𝖨α_i,j c_i c_j^-1=1, which proves (<ref>)_4.
Finally, (<ref>) and (<ref>)_4 imply (<ref>)_3.
In the following proposition we deal with stationary traveling waves satisfying condition (<ref>).
Problem (<ref>)-(<ref>) admits infinitely many stationary traveling waves satisfying (<ref>); their end states ℓ_h^± satisfy (<ref>) and are such that S ≐⋂_h∈𝖧(ℓ_h^-,ℓ_h^+)≠∅.
By (<ref>) condition (<ref>) holds in the stationary case if and only if ϕ_i(0)=ϕ_j(0) for (i,j)∈𝖨×𝖩.
Recalling the proof of Theorem <ref>, it is sufficient to take ℓ_h^±∈ [0,1] satisfying (<ref>) and such that S ≠∅, ℓ_0 ∈ S and the unique solution ϕ_h to (<ref>)-(<ref>) such that ϕ_h(0) = ℓ_0. There are infinitely many of such profiles because of the arbitrariness of ℓ_h^±.
We point out that condition S=∅ can occur if the functions f_h assume their maximum values at different points. This is not the case when the following condition (<ref>)_1 is assumed.
The following result is analogous to Theorem <ref> in the case (<ref>) holds.
Assume conditions (f) and (D). Problem (<ref>)-(<ref>) admits a (completely) non-stationary traveling wave satisfying (<ref>) if and only if the following condition holds.
(𝒯_c) There exist ℓ^±∈ [0,1] with ℓ^- < ℓ^+, such that for any h∈𝖧, i ∈𝖨 and j ∈𝖩
f_h(ℓ^-)≠f_h(ℓ^+),
f_j(ℓ^±) =∑_i ∈𝖨α_i,jf_i(ℓ^±),
c_j (g_j(ϕ_j(c_j t)) - g_j(ℓ^-))/D_j(ϕ_j(c_j t)) =
c_i (g_i(ϕ_i(c_i t)) - g_i(ℓ^-))/D_i(ϕ_i(c_i t)) for a.e. t∈ℝ,
where c_h is given by (<ref>), ϕ_h is a solution to (<ref>) such that φ_h(±∞) = ℓ^± and φ_1(0) = … = φ_m+n(0).
Assume that condition (𝒯_c) holds; the other implication is obvious.
We remark that the existence of ϕ_1,…,ϕ_m+n is assured by Theorem <ref>; indeed, for any ℓ_0 ∈ (ℓ^-,ℓ^+), up to shifts it is always possible to assume that φ_h(0) = ℓ_0, h ∈𝖧.
By (<ref>) we have (<ref>)_4 because
∑_i ∈𝖨 A_i,j = ∑_i ∈𝖨α_i,jf_i(ℓ^+) - f_i(ℓ^-)/f_j(ℓ^+) - f_j(ℓ^-) = 1.
By (<ref>) we have c_h ≠ 0 for every h ∈𝖧, so that ρ corresponding to the profile φ≐ (φ_1, …, φ_m+n) is completely non-stationary.
Then (<ref>)_4 and (<ref>) imply (<ref>)_3, namely κ_j = 0.
By Lemma <ref> (i) and Proposition <ref> it remains to prove (<ref>) and (<ref>).
We start with (<ref>).
Clearly (<ref>) holds for t=0 because ϕ_h(0) = ℓ_0, h∈𝖧.
Then by the extension (<ref>) and (<ref>) we have
d/ d t[ ϕ_j(c_jt) - ϕ_i(c_i t) ]
=
c_j [g_j(ϕ_j(c_j t)) - g_j(ℓ^-)]/D_j(ϕ_j(c_j t))
-
c_i [g_i(ϕ_i(c_i t)) - g_i(ℓ^-)]/D_i(ϕ_i(c_i t))
= 0.
Therefore we conclude that (<ref>) holds.
Finally, (<ref>) follows immediately from (<ref>), (<ref>)_3 and (<ref>)_4.
Consider in particular the case when the functions f and D satisfy (f) and (D), respectively, and assume that
f_h(ℓ) ≐ v_h f(ℓ), D_h(ℓ) ≐δ_h D(ℓ), ℓ∈ [0,1],
for some constants v_h,δ_h>0. Denote
v_i,j≐v_i/v_j, δ_i,j≐δ_i/δ_j.
We notice that now we have
v_i,j=c_i,j.
In the following proposition we apply Theorem <ref> when (<ref>) is assumed; in this case conditions (<ref>) and (<ref>) no longer depend on the end states and the statement is somewhat simplified.
Assume (<ref>) with f and D satisfying (f) and (D), respectively. Problem (<ref>)-(<ref>) admits a (completely) non-stationary traveling wave satisfying (<ref>) if and only if for every i∈𝖨 and j∈𝖩 we have
v_i,j^2=δ_i,j and ∑_i ∈𝖨α_i,j v_i,j = 1.
We only need to translate condition (𝒯_c) to the current case. Let ℓ^±∈ [0,1] with ℓ^-<ℓ^+ and f(ℓ^-) ≠ f(ℓ^+).
By (<ref>) it is obvious that (<ref>) is satisfied.
If ℓ^-=0 or ℓ^+=1 condition (<ref>) is satisfied by (f). In all the other cases (<ref>) is equivalent to
∑_i ∈𝖨α_i,j v_i,j = 1
by (<ref>).
Similarly, condition (<ref>) reduces to c_jv_jδ_j^-1=c_iv_iδ_i^-1 and hence, by (<ref>), it is equivalent to v_i,j^2=δ_i,j.
Remark that by (<ref>) condition (<ref>)_2 is equivalent to (<ref>)_4.
§ APPLICATION TO THE CASE OF A QUADRATIC FLUX, f(ρ) = ρ(1-ρ)
In this section we assume (<ref>) for some constants v_h,δ_h>0, D satisfying (D) and the quadratic flux <cit.>
f(ρ) ≐ρ (1-ρ),
with no further mention.
The case when only (<ref>)_1 holds can be treated analogously, with slight modifications. We use the notation introduced in (<ref>).
For simplicity, in the whole section we focus on the case m=1, see Figure <ref>, without further explicit mention. Then 𝖨={1}, 𝖩={2,…,n+1}, 𝖧={1,2,…, n+1}. The general case m>1 presents no additional difficulties beyond heavier calculations.
In this case, condition (<ref>) becomes
0≤ℓ_h^- < ℓ_h^+≤1 and c_h = v_h [1-ℓ_h^+ - ℓ_h^-].
In particular, by (<ref>)_2
ρ_h is stationary ⟺ ℓ_h^+ + ℓ_h^- = 1.
Moreover, g_h(ℓ) = v_h ℓ [ℓ_h^+ + ℓ_h^ - - ℓ] implies
g_h(ℓ) - g_h(ℓ_h^±) = v_h (ℓ_h^+ - ℓ) (ℓ - ℓ_h^-),
and therefore (<ref>) becomes
δ_h D(ϕ_h(ξ)) ϕ_h'(ξ) =
v_h [ℓ_h^+-ϕ_h(ξ)] [ϕ_h(ξ)-ℓ_h^-], ξ∈ℝ.
We first consider stationary traveling waves and specify Theorem <ref> and Proposition <ref> in the current framework.
We define the intervals
ℒ_j^0≐[0,1/2) if α_1,j v_1,j≤ 1,
[0, 1/2[1 - √(1-α_1,j^-1 v_1,j^-1)]) if α_1,j v_1,j > 1,
j ∈𝖩.
Problem (<ref>)-(<ref>) admits infinitely many stationary traveling waves; their end states are characterized by the conditions
ℓ_1^- ∈⋂_j ∈𝖩ℒ_j^0, ℓ_1^+ + ℓ_1^- = 1, ℓ_j^± = 1/2[
1±√(1-4 α_1,j v_1,j ℓ_1^+ ℓ_1^-)], j ∈𝖩.
Moreover, up to shifts, any stationary traveling wave satisfies (<ref>).
The first part of the proposition follows from Theorem <ref>.
Indeed, conditions (<ref>), (<ref>)_1 and (<ref>) are satisfied if and only if for any h∈𝖧 and j∈𝖩
ℓ_h^- ∈ [0, 1/2), ℓ_h^+ + ℓ_h^- = 1, ℓ_j^- (1 - ℓ_j^-) = α_1,j v_1,j ℓ_1^- (1 - ℓ_1^-);
then it is sufficient to compute ℓ_j^± and to observe that the definition of ℒ_j^0 guarantees that they are real numbers.
The latter part of the proposition is deduced by Proposition <ref> because 1/2 ∈S≐⋂_h ∈𝖧(ℓ_h^-, ℓ_h^+) ≠ ∅.
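The end states above are straightforward to evaluate in practice. The following is a minimal sketch (plain Python; the values of α_1,j, v_1,j and ℓ_1^- are illustrative assumptions, not taken from the text) that computes ℓ_1^+ and ℓ_j^± from a given ℓ_1^- ∈ ℒ_j^0 and verifies the balance ℓ_j^-(1-ℓ_j^-) = α_1,j v_1,j ℓ_1^-(1-ℓ_1^-):

```python
import math

def stationary_end_states(l1m, alpha, v):
    """End states of a stationary wave for the quadratic flux f(l) = l(1-l).

    l1m      : end state l_1^- of the incoming road, taken in L_j^0
    alpha, v : the coefficients alpha_{1,j} and v_{1,j}
    Returns (l1p, ljm, ljp)."""
    l1p = 1.0 - l1m                      # l_1^+ + l_1^- = 1 (stationarity)
    disc = 1.0 - 4.0 * alpha * v * l1p * l1m
    assert disc >= 0.0, "l_1^- must lie in the interval L_j^0"
    ljm = 0.5 * (1.0 - math.sqrt(disc))  # l_j^- (lower root)
    ljp = 0.5 * (1.0 + math.sqrt(disc))  # l_j^+ (upper root)
    return l1p, ljm, ljp

# illustrative parameters (assumed, not from the text)
alpha, v, l1m = 0.6, 1.2, 0.3
l1p, ljm, ljp = stationary_end_states(l1m, alpha, v)
# flux balance: l_j^- (1 - l_j^-) = alpha * v * l_1^- (1 - l_1^-)
assert abs(ljm * (1 - ljm) - alpha * v * l1m * (1 - l1m)) < 1e-12
assert abs(ljp + ljm - 1.0) < 1e-12      # road j is stationary as well
print(l1p, ljm, ljp)
```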
In the following we treat the existence of non-stationary traveling waves. Since m=1, by Lemma <ref> this is equivalent to assuming that c_h ≠ 0 for h ∈𝖧, namely, that the traveling wave is completely non-stationary.
By (<ref>), (<ref>) and (<ref>), from (<ref>) we deduce
c_1,j = v_1,j (1-ℓ_1^+-ℓ_1^-)/(1-ℓ_j^+-ℓ_j^-), A_1,j = α_1,j v_1,j (1-ℓ_1^+-ℓ_1^-)/(1-ℓ_j^+-ℓ_j^-),
k_j =
A_1,j L_1,j^± - ℓ_j^±, κ_j =v_jℓ_j^-ℓ_j^+-α_1,jv_1ℓ_1^-ℓ_1^+.
The following result translates Theorem <ref> to the present case.
We define the intervals
ℒ_j^c≐
[0,1] if α_1,j v_1,j≤ 1,
[0,1] ∖(1/2[1 - √(1-α_1,j^-1 v_1,j^-1)] , 1/2[1 + √(1-α_1,j^-1 v_1,j^-1)]) if α_1,j v_1,j > 1,
j ∈𝖩.
Problem (<ref>)-(<ref>) admits a (completely) non-stationary traveling wave if and only if the following condition holds.
(𝒯_q) There exist ℓ_1^±∈ [0,1] with ℓ_1^-<ℓ_1^+ such that:
(i) ℓ_1^++ℓ_1^- ≠ 1;
(ii) ℓ_1^±∈⋂_j ∈𝖩ℒ_j^c;
(iii) for any j ∈𝖩 we have
D(ℓ) = (α_1,j δ_1,j/v_1,j) D((ℓ+k_j)/A_1,j), ℓ∈(ℓ_j^-,ℓ_j^+),
where k_j is defined in (<ref>) with ℓ_j^± being solutions to
ℓ_j^±(1-ℓ_j^±)=α_1,jv_1,jL_1,j^±(1-L_1,j^±).
The proof consists in showing that, in the present case, condition (𝒯) of Theorem <ref> is equivalent to (𝒯_q).
∙ The first item of (𝒯) is clearly equivalent to the first item of (𝒯_q).
∙ We prove now that the second item of (𝒯) is equivalent to the second item of (𝒯_q).
“⇒”
Assume that for any j ∈𝖩 there exist ℓ_j^±∈[0,1] satisfying (<ref>) and such that f_j(ℓ_j^-) ≠ f_j(ℓ_j^+).
Fix j ∈𝖩.
Clearly (<ref>) is equivalent to (<ref>).
If we denote z_1,j^±≐ 4α_1,jv_1,jL_1,j^±(1-L_1,j^±), then the ℓ_j^±-solutions to (<ref>) are, see <ref>,
ℓ_j^- = 1/2[1-√(1-z_1,j^-)], ℓ_j^+ ∈{1/2[1±√(1-z_1,j^+)]} if c_j>0,
ℓ_j^- ∈{1/2[1±√(1-z_1,j^-)]}, ℓ_j^+ = 1/2[1+√(1-z_1,j^+)] if c_j<0.
The square roots in (<ref>) are real numbers if and only if z_1,j^±≤1, namely,
ℓ_1^±(1-ℓ_1^±) ≤ (4α_1,jv_1,j)^-1.
It is easy to see that the above estimate is equivalent to require ℓ_1^±∈ℒ_j^c.
“⇐”
Assume that ℓ_1^±∈⋂_j ∈𝖩ℒ_j^c and fix j ∈𝖩.
The square roots in (<ref>) are then real numbers and ℓ_j^± given in (<ref>) satisfy (<ref>), namely (<ref>).
Obviously ℓ_j^± belong to [0,1].
Finally, since ℓ_j^± are solutions to (<ref>), it is easy to see that f_j(ℓ_j^+) ≠ f_j(ℓ_j^-) because f_1(ℓ_1^+) ≠ f_1(ℓ_1^-).
∙ We prove now that (𝒯) implies the last item of (𝒯_q).
Since the first two items in (𝒯) are equivalent to the first two items in (𝒯_q), we can assume that ℓ_1^++ℓ_1^- ≠ 1, ℓ_1^±∈⋂_j ∈𝖩ℒ_j^c and that for any j ∈𝖩 we have (<ref>), namely,
(ℓ_j^+-ℓ_j(c_j ξ)) (ℓ_j(c_j ξ)-ℓ_j^-)/D(ℓ_j(c_j ξ))
=
A_1,j c_1,j (v_1,j/δ_1,j) (ℓ_1^+-ϕ_1( c_1 ξ)) (ϕ_1( c_1 ξ)-ℓ_1^-)/D(ϕ_1( c_1 ξ))
for a.e. ξ∈, where ϕ_1 is a solution to (<ref>)-(<ref>) and
ℓ_j(ξ) ≐ A_1,j[ ϕ_1(c_1,j ξ)-L_1,j^±]+ ℓ_j^±, ξ∈.
We point out that the above expression of ℓ_j is deduced from (<ref>) by applying (<ref>); moreover (<ref>) is deduced from (<ref>) by applying (<ref>).
Recall that both fractions in (<ref>) are meant as in (<ref>).
Since
(ℓ_j^+-ℓ_j(c_j ξ)) (ℓ_j(c_j ξ)-ℓ_j^-) =
A_1,j^2 ( L_1,j^+ - ϕ_1(c_1 ξ) ) ( ϕ_1(c_1 ξ) - L_1,j^- ),
(ℓ_1^+-ϕ_1( c_1 ξ)) (ϕ_1( c_1 ξ)-ℓ_1^-) =
(L_1,j^+-ϕ_1( c_1 ξ)) (ϕ_1( c_1 ξ)-L_1,j^-),
we have that (<ref>) is equivalent to
D(ℓ_j(c_jξ)) = (α_1,jδ_1,j/v_1,j) D(ϕ_1(c_1 ξ)) for a.e. ξ∈ℝ.
To conclude now that the above condition is equivalent to (<ref>) it is sufficient to recall that by Lemma <ref> the continuous function ξ↦ℓ_j(ξ) is increasing and ℓ_j(±∞) = ℓ_j^± and that ℓ_j(ξ) = A_1,j ϕ_1(c_1,j ξ) - k_j by (<ref>).
∙ Finally, to prove that (𝒯_q) implies the last item of (𝒯) it is enough to trace backwards the proof of the previous item.
We notice that if D is a polynomial with degree 𝔡, then (<ref>) is equivalent to 𝔡 + 1 conditions on the parameters, see for instance (<ref>) and (<ref>).
We point out that by Proposition <ref> we have that problem (<ref>)-(<ref>) admits a (completely) non-stationary traveling wave satisfying (<ref>) if and only if
v_1,j^2=δ_1,j and α_1,j v_1,j = 1, j∈𝖩.
The special cases of constant or linear diffusivities are treated in the following subsections.
§.§ The case of constant diffusivities
In this subsection we assume
D ≐ 1,
and in this case problem (<ref>)-(<ref>) reduces to
δ_h ϕ_h'(ξ) = v_h [ℓ_h^+-ϕ_h(ξ)][ϕ_h(ξ)-ℓ_h^-], ξ∈ℝ,
ϕ_h(±∞) = ℓ_h^±.
For any h∈𝖧, the function
ψ_h(ξ) ≐ ℓ_h^+/(1+e^-(v_h/δ_h)(ℓ_h^+-ℓ_h^-) ξ) + ℓ_h^-/(1+e^(v_h/δ_h)(ℓ_h^+-ℓ_h^-) ξ)
solves (<ref>) because ℓ_h^- <ℓ_h^+; all the other solutions are of the form ϕ_h(ξ)=ψ_h(ξ+σ_h) for σ_h ∈ℝ. Notice that ψ_h(0)=(ℓ_h^+ + ℓ_h^-)/2.
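As a quick sanity check (a numerical sketch with illustrative parameter values, not part of the proofs), one can verify that ψ_h solves δ_h ψ_h' = v_h(ℓ_h^+ - ψ_h)(ψ_h - ℓ_h^-) by comparing a centered finite difference of ψ_h with the right-hand side:

```python
import math

# illustrative parameters (assumed): v_h, delta_h, l_h^-, l_h^+
v, d, lm, lp = 1.5, 0.8, 0.2, 0.9

def psi(x):
    """Explicit profile for constant diffusivity: a logistic front."""
    k = (v / d) * (lp - lm)
    return lp / (1 + math.exp(-k * x)) + lm / (1 + math.exp(k * x))

# check delta * psi' = v (l^+ - psi)(psi - l^-) via a centered difference
h = 1e-6
for x in [-3.0, -1.0, 0.0, 0.7, 2.5]:
    lhs = d * (psi(x + h) - psi(x - h)) / (2 * h)
    rhs = v * (lp - psi(x)) * (psi(x) - lm)
    assert abs(lhs - rhs) < 1e-6, (x, lhs, rhs)

assert abs(psi(0.0) - 0.5 * (lp + lm)) < 1e-12   # psi(0) = (l^+ + l^-)/2
print("profile solves the ODE at the sampled points")
```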
We rewrite Proposition <ref> in the current setting; we emphasize that the shifts appear below because in this case we have the explicit solution (<ref>) to problem (<ref>).
Assume (<ref>).
Problem (<ref>)-(<ref>) admits a (completely) non-stationary traveling wave if and only if
α_1,jδ_1,j=v_1,j.
In this case any non-stationary traveling wave ρ has a profile φ of the form
ϕ(ξ)=(ψ_1(ξ +σ_1), …, ψ_n+1(ξ+σ_n+1)), ξ∈ℝ,
with ℓ_h^± satisfying (i), (ii) and (<ref>) in Proposition <ref> and σ_h∈ℝ, h∈𝖧, such that
c_jσ_1 =c_1σ_j, j∈𝖩.
By Theorem <ref>, any solution to (<ref>) has the form (<ref>) with σ_h∈, h∈𝖧. Therefore, by Proposition <ref> it only remains to prove that (<ref>) is equivalent to (<ref>)-(<ref>).
Straightforward computations show that in the present case (<ref>) can be written as
[f_j(ℓ_j^+) ζ_j(t) + f_j(ℓ_j^-)]/[1+ζ_j(t)] = α_1,j [f_1(ℓ_1^+) ζ_1(t) + f_1(ℓ_1^-)]/[1+ζ_1(t)], t ∈ℝ, j ∈𝖩,
where ζ_h(t) ≐exp z_h(t), for z_h(t) ≐v_h/δ_h (ℓ_h^+-ℓ_h^-) (c_ht+σ_h), h ∈𝖧.
By Proposition <ref> we have
either f_j(ℓ_j^±) = α_1,jf_1(ℓ_1^±), or f_j(ℓ_j^±) = α_1,jf_1(ℓ_1^∓).
∙ In the former case, identity (<ref>) is equivalent to
[ f_j(ℓ_j^+) - f_j(ℓ_j^-) ] [ζ_j(t)-ζ_1(t) ]=0,
t ∈ℝ, j ∈𝖩.
Since by assumption f_j(ℓ_j^+) ≠ f_j(ℓ_j^-), it must be ζ_j ≡ζ_1, i.e., z_j(t) = z_1(t), namely
v_j/δ_j (ℓ_j^+-ℓ_j^-) c_j = v_1/δ_1 (ℓ_1^+-ℓ_1^-) c_1,
v_j/δ_j (ℓ_j^+-ℓ_j^-) σ_j = v_1/δ_1 (ℓ_1^+-ℓ_1^-) σ_1,
⇔ v_1,j/δ_1,j = [f_j(ℓ_j^+) - f_j(ℓ_j^-)]/[f_1(ℓ_1^+) - f_1(ℓ_1^-)] = α_1,j,
σ_j/c_j = σ_1/c_1.
∙ In the latter case, identity (<ref>) is equivalent to
[ f_j(ℓ_j^+) - f_j(ℓ_j^-) ][ζ_j(t)ζ_1(t)-1]=0,
t ∈ℝ, j ∈𝖩.
Since by assumption f_j(ℓ_j^+) ≠ f_j(ℓ_j^-), it must be ζ_j ζ_1 ≡ 1, i.e. z_j(t) = -z_1(t), namely
v_j/δ_j (ℓ_j^+-ℓ_j^-) c_j =- v_1/δ_1 (ℓ_1^+-ℓ_1^-) c_1,
v_j/δ_j (ℓ_j^+-ℓ_j^-) σ_j =- v_1/δ_1 (ℓ_1^+-ℓ_1^-) σ_1,
⇔ v_1,j/δ_1,j = -[f_j(ℓ_j^+) - f_j(ℓ_j^-)]/[f_1(ℓ_1^+) - f_1(ℓ_1^-)] = α_1,j,
σ_j/c_j = σ_1/c_1.
In both cases we proved that (<ref>) is equivalent to (<ref>)-(<ref>); this concludes the proof.
Consider conditions (<ref>)_1, (<ref>)_2 and (<ref>). Any two of them imply the third one.
Assume (<ref>).
Problem (<ref>)-(<ref>) admits a (completely) non-stationary traveling wave satisfying (<ref>) if and only if (<ref>) holds true.
In this case a non-stationary traveling wave satisfies (<ref>) if and only if its end states satisfy (<ref>).
The first part of the statement is just Remark <ref>. In this case, since (<ref>) implies (<ref>), by Proposition <ref> any (completely) non-stationary traveling wave ρ has a profile of the form (<ref>)-(<ref>).
The second part of the statement characterizes the end states. If a non-stationary traveling wave ρ satisfies (<ref>), then (<ref>) holds because of Lemma <ref>. Conversely, if the end states of ρ satisfy (<ref>), then long but straightforward computations show that (<ref>) holds true, and therefore ρ satisfies (<ref>).
§.§ The case of linear diffusivities
In this subsection we assume
D(ρ) ≐ρ.
We notice that D degenerates at 0 and this makes the subject more interesting. In this case problem (<ref>)-(<ref>) reduces to
δ_hφ_hφ'_h =
v_h (ℓ^+_h - φ_h) (φ_h - ℓ^-_h), ξ∈ℝ,
φ_h(±∞)=ℓ_h^±.
If ℓ_h^-=0, then the function
ψ_h(ξ) ≐ (ℓ_h^+/2)(2-e^-(v_h/δ_h)ξ) if ξ≥-(δ_h/v_h)ln2,
0 if ξ<-(δ_h/v_h)ln2,
solves (<ref>) because ℓ_h^- <ℓ_h^+; by (<ref>) we have I_h=(-(δ_h/v_h)ln2,∞). If ℓ_h^->0, then I_h=ℝ and the function ψ_h implicitly given by
[2 exp((v_h/δ_h) ξ) (ψ_h(ξ)-ℓ_h^-)/(ℓ_h^+-ℓ_h^-)]^ℓ_h^-
=
[2 exp((v_h/δ_h) ξ) (ℓ_h^+-ψ_h(ξ))/(ℓ_h^+-ℓ_h^-)]^ℓ_h^+
solves (<ref>) because ℓ_h^- <ℓ_h^+.
Notice that in both cases ψ_h(0)=(ℓ_h^+ + ℓ_h^-)/2 and all the other solutions are of the form ϕ_h(ξ)=ψ_h(ξ+σ_h) for σ_h ∈ℝ.
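When ℓ_h^->0 the profile is only known implicitly. Taking logarithms in (<ref>) yields a function that is strictly increasing in ψ_h on (ℓ_h^-,ℓ_h^+) and ranges over all of ℝ, so ψ_h(ξ) can be recovered pointwise by bisection. A minimal sketch (plain Python; the parameter values are illustrative assumptions), including a finite-difference check of δ_h ψ_h ψ_h' = v_h(ℓ_h^+-ψ_h)(ψ_h-ℓ_h^-):

```python
import math

v, d, lm, lp = 1.0, 0.5, 0.2, 0.8            # v_h, delta_h, l_h^- > 0, l_h^+

def psi(xi, tol=1e-13):
    """Profile psi_h(xi) for D(rho) = rho and l_h^- > 0, from the implicit relation.

    F(p) = l^- log(2 e^{(v/d) xi} (p - l^-)/(l^+ - l^-))
         - l^+ log(2 e^{(v/d) xi} (l^+ - p)/(l^+ - l^-))
    is strictly increasing on (l^-, l^+) with limits -inf and +inf,
    so its unique zero is found by bisection."""
    a = math.log(2.0) + (v / d) * xi
    def F(p):
        return (lm * (a + math.log((p - lm) / (lp - lm)))
                - lp * (a + math.log((lp - p) / (lp - lm))))
    lo, hi = lm, lp
    for _ in range(200):                      # F is evaluated only at interior points
        mid = 0.5 * (lo + hi)
        if F(mid) < 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

assert abs(psi(0.0) - 0.5 * (lp + lm)) < 1e-9     # psi_h(0) = (l^+ + l^-)/2
h = 1e-5                                          # check d*psi*psi' = v(l^+-psi)(psi-l^-)
p0 = psi(0.3)
lhs = d * p0 * (psi(0.3 + h) - psi(0.3 - h)) / (2 * h)
rhs = v * (lp - p0) * (p0 - lm)
assert abs(lhs - rhs) < 1e-4
print(p0, lhs, rhs)
```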
Hence, any non-stationary traveling wave ρ has a profile φ of the form
ϕ(ξ)=(ψ_1(ξ +σ_1), …, ψ_n+1(ξ+σ_n+1)), ξ∈ℝ.
In the sequel we prove that the shifts σ_h, h ∈𝖧, satisfy (<ref>), or equivalently
v_1,j σ_1 =δ_1,j σ_j, j∈𝖩.
Assume (<ref>). If ℓ_1^+ + ℓ_1^- ≠ 1, then condition (<ref>) is equivalent to
v_1,j^2/δ_1,j = (1-ℓ_j^+-ℓ_j^-)/(1-ℓ_1^+-ℓ_1^-) and ℓ_j^- ℓ_j^+ = α_1,j v_1,j ℓ_1^- ℓ_1^+.
In the present case, condition (<ref>) becomes
(c_1,jv_1,j - δ_1,j) ℓ - δ_1,j k_j = 0 for ℓ∈ (ℓ_j^-,ℓ_j^+):
it is satisfied if and only if both c_1,jv_1,j = δ_1,j and k_j=0.
The former is equivalent to (<ref>)_1, the latter is equivalent to (<ref>)_2 by (<ref>)_4, because κ_j=0.
We observe that (<ref>)_1 and (<ref>)_1 imply that c_1,j = δ_1,j/v_1,j > 0; therefore (<ref>) becomes
ℓ_j^±(1- ℓ_j^±)=α_1,j v_1,j ℓ_1^±(1- ℓ_1^±), j ∈𝖩.
As a consequence ρ is either non-degenerate or completely degenerate.
Now, we discuss (completely) non-stationary traveling waves by considering separately the (completely) degenerate and non-degenerate case.
We denote
Δ_j ≐{α_1,j δ_1,j, √(δ_1,j), (α_1,j δ^2_1,j)^1/3}, j ∈𝖩.
Assume (<ref>).
Problem (<ref>)-(<ref>) admits a traveling wave that is both (completely) degenerate and (completely) non-stationary if and only if either (<ref>) holds true or
0 < v_1,j < minΔ_j or v_1,j > maxΔ_j, j ∈𝖩, and
v_1,2 (δ_1,2-v_1,2^2)/(α_1,2 δ_1,2^2-v_1,2^3) =
… =
v_1,n+1 (δ_1,n+1-v_1,n+1^2)/(α_1,n+1 δ_1,n+1^2-v_1,n+1^3).
In the first case, problem (<ref>)-(<ref>) has infinitely many of such waves; each of them satisfies (<ref>) and (up to shifts) (<ref>).
In the second case, problem (<ref>)-(<ref>) has a unique (up to shifts) such wave, which does not satisfy (<ref>) for any choice of the shifts. Its end states do not satisfy (<ref>) and are
ℓ_1^- = 0 = ℓ_j^-, ℓ_1^+ = v_1,j (δ_1,j-v_1,j^2)/(α_1,j δ_1,j^2-v_1,j^3), ℓ_j^+ = α_1,j δ_1,j (δ_1,j-v_1,j^2)/(α_1,j δ_1,j^2-v_1,j^3), j ∈𝖩.
In both cases, any degenerate non-stationary traveling wave ρ has a profile φ of the form (<ref>) with ψ_h defined by (<ref>) and σ_h∈, h∈𝖧, satisfying (<ref>).
We claim that the existence of a degenerate non-stationary traveling wave is equivalent to the existence of ℓ_h^+∈ (0,1), h ∈𝖧, such that
ℓ^+_j=α_1,j δ_1,j/v_1,j ℓ_1^+ and [ α_1,j δ_1,j^2 - v_1,j^3 ] ℓ_1^+
+ v_1,j[v_1,j^2 - δ_1,j] = 0, j∈𝖩.
In fact, by Proposition <ref> the existence of a non-stationary traveling wave is equivalent to condition (𝒯_q), where (<ref>) can be written as (<ref>) by Lemma <ref> and (<ref>) as (<ref>).
Then, (<ref>) and (<ref>) with ℓ_h^-=0, h ∈𝖧, reduce to the relation among the end states
v_1,j^2/δ_1,j=(1-ℓ_j^+)/(1-ℓ_1^+), ℓ_j^+ (1 - ℓ_j^+) = α_1,j v_1,j ℓ_1^+ (1 - ℓ_1^+), j ∈𝖩.
By (<ref>) we obtain v_1,j^2/δ_1,j = α_1,j v_1,j ℓ_1^+/ℓ_j^+ and then (<ref>)_1; by plugging (<ref>)_1 into (<ref>)_2 we get (<ref>)_2 and then the claim.
Assume there is a degenerate non-stationary traveling wave; then ℓ_1^+ satisfies (<ref>)_2. As a consequence, we have either α_1,jδ^2_1,j- v^3_1,j=v_1,j^2-δ_1,j=0 or α_1,jδ^2_1,j ≠ v^3_1,j for every j ∈𝖩. The former case is equivalent to (<ref>). In the latter case we can explicitly compute ℓ_1^+ by (<ref>)_2 for any j ∈𝖩 and impose the constraint 0<ℓ_1^+<1, namely,
0< v_1,j (δ_1,j-v_1,j^2)/(α_1,j δ_1,j^2-v_1,j^3) < 1.
A direct computation shows that this is equivalent to (<ref>). In conclusion, either condition (<ref>) or (<ref>) is necessary for the existence of a non-stationary traveling wave with ℓ_1^-=0.
Conversely, assume condition (<ref>). In this case α_1,j δ_1,j^2 =α_1,j v_1,j^4=v_1,j^3. Then (<ref>)_2 is trivially satisfied for every ℓ_1^+∈ (0,1) and from (<ref>)_1 we deduce ℓ_1^+=ℓ_j^+. Hence, there is an infinite family of non-stationary traveling waves parameterized by ℓ_1^+ ∈ (0,1) and satisfying (<ref>); as a consequence, they do not coincide up to shifts and they all satisfy (up to shifts) the continuity condition (<ref>).
Assume now condition (<ref>). In this case the values for ℓ_1^+ and ℓ_j^+ in (<ref>) are well defined since v^3_1,j ≠ α_1,jδ^2_1,j, and they are the unique solution to (<ref>). In particular, condition (ii) in Proposition <ref> is automatically satisfied. By the estimates in (<ref>) we have ℓ_1^+,ℓ_j^+ ∈ (0,1) for j ∈𝖩. Hence, there is a unique (up to shifts) degenerate non-stationary traveling wave and its end states satisfy (<ref>). Furthermore, by Lemma <ref>, condition (<ref>) implies (<ref>), which is precluded by (<ref>). Hence, the traveling wave does not satisfy (<ref>).
At last, by Theorem <ref>, any solution to (<ref>) has the form (<ref>).
By (<ref>), which in the present case becomes
ϕ_j'(c_jξ) = α_1,j c_1,j^2 ϕ_1'(c_1ξ) for a.e. ξ∈ℝ, j ∈𝖩,
and the regularity of ψ_h defined in (<ref>), we have
1/c_j[ δ_j/v_j ln 2 + σ_j ]
=
1/c_1[ δ_1/v_1 ln 2 + σ_1 ],
which is equivalent to (<ref>) because c_1,j = δ_1,j/v_1,j.
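The explicit end states (<ref>) can be checked directly against the relations derived above; a minimal numerical sketch (plain Python; the values of α_1,j, δ_1,j, v_1,j are illustrative assumptions chosen so that v_1,j < min Δ_j):

```python
# illustrative parameters (assumed): m = 1, a single outgoing road j
alpha, dlt, v = 1.0, 2.0, 0.8             # alpha_{1,j}, delta_{1,j}, v_{1,j}

Delta = (alpha * dlt, dlt ** 0.5, (alpha * dlt ** 2) ** (1.0 / 3.0))
assert v < min(Delta)                      # admissibility condition

den = alpha * dlt ** 2 - v ** 3
l1p = v * (dlt - v ** 2) / den             # l_1^+
ljp = alpha * dlt * (dlt - v ** 2) / den   # l_j^+
assert 0.0 < l1p < 1.0 and 0.0 < ljp < 1.0

# constraints coupling the end states (with l_1^- = 0 = l_j^-):
# v^2/delta = (1 - l_j^+)/(1 - l_1^+)  and  l_j^+(1-l_j^+) = alpha v l_1^+(1-l_1^+)
assert abs(v ** 2 / dlt - (1 - ljp) / (1 - l1p)) < 1e-12
assert abs(ljp * (1 - ljp) - alpha * v * l1p * (1 - l1p)) < 1e-12
print(l1p, ljp)
```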
The following result treats the non-degenerate case.
Assume (<ref>).
Problem (<ref>)-(<ref>) admits a non-degenerate (completely) non-stationary traveling wave if and only if condition (<ref>) is satisfied. In this case any non-degenerate non-stationary traveling wave satisfies (up to shifts) (<ref>); moreover, it has a profile φ of the form (<ref>) with ψ_h implicitly defined by (<ref>) and σ_h∈ℝ, h∈𝖧, satisfying (<ref>).
Assume that there is a non-degenerate non-stationary traveling wave;
then ℓ_h^- ≠ 0 and ℓ_h^+ + ℓ_h^- ≠ 1, h ∈𝖧. Moreover, by Proposition <ref>, condition (𝒯_q) is satisfied, where (<ref>) becomes (<ref>) by Lemma <ref> and (<ref>) is (<ref>).
When dividing (<ref>) by (<ref>)_2 we obtain
(1-ℓ_j^+) ℓ_1^- = (1-ℓ_1^+) ℓ_j^- and (1-ℓ_j^-) ℓ_1^+ = (1-ℓ_1^-) ℓ_j^+, j ∈𝖩.
By adding the above relations we have ℓ_1^- + ℓ_1^+ = ℓ_j^- + ℓ_j^+, hence
0 = ℓ_1^+ - ℓ_j^+ + ℓ_1^- - ℓ_j^-
= ℓ_1^+ - 1 + (1-ℓ_1^+) ℓ_j^-/ℓ_1^- + ℓ_1^- - ℓ_j^-
= [(1-ℓ_1^+-ℓ_1^-)/ℓ_1^-] (ℓ_j^- - ℓ_1^-).
It is now easy to conclude that (<ref>) is satisfied and then also (<ref>) holds true by (<ref>). At last, the traveling wave satisfies (up to shifts) (<ref>) by Remark <ref>.
Conversely, assume (<ref>).
Then (<ref>) and (<ref>) write
ℓ_j^+ + ℓ_j^- = ℓ_1^+ + ℓ_1^-, ℓ_j^- ℓ_j^+ =ℓ_1^- ℓ_1^+, ℓ_j^±(1- ℓ_j^±)=ℓ_1^±(1- ℓ_1^±), j ∈𝖩.
The same computations as before give that if we impose ℓ_h^- ≠ 0 and ℓ_h^+ + ℓ_h^- ≠ 1, h ∈𝖧, then the above conditions are equivalent to (<ref>); the existence of infinitely many non-degenerate non-stationary traveling waves satisfying (<ref>) easily follows.
At last, by Theorem <ref>, any solution to (<ref>) has the form (<ref>).
Fix j ∈𝖩.
By (<ref>) we have (<ref>), namely
ψ_j(c_jt+σ_j) = ψ_1(c_1t+σ_1), t ∈ℝ.
This identity, together with (<ref>) and (<ref>), implies
[2 exp((v_j/δ_j)(c_jt+σ_j)) (ψ_1(c_1t+σ_1)-ℓ^-)/(ℓ^+-ℓ^-)]^ℓ^-
=
[2 exp((v_j/δ_j)(c_jt+σ_j)) (ℓ^+-ψ_1(c_1t+σ_1))/(ℓ^+-ℓ^-)]^ℓ^+,
[2 exp((v_1/δ_1)(c_1t+σ_1)) (ψ_1(c_1t+σ_1)-ℓ^-)/(ℓ^+-ℓ^-)]^ℓ^-
=
[2 exp((v_1/δ_1)(c_1t+σ_1)) (ℓ^+-ψ_1(c_1t+σ_1))/(ℓ^+-ℓ^-)]^ℓ^+.
By dividing the above equalities and taking the logarithm we get
[ (v_j/δ_j)(c_jt+σ_j) - (v_1/δ_1)(c_1t+σ_1) ] ℓ^- = [ (v_j/δ_j)(c_jt+σ_j) - (v_1/δ_1)(c_1t+σ_1) ] ℓ^+, t ∈ℝ.
Since ℓ^- ≠ ℓ^+ and c_1,j = δ_1,j/v_1,j, the above equality is equivalent to (<ref>).
§ APPLICATION TO THE CASE OF A LOGARITHMIC FLUX, f(ρ) = -ρ ln(ρ)
In this section we assume (<ref>) for some constants v_h,δ_h>0, D ≐ 1 and the logarithmic flux <cit.> defined by
f(ρ) ≐ -ρln(ρ)
for ρ∈(0,1] with f(0)=0 by continuity; in the following we simply write ρln (ρ) for ρ∈ [0,1]. We use the notation introduced in (<ref>); then, in the present case the diffusivity D_h coincides with the anticipation length δ_h of <cit.>, see Section <ref>. As in Section <ref>, we focus on the case m=1 and do not repeat these assumptions on f_h, D_h and m in the following.
Condition (<ref>) becomes
0≤ℓ_h^- < ℓ_h^+≤1 and c_h=-v_h [ℓ^+_hln(ℓ^+_h)-ℓ^-_hln(ℓ^-_h)]/(ℓ^+_h-ℓ^-_h).
Moreover we have, for h ∈𝖧,
g_h(ℓ)=
v_hℓ[(ℓ^+_hln(ℓ^+_h)-ℓ^-_hln(ℓ^-_h))/(ℓ^+_h-ℓ^-_h) - ln(ℓ)],
g_h(ℓ) - g_h(ℓ_h^±)=
v_h [((ℓ-ℓ_h^-)ℓ^+_hln(ℓ^+_h)+(ℓ_h^+-ℓ)ℓ^-_hln(ℓ^-_h))/(ℓ^+_h-ℓ^-_h) - ℓln(ℓ)].
Therefore (<ref>) becomes
ϕ_h'(ξ) = (v_h/δ_h)[ ([ϕ_h(ξ) - ℓ_h^-] ℓ^+_h ln(ℓ^+_h) + [ℓ_h^+ - ϕ_h(ξ)] ℓ^-_h ln(ℓ^-_h))/(ℓ^+_h - ℓ^-_h)
- ϕ_h(ξ) ln(ϕ_h(ξ)) ],
for ξ∈ℝ. Let f_ℓ^-1:[0,e^-1] → [0,e^-1] and f_r^-1:[0,e^-1] → [e^-1,1] be the inverse functions of the restrictions f_ℓ and f_r of f to [0, e^-1] and [e^-1, 1], respectively.
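The inverse branches f_ℓ^-1 and f_r^-1 admit no elementary closed form (they can be expressed through the two real branches of the Lambert W function), but since f is strictly increasing on [0, e^-1] and strictly decreasing on [e^-1, 1] they are easily evaluated by bisection. A minimal sketch (plain Python):

```python
import math

E = math.exp(-1.0)                        # f attains its maximum 1/e at rho = 1/e

def f(r):
    """Logarithmic flux f(rho) = -rho ln(rho), extended by f(0) = 0."""
    return 0.0 if r == 0.0 else -r * math.log(r)

def _bisect(y, lo, hi, increasing, tol=1e-14):
    assert 0.0 <= y <= E
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(mid) < y) == increasing:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def f_left_inv(y):                        # inverse of f restricted to [0, 1/e]
    return _bisect(y, 0.0, E, increasing=True)

def f_right_inv(y):                       # inverse of f restricted to [1/e, 1]
    return _bisect(y, E, 1.0, increasing=False)

y = 0.25                                  # any value in [0, 1/e]; 1/e ~ 0.3679
rl, rr = f_left_inv(y), f_right_inv(y)
assert rl <= E <= rr and abs(f(rl) - y) < 1e-10 and abs(f(rr) - y) < 1e-10
print(rl, rr)
```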
We first consider the case of stationary waves. We define the intervals
ℒ_j^0≐
[0, e^-1) if α_1,j v_1,j≤ 1,
[0, f_ℓ^-1(e^-1α_1,j^-1 v_1,j^-1)) if α_1,j v_1,j > 1,
j ∈𝖩.
Problem (<ref>)-(<ref>) admits infinitely many stationary traveling waves; their end states are characterized by the conditions
ℓ_1^- ∈⋂_j ∈𝖩ℒ_j^0, ℓ_1^+ = f_r^-1(-ℓ_1^-ln (ℓ_1^-)),
ℓ_j^- = f_ℓ^-1(-α_1,j v_1,jℓ_1^-ln (ℓ_1^-)), ℓ_j^+ = f_r^-1(-α_1,j v_1,jℓ_1^-ln (ℓ_1^-)), j ∈𝖩.
Moreover, up to shifts, any stationary traveling wave satisfies (<ref>).
The first part of the proposition follows from Theorem <ref>.
Indeed, conditions (<ref>)_1 and (<ref>) are satisfied if and only if for any h∈𝖧 and j∈𝖩
ℓ_h^- ∈ [0, e^-1), ℓ_h^-ln(ℓ_h^-)=ℓ_h^+ln(ℓ_h^+), ℓ_j^- ln( ℓ_j^-)= α_1,j v_1,j ℓ_1^- ln( ℓ_1^-).
Hence ℓ_1^+=f_r^-1(-ℓ_1^-ln(ℓ_1^-)) and it is sufficient to determine ℓ_j^±. Observe that the definition of ℒ_j^0 guarantees that they can be uniquely computed.
Finally, the latter part of the proposition follows by the proof of Proposition <ref> since e^-1∈S≐⋂_h∈𝖧(ℓ_h^-,ℓ_h^+) ≠ ∅.
In the following we discuss the existence of non-stationary traveling waves. Since m=1, by Lemma <ref> this is equivalent to assume that the traveling wave is completely non-stationary. By (<ref>)_2 we deduce
c_1,j = v_1,j [ℓ_1^+ln(ℓ_1^+)-ℓ_1^-ln(ℓ_1^-)]/[ℓ_j^+ln(ℓ_j^+)-ℓ_j^-ln(ℓ_j^-)] · (ℓ_j^+-ℓ_j^-)/(ℓ_1^+-ℓ_1^-).
The following result translates Theorem <ref> to the current framework. We define the intervals
ℒ_j^c≐
[0,1] if α_1,j v_1,j≤ 1,
[0,1] ∖(f_ℓ^-1(e^-1α_1,j^-1 v_1,j^-1) , f_r^-1(e^-1α_1,j^-1 v_1,j^-1)) if α_1,j v_1,j > 1,
j ∈𝖩.
Problem (<ref>)-(<ref>) admits a (completely) non-stationary traveling wave if and only if the following condition holds.
(𝒯_l) There exist ℓ_1^±∈ [0,1] with ℓ_1^-<ℓ_1^+ such that:
(i) ℓ_1^-ln(ℓ_1^-) ≠ ℓ_1^+ ln(ℓ_1^+);
(ii) ℓ_1^±∈⋂_j ∈𝖩ℒ_j^c;
(iii) for any j ∈𝖩 we have
δ_1,j[ g_j(ℓ)- g_j(ℓ_j^-) ] = A_1,j c_1,j[ g_1((ℓ+k_j)/A_1,j)-g_1(ℓ_1^-) ], ℓ∈ (ℓ^-_j, ℓ^+_j),
where g_h is given in (<ref>), c_1,j in (<ref>), A_1,j in (<ref>)_2 and k_j in (<ref>)_3, with ℓ_j^± being solutions to
ℓ_j^±ln(ℓ_j^±)=α_1,jv_1,jL_1,j^±ln(L_1,j^±).
The proof consists in showing that, in the present case, (𝒯) of Theorem <ref> is equivalent to (𝒯_l). The first two items in (𝒯) and (𝒯_l) are clearly equivalent.
It remains to discuss the third one. Condition (<ref>) is equivalent to
δ_1,j[g_j(ℓ_j(c_jξ))- g_j(ℓ_j^-)] = A_1,j c_1,j[ g_1(ϕ_1(c_1ξ))-g_1(ℓ_1^-)], ξ∈ℝ,
where ϕ_1 is a solution to (<ref>)-(<ref>) and ℓ_j(ξ) ≐ A_1,jϕ_1(c_1,jξ)-k_j for c_1,j in (<ref>), A_1,j in (<ref>)_2 and k_j in (<ref>)_3. By Theorem <ref>, ϕ_1 is strictly increasing and so is the function ℓ_j. Put ℓ≐ℓ_j(c_jξ). Hence ℓ∈ (ℓ_j^-, ℓ_j^+), by Lemma <ref>, and then (<ref>) is equivalent to (<ref>).
In the following we focus on the case of (completely) non-stationary traveling waves with ℓ_h^-=0 for some h ∈𝖧.
Assume that problem (<ref>)-(<ref>) admits a traveling wave. The following statements are equivalent:
(i)
ℓ_1^- = 0;
(ii)
ℓ_j^- = 0 for all j ∈𝖩;
(iii)
there exists j∈𝖩 such that ℓ_ j^- = 0.
First, we prove that (i) implies (ii).
Fix j ∈𝖩.
Since ℓ_1^-=0, then condition (<ref>) implies that either ℓ_j^-=0 or ℓ_j^+=1, for j ∈𝖩. Assume by contradiction that ℓ_j^+=1.
Since c_1,j<0, condition (<ref>) becomes
ℓ_j^-ln(ℓ_j^-)=α_1,j v_1,jℓ_1^+ln(ℓ_1^+).
Therefore, by (<ref>), (<ref>)_2 and (<ref>)_3 we have that
c_1,j = - v_1,j [ℓ_1^+ln(ℓ_1^+)/(ℓ_j^-ln(ℓ_j^-))] (1-ℓ_j^-)/ℓ_1^+
= - (1-ℓ_j^-)/(α_1,j ℓ_1^+), A_1,j = - (1-ℓ_j^-)/ℓ_1^+,
k_j = -1.
Condition (<ref>) can be written as
ℓln(ℓ) - v_1,j (1-ℓ) [ α_1,j ℓ_1^+ln(ℓ_1^+)/(1-ℓ_j^-)
+ (1-ℓ_j^-)/(α_1,j δ_1,j ℓ_1^+) ln( (1-ℓ)/(1-ℓ_j^-) ) ]=0,
for ℓ∈ (ℓ_j^-, 1).
By differentiating the above equation three times we obtain
-v_1,j(1-ℓ_j^-)/[α_1,jδ_1,jℓ_1^+(1-ℓ)^2] = 1/ℓ^2, ℓ∈ (ℓ_j^-, 1).
This is a contradiction because the two sides have opposite sign. This proves (ii).
Since the implication (ii) ⇒ (iii) is obvious, it remains to show that (iii) ⇒ (i). Let ℓ_j^- = 0 for some j∈𝖩. By (<ref>) it follows that either ℓ_1^-=0 or ℓ_1^+=1. In the latter case, by arguing as above it is easy to obtain a contradiction, and then (i) follows.
Finally, we give a result similar to the one in Proposition <ref>. We denote
Δ_j ≐{α_1,j δ_1,j, √(δ_1,j)}, j∈𝖩.
By Lemma <ref> we have either ℓ_h^-=0, h ∈𝖧, or ℓ_h^- ≠ 0, h ∈𝖧.
Below we consider the first case.
Problem (<ref>)-(<ref>) admits a (completely) non-stationary traveling wave with ℓ_h^-=0, h ∈𝖧, if and only if either (<ref>) holds true or
0 < v_1,j < minΔ_j or v_1,j > maxΔ_j, j ∈𝖩, and
[α_1,2δ_1,2/v_1,2]^δ_1,2/(v_1,2^2 - δ_1,2) =
… =
[α_1,n+1δ_1,n+1/v_1,n+1]^δ_1,n+1/(v_1,n+1^2 - δ_1,n+1).
In the first case, problem (<ref>)-(<ref>) has infinitely many of such waves; each of them satisfies (<ref>) and (up to shifts) (<ref>).
In the second case, problem (<ref>)-(<ref>) has a unique (up to shifts) such wave, which does not satisfy (<ref>) for any choice of the shifts. Its end states are
ℓ_1^-=0=ℓ_j^-, ℓ_1^+
=
[α_1,jδ_1,j/v_1,j]^δ_1,j/(v_1,j^2 - δ_1,j),
ℓ_j^+ = [α_1,jδ_1,j/v_1,j]^v_1,j^2/(v_1,j^2 - δ_1,j), j ∈𝖩,
and do not satisfy (<ref>).
Fix j ∈𝖩.
Since c_1,j > 0, by (<ref>) the formulas in (<ref>) and (<ref>) become
c_1,j=v_1,j ln(ℓ_1^+)/ln(ℓ_j^+),
A_1,j=α_1,j v_1,j ln(ℓ_1^+)/ln(ℓ_j^+),
k_j=0=κ_j,
g_h(ℓ) = v_h ℓ ln(ℓ_h^+/ℓ),
g_h(0)=0.
Hence (<ref>) can be written as
ℓ_j^+ ln(ℓ_j^+) = α_1,j v_1,j ℓ_1^+ ln(ℓ_1^+)
and therefore (<ref>) becomes
[ δ_1,j - v_1,j ℓ_j^+/(α_1,j ℓ_1^+) ] ln( ℓ_j^+/ℓ) = 0, ℓ∈ (0, ℓ_j^+),
namely
ℓ_j^+ = α_1,j δ_1,j/v_1,j ℓ_1^+.
System (<ref>)-(<ref>) admits a solution if and only if either (<ref>) or (<ref>) holds true. In the former case, (<ref>)-(<ref>) has infinitely many solutions and they satisfy (<ref>); in the latter, the unique solution of (<ref>)-(<ref>) is (<ref>)_2,3. We examine separately these cases.
Assume (<ref>). In this case condition (𝒯_l) of Proposition <ref> with ℓ_1^-=0=ℓ_j^- is equivalent to ℓ_1^+=ℓ_j^+ ∈ (0,1), j ∈𝖩, and then there are infinitely many traveling waves. They all satisfy (<ref>) by Remark <ref>.
Assume (<ref>). In this case condition (𝒯_l) of Proposition <ref> with ℓ_1^-=0=ℓ_j^- is equivalent to ℓ_h^+ ∈ (0,1), h ∈𝖧, satisfying (<ref>)-(<ref>), namely to (<ref>)-(<ref>).
In particular, (<ref>)_1, (<ref>) imply that ℓ_j^+ and ℓ_1^+ are distinct, namely they do not satisfy (<ref>).
Moreover, by Remark <ref> the traveling wave does not satisfy (<ref>).
Finally, the reverse implications are direct consequences of the previous discussion about the solutions of (<ref>)-(<ref>), and the proof is complete.
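As in the linear-diffusivity case, the explicit end states (<ref>) are easy to verify numerically against (<ref>) and (<ref>); a minimal sketch (plain Python; the values of α_1,j, δ_1,j, v_1,j are illustrative assumptions with v_1,j < min Δ_j):

```python
import math

# illustrative parameters (assumed): alpha_{1,j}, delta_{1,j}, v_{1,j}
alpha, dlt, v = 1.0, 2.0, 0.8
assert v < min(alpha * dlt, math.sqrt(dlt))       # admissibility condition

base = alpha * dlt / v
l1p = base ** (dlt / (v ** 2 - dlt))              # l_1^+
ljp = base ** (v ** 2 / (v ** 2 - dlt))           # l_j^+
assert 0.0 < l1p < 1.0 and 0.0 < ljp < 1.0

# constraints: l_j^+ = (alpha*delta/v) l_1^+  and
#              l_j^+ ln(l_j^+) = alpha v l_1^+ ln(l_1^+)
assert abs(ljp - base * l1p) < 1e-12
assert abs(ljp * math.log(ljp) - alpha * v * l1p * math.log(l1p)) < 1e-12
print(l1p, ljp)
```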
§ PROOF OF THEOREM <REF>
Let ℓ^±_h ∈ [0, 1] with ℓ^-_h ≠ ℓ^+_h. We introduce the change of variable
r_h ≐ℓ^+_h-ρ_h/ℓ^+_h-ℓ^-_h,
which implies ρ_h = ℓ^+_h - (ℓ^+_h-ℓ^-_h) r_h,
ρ_h,t = -(ℓ^+_h - ℓ^-_h) r_h,t and
ρ_h,x = -(ℓ^+_h - ℓ^-_h) r_h,x.
Consequently, equation (<ref>) can be written as
r_h,t + G_h(r_h)_x = (E_h(r_h) r_h,x)_x,
where
G_h(r_h) ≐ -[f_h(ℓ^+_h - (ℓ^+_h-ℓ^-_h) r_h)-f_h(ℓ^+_h)]/(ℓ^+_h -ℓ^-_h), E_h(r_h)≐ D_h(ℓ^+_h - (ℓ^+_h-ℓ^-_h) r_h).
Furthermore, equation (<ref>) has a wavefront solution ψ_h from 1 to 0 with wave speed θ_h if and only if equation (<ref>) has a wavefront solution ϕ_h from ℓ^-_h to ℓ^+_h with the same speed. Notice that ψ_h satisfies the equation
(E_h(ψ_h)ψ'_h)' +(θ_h - G'_h(ψ_h)) ψ'_h=0
and ϕ_h is obtained by ψ_h by the change of variable (<ref>), i.e.
ϕ_h(ξ) = (ℓ^-_h -ℓ^+_h) ψ_h(ξ) + ℓ^+_h, ξ∈ℝ.
We discuss now the existence of a wavefront solution r_h(t,x)=ψ_h(x-θ_h t+σ_h)=ψ_h(ξ) of (<ref>).
In order to make use of <cit.>, we only need to show that
-G_h(r_h)>-r_h G_h(1), r_h ∈ (0,1).
By the definition of G_h we have
-r_h G_h(1) = -r_h [f_h(ℓ^+_h)-f_h(ℓ^-_h)]/(ℓ^+_h-ℓ^-_h).
Then, inequality (<ref>) is equivalent to
f_h(ℓ^+_h) - [f_h(ℓ^+_h)-f_h(ℓ^-_h)] r_h < f_h(ℓ^+_h - (ℓ^+_h-ℓ^-_h) r_h), for r_h ∈ (0,1),
if and only if ℓ_h^-<ℓ_h^+. By the strict concavity of f_h the last inequality is satisfied and then, by
<cit.>, we deduce the existence of wavefront solutions ψ_h from 1 to 0 for (<ref>). The wave speed, in this case, is θ_h ≐ G_h(1).
Furthermore, the profile ψ_h is unique up to shifts and, if ψ_h(0)≐ν for some 0<ν< 1, then
ψ_h(ξ)=1 for ξ≤ν_h^-,
∫_ψ_h(ξ)^ν E_h(s)/(-G_h(s)+s G_h(1)) ds =ξ for ν_h^-<ξ<ν_h^+ ,
ψ_h(ξ)=0 for ξ≥ν_h^+,
where
ν_h^+ ≐∫_0^ν E_h(s)/(-G_h(s)+ s G_h(1)) ds, ν_h^- ≐ - ∫_ν^1 E_h(s)/(-G_h(s)+ s G_h(1)) ds.
Notice that, by differentiating (<ref>) in the interval (ν_h^-, ν_h^+), we obtain that
E_h(ψ_h(ξ))/[G_h(ψ_h(ξ))-ψ_h(ξ) G_h(1)] ψ_h'(ξ)=1, ξ∈ (ν_h^-, ν_h^+),
which implies ψ_h'<0 in (ν_h^-, ν_h^+) because of (<ref>).
Consider now ϕ_h defined in (<ref>); it satisfies (<ref>) with I_h=(ν_h^-, ν_h^+) and ϕ_h'>0 in I_h. Also condition (<ref>) is true and ϕ_h ∈ C^2(I_h, (ℓ^-_h, ℓ^+_h)) by the regularity of D_h and f_h.
Now it remains to consider the boundary conditions of ϕ_h' at the extrema of I_h in the different cases. We have the following.
(i) Assume ℓ_h^-=0=D_h(0). We show that
ν_h^-= - ∫_ν^1 E_h(s)/(-G_h(s)+ s G_h(1)) ds >-∞.
To prove (<ref>), notice that E_h(1)=D_h(0)=0 and that -G_h(s)+ sG_h(1) → 0 as s→ 1^-. In addition, by means of the strict concavity of f_h we obtain that
lim_s→1^- E'_h(s)/(-G_h'(s)+ G_h(1))=E'_h(1)/(-G'_h(1)+ G_h(1))=-ℓ_h^+ D_h'(0)/(-f_h'(0)+
f_h(ℓ_h^+)/ℓ_h^+) ≥ 0
and then, by applying de l'Hospital Theorem we prove condition (<ref>). Moreover, by condition (<ref>), we get
lim_ξ↓ν_h^-ψ_h'(ξ)=-[f_h'(0)-f_h(ℓ_h^+)/ℓ_h^+]/[ℓ_h^+ D_h'(0)] if D_h'(0)>0,
-∞ if D_h'(0)=0.
By applying (<ref>) we conclude that ϕ_h(ξ)=ℓ_h^- for ξ≤ν_h^- and the estimates in (<ref>) are satisfied.
Furthermore, by the change of variables (<ref>), we obtain that
lim_ξ↓ν_h^-D_h(ϕ_h(ξ))ϕ_h^'(ξ)=lim_ξ↓ν_h^--ℓ_h^+E_h(ψ_h(ξ))ψ_h^'(ξ)
and hence, by (<ref>), we deduce (<ref>).
(ii) Assume 1-ℓ_h^+=0=D_h(1). With a similar reasoning as in (i) we prove that ν_h^+<+∞. In fact, E_h(0)=D_h(1)=0 and -G_h(s)+s G_h(1) → 0 as s → 0^+. Moreover
lim_s→ 0^+ E'_h(s)/(-G_h'(s)+ G_h(1))=E'_h(0)/(-G'_h(0)+ G_h(1))=( 1-ℓ_h^- ) D_h'(1)/(f_h'(1)+
f_h(ℓ_h^-)/(1-ℓ_h^-)) ≥ 0
and again, by applying de l'Hospital Theorem we prove that ν_h^+<+∞. Moreover, by the estimate (<ref>), we have that
lim_ξ↑ν_h^+ψ_h'(ξ)=[f_h(ℓ_h^-)/(1-ℓ_h^-)+f_h'(1)]/[(ℓ_h^- -1) D_h'(1)] if D_h'(1)<0,
-∞ if D_h'(1)=0.
By applying (<ref>) we conclude that ϕ_h(ξ)=ℓ_h^+ for ξ≥ν_h^+ and the estimates in (<ref>) are satisfied; by (<ref>) and (<ref>) we derive (<ref>).
(iii) In all the other cases it is easy to show that I_h=ℝ and again the slope condition (<ref>) can be obtained by the estimate (<ref>).
§ ACKNOWLEDGMENTS
The authors are members of the Gruppo Nazionale per l'Analisi Matematica, la Probabilità e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM). They were supported by the Project Macroscopic models of traffic flows: qualitative analysis and implementation, sponsored by the University of Modena and Reggio Emilia.
The first author was also supported by the project Balance Laws in the Modeling of Physical, Biological and Industrial Processes of GNAMPA.
abbrv
|
http://arxiv.org/abs/1701.07865v1 | 20170126201612 | Absorption Spectrum of a Two-Level System Subjected to a Periodic Pulse Sequence | [
"H. F. Fotso",
"V. V. Dobrovitski"
] | quant-ph | [
"quant-ph",
"physics.optics"
] |
Department of Physics and Astronomy, Iowa State University, Ames, Iowa 50011, USA
Department of Physics, University at Albany (SUNY), Albany, New York 12222, USA
Ames Laboratory US DOE, Ames, Iowa, 50011, USA
QuTech and Kavli Institute of Nanoscience, TU Delft, Lorentzweg 1, 2628 CJ Delft, the Netherlands
We investigate how the quantum control of a two-level system (TLS) coupled to photons can modify and tune the TLS's photon absorption spectrum. Tuning and controlling the emission and the absorption is of much interest, e.g., for the development of efficient interfaces between stationary and flying qubits in modern architectures for quantum computation and quantum communication. We consider the periodic pulse control, where the TLS is subjected to a periodic sequence of near-resonant Rabi driving pulses, each pulse implementing a 180^∘ rotation. For small inter-pulse delays, the absorption spectrum features a pronounced peak of stimulated emission at the pulse frequency, as well as equidistant satellite peaks with smaller spectral weights. As long as the detuning between the carrier frequency of the driving and the TLS transition frequency remains moderate, this spectral shape shows little change. Therefore, the quantum control allows shifting the absorption peak to a desired position, and locks the absorption peak to the carrier frequency of the driving pulses. A detailed description of the spectrum, and of its evolution as a function of time, inter-pulse spacing, and detuning, is presented.
Absorption Spectrum of a Two-Level System Subjected to a Periodic Pulse Sequence.
H. F. Fotso and V. V. Dobrovitski
December 30, 2023
=================================================================================
§ INTRODUCTION
An interface between stationary and flying qubits, that enables a long-range entanglement between different quantum network nodes, is essential for quantum information processing <cit.>. It is of particular importance for the solid state qubits, such as quantum dots or color centers <cit.>, which can be efficiently coupled to each other via photons and thus employed for quantum communications and distributed quantum information processing. However, the slow fluctuations in the environment of the solid-state qubits (e.g. the local strain and/or the local electric fields) constitute a lingering challenge, because they unpredictably shift the optical transition frequency of the qubits <cit.>. This slow drift of the transition frequency (spectral diffusion) makes it difficult to achieve the precise matching between the photons originating from different qubits that is required for efficient entanglement. To mitigate the spectral diffusion problem, various methods have been proposed and successfully used <cit.>, focusing primarily on the tuning of the emission spectrum and on improving the indistinguishability of the photons emitted from different qubits. In particular, it has been recently suggested <cit.> that the application of a periodic sequence of the optical control pulses to a quantum emitter (a two-level system coupled to the electromagnetic radiation bath) can re-direct most of the emission into a peak located at a preset target frequency (determined by the carrier frequency of the pulse driving field), and therefore greatly improve the indistinguishability of the photons coming from different emitters.
At the same time, there is growing interest in, and impressive progress on <cit.>, long-range entanglement schemes based on photon absorption, so theoretical developments that allow control and tuning of the absorption spectra have become timely and interesting. Besides, the study of absorption of a TLS subjected to an external control is of fundamental interest due to the intimate connection between emission and absorption <cit.>. For instance, if the TLS is continuously driven by a strong coherent laser field then the TLS emission spectrum has an interesting three-peak structure, with two additional side peaks located at the frequencies ±Ω_R (where Ω_R is the laser Rabi driving frequency), and the absorption spectrum of the same system also acquires additional structure, displaying regions of gain, where the weak probing field is amplified instead of attenuated <cit.>.
The emission spectrum of the pulse-controlled TLS exhibits similarities with the continuously driven TLS emission: it has a central peak at the carrier frequency of the pulses ω_0, as well as the satellite peaks at ω_0 ±π/τ, ± 2π/τ, ⋯, where τ is the inter-pulse distance. Thus, it is reasonable to expect that absorption can also be controlled with the periodic pulses, and that the resulting absorption spectrum also has non-trivial features. In this work we study the absorption spectrum of a TLS driven by a periodic sequence of optical π-pulses, and examine its dependence on the pulse sequence period and the detuning of the emitter with respect to the pulse frequency (Fig.<ref>). We show that both expectations above are correct, and therefore the pulse control indeed can be a useful tool for controlling and tuning the absorption spectrum of a TLS. The absorption spectrum has a pronounced peak of stimulated emission at the carrier frequency of the pulses, and equidistant satellite peaks with smaller spectral weights. The qualitative features of this absorption spectrum do not change much as long as the detuning between the carrier frequency of the driving pulses and the TLS transition frequency remains moderate. Therefore, we show that the optical control enables creation of pairs of quantum nodes (one node working as an emitter and the other as an absorber) with precisely matching frequencies, and therefore greatly increased entanglement efficiency. This approach can also be used to improve the coupling of the emitters and the absorbers to the optical cavities, since the laser pulses can tune both the emission and the absorption lines of the respective quantum nodes, bringing them into resonance with the respective cavities, and stabilizing both the emission and the absorption peaks at the desired location.
The rest of the paper is organized as follows. In Sec. <ref> we describe the model of the two-level system coupled to the photon bath and controlled by the pulses, the master equations governing the system dynamics, and the two methods, analytical and numerical, used for calculating the absorption spectrum. In Sec. <ref> we present analytical and numerical results demonstrating the control and tunability of the absorption spectrum. In Sec. <ref> we present conclusions.
§ MODEL OF THE TWO-LEVEL SYSTEM COUPLED TO THE ELECTROMAGNETIC RADIATION BATH
We model the quantum emitter as a TLS with the ground state |g⟩ and the excited state |e⟩, separated in energy by E_e - E_g = ħω_1; below we set ħ=1. Initially, at time t=0, the excited state is occupied and the ground state is empty. The TLS is coupled to a photon bath, and is periodically driven by pulses of the laser field with the Rabi frequency Ω. Within the rotating-wave approximation (RWA) <cit.>, in the reference frame rotating at frequency ω_0, the system in question is described by the Hamiltonian
H = ∑_kω_k a^†_ka_k + Δ/2σ_z - i ∑_k g_k( a^†_kσ_- - a_kσ_+ )
+ Ω_x(t)/2(σ_+ + σ_-),
where Δ=ω_1-ω_0 is the detuning of the TLS's transition frequency from the carrier frequency of the pulses; here we introduced the standard pseudo-spin Pauli operators for the TLS, namely σ_z = |e⟩⟨ e| - |g⟩⟨ g|, σ_+ = |e⟩⟨ g| and σ_- = |g⟩⟨ e| = (σ_+)^†. Furthermore, a^†_k and a_k are respectively the creation and the annihilation operator for a photon of mode k with the frequency ω_k, and g_k is the strength of coupling to the TLS. Note that in the rotating frame all frequencies are measured from the pulse carrier frequency ω_0, so that the zero frequency in the rotating frame corresponds to ω_0 in the lab frame; we take it as the target frequency for our TLS.
The time-dependent driving Ω_x(t) in Eq. (<ref>) represents the control pulses; here we consider the simple situation of the square-shaped pulses, with Ω_x(t)=Ω during the pulses and zero otherwise. In fact, below we assume that the pulses are almost instantaneous, i.e. that Ω is much larger than all other relevant energy scales, and that each pulse performs an almost instantaneous 180^∘ rotation of the TLS around the x-axis, interchanging |e⟩ and |g⟩; this assumption will be discussed further below. In the absence of control (Ω_x(t)≡ 0), the system exhibits spontaneous decay, and the corresponding emission rate is Γ = 2π∫ g_k^2 δ(ω_k - Δ) dk; we normalize our energy and time units so that Γ = 2, and the corresponding spontaneous emission line has a simple Lorentzian shape 1/(ω^2 + 1), with the half-width equal to 1. The absorption spectrum is defined here in a standard way, as the energy absorbed by the TLS from a weak probing field of frequency ω. The probing field is assumed to be weak enough that it does not significantly affect the population of each state <cit.>; our goal is to calculate the absorption as a function of frequency and time.
To understand the dynamics of the system, we analyze the time evolution of the density matrix of the emitter, which is written as
ρ(t) = ρ_ee(t) |e⟩⟨ e| + ρ_eg(t) |e⟩⟨ g|
+ ρ_ge(t) |g ⟩⟨ e| + ρ_gg(t) |g⟩⟨ g| ,
with ρ_ge^* = ρ_eg. For the TLS described by the above Hamiltonian (<ref>), within the Markovian approximation, the density matrix operator is governed by the master equations <cit.> in the rotating-wave approximation:
ρ̇_ee = i Ω_x(t)/2(ρ_eg - ρ_ge) - Γρ_ee ,
ρ̇_gg = -i Ω_x(t)/2(ρ_eg - ρ_ge) + Γρ_ee ,
ρ̇_ge = (iΔ -Γ/2) ρ_ge -i Ω_x(t)/2(ρ_ee - ρ_gg) ,
ρ̇_eg = (-iΔ -Γ/2) ρ_eg +i Ω_x(t)/2(ρ_ee - ρ_gg) .
Since the pulse driving is assumed to be strong and short (Ω≫Δ, Γ), the pulses can be considered as instantaneous. Each of them inverts the populations of the excited and ground state and swaps the values of ρ_eg and ρ_ge, i.e.
ρ(nτ + 0) = σ_x ρ(nτ - 0) σ_x
where ρ(nτ - 0) and ρ(nτ + 0) are the density matrices immediately before and after the pulse, correspondingly, with n being an integer and τ the period of the pulse sequence; in other words, the pulses interchange ρ_ee with ρ_gg, and ρ_eg with ρ_ge.
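Since the pulses are treated as instantaneous, between the pulses Ω_x(t) = 0 and Eqs. (<ref>) decouple and can be integrated in closed form: the excited-state population decays at the rate Γ while the coherences rotate at the detuning Δ and decay at the rate Γ/2. The following minimal sketch (Python with NumPy; our own illustration rather than the code used for the figures, with Γ = 2 as in our units and illustrative values of Δ, τ and the number of pulses) propagates the density matrix through the pulse sequence:

```python
import numpy as np

GAMMA, DELTA, TAU, NPULSES = 2.0, 3.0, 0.2, 8    # Gamma = 2 units; other values illustrative

def free_evolve(rho, t):
    """Closed-form solution of the master equations with Omega_x = 0 over a time t.
    rho is the 4-vector (rho_ee, rho_gg, rho_ge, rho_eg)."""
    ee, gg, ge, eg = rho
    return np.array([
        ee * np.exp(-GAMMA * t),                      # excited population decays
        gg + ee * (1.0 - np.exp(-GAMMA * t)),         # ground population grows
        ge * np.exp((+1j * DELTA - GAMMA / 2) * t),   # coherences rotate and decay
        eg * np.exp((-1j * DELTA - GAMMA / 2) * t),
    ])

def pulse(rho):
    """Instantaneous pi-pulse: rho -> sigma_x rho sigma_x."""
    ee, gg, ge, eg = rho
    return np.array([gg, ee, eg, ge])

rho = np.array([1.0, 0.0, 0.0, 0.0], dtype=complex)   # excited state at t = 0
for n in range(1, NPULSES + 1):
    rho = pulse(free_evolve(rho, TAU))                # free flight, then pulse
    print(f"t = {n * TAU:.1f}: rho_ee = {rho[0].real:.4f}")
```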
We want to determine the energy absorbed from a weak probing field by the TLS subjected to the periodic sequence of the π-pulses. Since the effect of the probing field is small, the absorption spectrum can be calculated within the linear response theory, so that at long times T the absorbed energy Q(ω) is given by <cit.>
Q(ω) = 2 A^2
× Re{∫_0^T dt ∫_0^T-t dθ ⟨[ σ_-(t) , σ_+(t+θ) ] ⟩e^-i ωθ},
where [O_1, O_2] is the commutator of the operators O_1 and O_2, and the angled brackets represent the expectation values evaluated in the absence of the probing field. σ_-(t) and σ_+(t+θ) are the time-dependent operators in the Heisenberg representation, and the expectation values are taken with respect to the initial state of the two-level system (in our case, fully occupied excited state and empty ground state). The constant A is independent of the pulse parameters, and does not affect the spectral shape, determining only the absolute scale of the absorption. The expression (<ref>) can be rewritten as
Q(ω) = 2 A^2 Re{𝒫_2(ω) - 𝒫_1(ω) }
= P_2(ω) - P_1(ω)
where
𝒫_2(ω) = ∫_0^T dt ∫_0^T-t dθ ⟨σ_-(t) σ_+(t+θ) ⟩e^-i ωθ
and
𝒫_1(ω) = ∫_0^T dt ∫_0^T-t dθ ⟨σ_+(t+θ) σ_-(t) ⟩e^-i ωθ
The term P_1(ω)=2 A^2 Re{𝒫_1(ω) } can be viewed as the direct emission of the two-level system and P_2(ω)=2 A^2 Re{𝒫_2(ω) } as the direct absorption so that the difference yields the net absorption <cit.>. We evaluate the terms P_1(ω) and P_2(ω) separately, and obtain the total absorption spectrum Q(ω) by taking the difference.
To evaluate the emission spectrum, it is convenient to re-express the two-time correlation function ⟨σ_+(t+θ) σ_-(t) ⟩ as a single-time expectation value <cit.>, according to
⟨σ_+(t+θ) σ_-(t) ⟩ =
=Tr[ ρ(0) U^-1(0,t+θ) σ_+ U(0,t+θ)U^-1(0,t) σ_- U(0,t)]
= Tr[ σ_- ρ(t) U^-1(t,t+θ) σ_+ U(t,t+θ) ]
= Tr[ ρ'(t,t + θ) σ_+ ]
where σ_+ and σ_- are the time-independent Pauli operators in the Schrödinger representation, and U(t_1,t_2) is the evolution operator of the emitter from time t_1 to time t_2, as determined by the master equations (<ref>). The calculations are simplified by introducing the matrix ρ'(t,s); its initial value at s=t is ρ'(t,t)=σ_-ρ(t), and its further evolution from s=t to s=t+θ is governed by the emitter's evolution operator U(t,t+θ), so that ρ'(t,t+θ)=U(t,t+θ)ρ'(t,t) U^-1(t,t+θ). In this way the evaluation of the two-time correlators becomes rather straightforward (although lengthy, see Appendix for details), and the function P_1(ω) can be obtained analytically and/or numerically. In order to calculate the function P_2(ω), we use the same procedure, simplifying the two-time correlation function as
⟨σ_-(t) σ_+(t+θ) ⟩ = Tr[ σ_+ ρ”(t,t + θ) ],
by introducing the matrix ρ”(t) = ρ(t)σ_-, whose time evolution is also governed by U(t,t+θ), i.e. ρ”(t,t+θ)=U(t,t+θ) ρ”(t,t) U^-1(t,t+θ). Note that ρ' and ρ” are not density matrices, and the symmetries of the proper density matrix ρ(t) (such as ρ_gg=1-ρ_ee and/or ρ_ge^* =ρ_eg) are not applicable to ρ' and ρ”.
In the absence of the pulses, the absorption spectrum has a Lorentzian-shaped profile centered at the emitter's frequency that equals to the detuning Δ (Fig.<ref>). In the presence of the pulses, we calculated the absorption spectrum both analytically and numerically by iteratively evolving the density matrix operator between successive pulses on a discrete time grid, using the equations of motion (<ref>), with the initial conditions ρ_ee = 1, ρ_eg=ρ_ge=ρ_gg=0, and then making use of (<ref>) and (<ref>) to calculate the two-time correlation functions.
§.§ Numerical solution
To find the solution numerically, we divide the time axis in the intervals of length τ (equal to the inter-pulse separation), and each interval between the pulses is further discretized into smaller steps of length Δ t. The goal is to find the two-time correlators ⟨σ_-(t) σ_+(t+θ) ⟩ and ⟨σ_-(t) σ_+(t+θ) ⟩ for each value of t and θ on this time grid, and use Fourier transform to find P_1(ω) and P_2(ω), whose difference gives the absorption spectrum Q(ω).
We start at t=0 with the known initial conditions for ρ(t), and use Eqs. <ref> to evolve all elements of the density matrix ρ(t) from time t to t+Δ t, and repeat this integration up to t = τ. Then the π-pulse is applied to the system, transforming the density matrix in accordance with Eq. (<ref>), and the iterative integration is resumed to propagate the density matrix from t=τ to t=2τ, until another pulse is applied at 2τ. The process is repeated until time T = N_pτ is reached, where N_p is the total number of pulses.
In this way we can obtain the elements of ρ'(t,t) and ρ”(t,t) for every t∈[0,N_pτ]. Then, for each time t we propagate the matrices ρ' and ρ” from time t to time t+θ by solving the master equations (<ref>); the values ρ'(t,t) and ρ”(t,t) serve as initial conditions. As a result, we obtain ρ'(t,t+θ) and ρ”(t,t+θ) for all values of θ∈ [0,T-t]. This procedure produces the two-time correlators ⟨σ_+(t+θ) σ_-(t)⟩ and ⟨σ_-(t) σ_+(t+θ)⟩, see Eqs. (<ref>) and (<ref>). Finally, Fourier transform with respect to θ and integration over t give us 𝒫_1(ω) and 𝒫_2(ω), thus determining the absorption spectrum Q(ω).
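A minimal sketch of this procedure is given below (Python with NumPy; this is an illustrative sketch of the procedure just described, not the production code used for the figures; the parameters Γ = 2, Δ = 3.0, τ = 0.2 and N_p = 8 match Fig. <ref>, while the grid step and frequency window are our assumptions). For brevity only P_1(ω) is computed; P_2(ω) is obtained in the same way by starting the θ-evolution from ρ''(t,t) = ρ(t)σ_- instead of ρ'(t,t) = σ_-ρ(t):

```python
import numpy as np

GAMMA, DELTA, TAU, NP = 2.0, 3.0, 0.2, 8     # Gamma = 2 units; Fig. 2 parameters
SPP = 20                                      # grid steps per pulse period (assumption)
DT = TAU / SPP
NT = NP * SPP                                 # total grid points, T = NP * TAU

def free(X, t):
    """Evolve a 2x2 matrix X (index 0 = |e>, 1 = |g>) for a time t under the
    master-equation generator with Omega_x = 0 (closed-form solution)."""
    Y = X.astype(complex)
    Y[1, 1] = X[1, 1] + X[0, 0] * (1 - np.exp(-GAMMA * t))
    Y[0, 0] = X[0, 0] * np.exp(-GAMMA * t)
    Y[1, 0] = X[1, 0] * np.exp((+1j * DELTA - GAMMA / 2) * t)   # 'ge' entry
    Y[0, 1] = X[0, 1] * np.exp((-1j * DELTA - GAMMA / 2) * t)   # 'eg' entry
    return Y

def step(X, k):
    """One grid step starting at absolute grid index k; a pi-pulse
    (X -> sigma_x X sigma_x) fires whenever the step ends on a pulse time."""
    X = free(X, DT)
    return X[::-1, ::-1] if (k + 1) % SPP == 0 else X

sig_m = np.array([[0, 0], [1, 0]], dtype=complex)    # sigma_- = |g><e|
sig_p = sig_m.conj().T                               # sigma_+ = |e><g|
omega = np.linspace(-40.0, 40.0, 801)
P1 = np.zeros_like(omega, dtype=complex)

rho = np.diag([1.0, 0.0]).astype(complex)            # rho(0) = |e><e|
for it in range(NT):
    if it > 0:
        rho = step(rho, it - 1)                      # rho at t = it * DT
    X = sig_m @ rho                                  # rho'(t, t) = sigma_- rho(t)
    corr = np.empty(NT - it, dtype=complex)
    for k in range(NT - it):                         # theta = k * DT in [0, T - t)
        corr[k] = np.trace(sig_p @ X)                # <sigma_+(t+theta) sigma_-(t)>
        X = step(X, it + k)
    thetas = DT * np.arange(NT - it)
    P1 += DT * DT * np.exp(-1j * np.outer(omega, thetas)) @ corr
spec_P1 = 2 * P1.real                                # P_1(omega), up to the constant A^2
```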
§.§ Analytical solution
The analytical solution for the density matrix evolution between the pulses can be obtained directly from Eqs. <ref>, and combined with the analytically calculated transformation of the density matrix by pulses as described by Eq. (<ref>), thus providing a fully analytical solution to the problem. The corresponding calculation is quite lengthy, and is presented in detail in the Appendix. In the limit of long T (i.e. large number of pulses N_p), the resulting expression for 𝒫_1(ω) is
𝒫_1(ω) = 1/[(1 + e^-Γτ)γ_0] [ ( (1-e^-Γτ)/Γ - e^-γ_0 τ(e^γ_2 τ-1)/γ_2
+ (e^γ_2 τ-1)/γ_2 (1-e^-γ_0 τ)/(e^2 γ_1 τ-1))
(N_p + e^-Γτ/(1 + e^-Γτ))
- (e^γ_2 τ-1)/γ_2 (1-e^-γ_0 τ)/(e^2 γ_1 τ-1) (
2 (e^-N_p γ_1 τ - 1)/(e^-2 γ_1 τ - 1) + (e^-Γτ - e^-2 Γτ)e^-N_p γ_1 τ/(e^-2 γ_1 τ - e^-2 Γτ))
]
We have also performed the similar calculation for 𝒫_2(ω), expressing it as 𝒫_2(ω)=𝒫_3(ω)-𝒫_1(ω), and the resulting expression for 𝒫_3(ω) in the long-time limit is
𝒫_3 = N_pτ/γ_0 - (N_p/γ_0^2)(1 - e^-γ_0 τ)
+ [(e^γ_0 τ+e^-γ_0 τ-2)/(γ_0^2(e^2γ_1τ-1))] [N_p - 2/(1-e^-2γ_1τ)]
where γ_0 = i(ω -Δ ) + Γ/2, γ_1 = iω + Γ/2, and γ_2 = i(ω -Δ ) - Γ/2.
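Since 𝒫_2 = 𝒫_3 - 𝒫_1, the net absorption Q(ω) = 2A^2 Re{𝒫_3(ω) - 2𝒫_1(ω)} can be evaluated directly from the two closed forms above. A minimal sketch (Python with NumPy; Γ = 2 in our units, the remaining parameters match Fig. <ref>, and the frequency grid is an assumption):

```python
import numpy as np

GAMMA, DELTA, TAU, NP = 2.0, 3.0, 0.2, 8

def gammas(w):
    g0 = 1j * (w - DELTA) + GAMMA / 2
    g1 = 1j * w + GAMMA / 2
    g2 = 1j * (w - DELTA) - GAMMA / 2
    return g0, g1, g2

def P1(w):
    """Closed form for P_1(omega), transcribed from the expression above."""
    g0, g1, g2 = gammas(w)
    eG = np.exp(-GAMMA * TAU)
    u = (np.exp(g2 * TAU) - 1) / g2 * (1 - np.exp(-g0 * TAU)) / (np.exp(2 * g1 * TAU) - 1)
    head = (1 - eG) / GAMMA - np.exp(-g0 * TAU) * (np.exp(g2 * TAU) - 1) / g2 + u
    tail = (2 * (np.exp(-NP * g1 * TAU) - 1) / (np.exp(-2 * g1 * TAU) - 1)
            + (eG - eG**2) * np.exp(-NP * g1 * TAU) / (np.exp(-2 * g1 * TAU) - eG**2))
    return (head * (NP + eG / (1 + eG)) - u * tail) / ((1 + eG) * g0)

def P3(w):
    """Closed form for P_3(omega)."""
    g0, g1, _ = gammas(w)
    return (NP * TAU / g0 - NP * (1 - np.exp(-g0 * TAU)) / g0**2
            + (np.exp(g0 * TAU) + np.exp(-g0 * TAU) - 2)
              / (g0**2 * (np.exp(2 * g1 * TAU) - 1))
              * (NP - 2 / (1 - np.exp(-2 * g1 * TAU))))

w = np.linspace(-40.0, 40.0, 4001)
Q = 2.0 * np.real(P3(w) - 2.0 * P1(w))       # net absorption, up to the constant A^2
print(w[np.abs(Q).argmax()])                  # the dominant feature is expected near omega = 0
```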
§ RESULTS
Fig. <ref> shows the absorption spectrum obtained using both analytical and numerical approaches for a two-level system with Δ = 3.0 and a pulse sequence with τ = 0.2 after N_p=8 pulses. The panels <ref>(a) and <ref>(b) show P_1(ω) and P_2(ω), respectively, while the panel <ref>(c) shows the absorption spectrum obtained by taking their difference according to Eq. <ref>. In the presence of the pulse control we see the main peak in the absorption spectrum at the carrier frequency of the pulses (ω = 0 in the rotating frame), and the satellite peaks at the multiples of ±π/τ. A good agreement is clearly seen, despite the large-N_p limit used in the analytical result and the numerical Fourier transform of finite-time-step data used in the numerical result. The agreement is further improved by considering the spectrum at longer times, see Appendix. These results provide clear validation of the tools used in these studies. Moreover, note that the absorption spectrum, as given by Eq. (<ref>), is the difference of two terms of comparable magnitude in a broad frequency range. As a result, it is influenced by numerical errors, but the small discrepancy between the analytical and the numerical results shows that this kind of error is not critical. Thus, the numerical solution can be used in future studies of more complex driving protocols, which may not be amenable to an analytical solution.
Fig.<ref> shows the analytic results for the time evolution of the absorption spectrum for Δ = 3.0 and τ = 0.2. The snapshots of the spectrum are presented after N_p=8 (black), N_p=12 (red), N_p=16 (green), and N_p=20 (blue) pulses. The absorption spectra feature a positive part and a negative part. The latter corresponds to the stimulated emission and the former to the "true absorption" <cit.>. The central peak corresponds to stimulated emission at the pulse frequency, and satellite peaks appear at multiples of ±π/τ, with amplitudes that decrease away from the central frequency and are strongly suppressed at large frequencies. The lineshape is established early, and the amplitude of the peaks increases with time.
In Fig.<ref> we present the dependence of the absorption spectrum for Δ = 3.0 on the period τ of the pulse sequence. The absorption spectrum is shown after 8 pulses for τ = 0.2, τ = 0.3, τ = 0.4, and τ = 0.5. The satellite peaks move closer to the central peak and their relative amplitude increases as τ becomes longer. Also note the increase in the positive fraction of spectral weight with increasing τ.
It is also interesting to study the dependence of the absorption spectrum on the detuning Δ. The corresponding results are shown in Fig.<ref> which presents the spectrum under a pulse sequence of period τ = 0.2 for the detuning values of Δ = 3.0(black), Δ = 4.0(red), Δ = 5.0(green), and Δ = 6.0(blue) after 8 pulses. The lineshape remains almost the same for all four values of the detuning parameter when τ is kept constant. In fact, we observe that the lineshape of the absorption spectrum shows little dependence on Δ as long as Δ·τ≲ 1. The figure shows that the fraction of the spectral weight contained in the positive-frequency satellites (with ω>ω_0) slightly increases with Δ, while the spectral weight of the negative-frequency satellites (ω<ω_0) correspondingly decreases.
§ CONCLUSIONS
We have studied the absorption spectrum of a two-level system driven by a periodic sequence of π-pulses. This absorption spectrum is determined by the energy absorbed from a probing field weak enough not to significantly affect the populations of the excited and ground states. We have solved the problem by integrating the master equation both analytically and numerically, and the two methods give results that are in excellent agreement. Our results show that for moderate values of Δ·τ, the absorption spectrum has a lineshape with little dependence on Δ. It has a pronounced peak of stimulated emission at the pulse frequency along with satellite peaks at multiples of ±π/τ away from this frequency. The weights of these satellite peaks are strongly suppressed away from the central peak.
By using the optical control considered in this work (with, possibly, more complex pulse protocols), it is possible to create pairs of quantum nodes, with one node working as an emitter and the other as an absorber, with precisely matching frequencies, and therefore greatly increased entanglement efficiency. In a similar manner, one can use it to improve the coupling of the emitters and absorbers to the optical cavities, using the laser pulses to tune both the emission and the absorption lines of the respective quantum nodes, bringing them into resonance with the respective cavities, and stabilizing both the emission and the absorption peaks at the desired location.
§ ACKNOWLEDGMENTS
We thank D. D. Awschalom and M. E. Flatté for helpful discussions. Work at the Ames Laboratory was supported by the US Department of Energy, Office of Science, Basic Energy Sciences, Division of Materials Sciences and Engineering. The Ames Laboratory is operated for the US Department of Energy by Iowa State University under Contract No. DE-AC02-07CH11358. This work was partially supported by AFOSR MURI program.
99
JeffKimble_qtmInternet H. J. Kimble, Nature 453, 1023 (2008).
Imamoglu_Awschalom_QDOT_PRL_99 A. Imamoǧlu, D. D. Awschalom, G. Burkard, D. P. DiVincenzo, D. Loss, M. Sherwin and A. Small, Phys. Rev. Lett. 83, 4204 (1999).
ChildressRepeater L. Childress, J. M. Taylor, A. S. Sørensen, and M. D. Lukin, Phys. Rev. Lett. 96, 070504 (2006).
HansonAwschalom_QIP_ss_08 R. Hanson and D. D. Awschalom, Nature 453 1043 (2008).
Bernien_Hanson_Nature2013 H. Bernien, B. Hensen, W. Pfaff, G. Koolstra, M. S. Blok, L. Robledo, T. H. Taminiau, M. Markham, D. J. Twitchen,
L. Childress and R. Hanson, Nature 497, 86 (2013).
Pfaff_Hanson_Science2014 W. Pfaff, B. J. Hensen, H. Bernien, S. B. van Dam, M. S. Blok, T. H. Taminiau, M. J. Tiggelman, R. N. Schouten,
M. Markham, D. J. Twitchen, R. Hanson, Science 345 6196, 532 (2014).
Gao_Imamoglu_NatComm2013 W. B. Gao, P. Fallahi, E. Togan, A. Delteil, Y.S. Chin, J. Miguel-Sanchez and A. Imamoglu, Nature Com. 4:2744 (2013) .
Hanson_loopholeFree_Nature2015 B. Hensen, H. Bernien, A. E. Dréau, A. Reiserer, N. Kalb, M. S. Blok, J. Ruitenberg, R. F. L. Vermeulen, R. N. Schouten, C. Abellán, W. Amaya, V. Pruneri, M. W. Mitchell, M. Markham, D. J. Twitchen, D. Elkouss, S. Wehner, T. H. Taminiau and R. Hanson, Nature 526, 682 (2015).
Basset_Awschalom_PRL2011 L. C. Basset, F. J. Heremans, C. G. Yale, B. B. Buckley, and D. D. Awschalom, Phys. Rev. Lett. 107, 266403 (2011).
FaraonEtalNatPhot A. Faraon, P. E. Barclay, C. Santori, K.-M. C. Fu, and R. G. Beausoleil, Nature Photonics 5, 301 (2011)
NV_Review_PhysRep2013 M. W. Doherty, N. B. Manson, P. Delaney, F. Jelezko, J. Wrachtrup, L. C. L. Hollenberg, Physics Reports 528, 1 (2013).
Sipahigil_SiV A. Sipahigil, K. D. Jahnke, L. J. Rogers, T. Teraji, J. Isoya, A. S. Zibrov, F. Jelezko, and M. D. Lukin,
Phys. Rev. Lett. 113, 113602 (2014).
Rogers_SiV L. J. Rogers, K. D. Jahnke, M. H. Metsch, A. Sipahigil, J. M. Binder, T. Teraji, H. Sumiya, J. Isoya, M. D. Lukin, P. Hemmer, and F. Jelezko,
Phys. Rev. Lett. 113, 263602 (2014).
SantoriVuckovicYamamoto C. Santori, D. Fattal, J. Vučković, G. S. Solomon, and Y. Yamamoto, Nature 419, 594 (2002).
CarterGammonQDcavity S. G. Carter, T. M. Sweeney, M. Kim, C. S. Kim, D. Solenov, S. E. Economou, T. L. Reinecke, L. Yang, A. S. Bracker, and D. Gammon, Nat. Photonics 7, 329 (2013).
Fu_Beausoleil_PRL2009 K.-M. C. Fu, C. Santori, P. E. Barclay, L. J. Rogers, N. B. Manson, and R. G. Beausoleil, Phys. Rev. Lett. 103, 256404 (2009).
AmbroseMoerner_spectralDiffusion_Nature1991 W. P. Ambrose and W. E. Moerner, Nature 349, 225 (1991).
Hansom_Atature_APL2014 J. Hansom, C. H. H. Schulte, C. Matthiesen, M. J. Stanley, and M. Atatüre, Appl. Phys. Lett. 105, 172107 (2014).
Acosta_Beausoleil_PRL2012 V. M. Acosta, C. Santori, A. Faraon, Z. Huang, K.-M. C. Fu, A. Stacey, D. A. Simpson, K. Ganesan, S. Tomljenovic-Hanic, A. D. Greentree, S. Prawer, and R. G. Beausoleil, Phys. Rev. Lett. 108, 206401 (2012).
Kuhlmann_Warburton_NatPhys2013 A. V. Kuhlmann, J. Houel, A. Ludwig, L. Greuter, D. Reuter, A. D. Wieck, M. Poggio, and R. J. Warburton, Nat. Phys. 9, 570 (2013).
Crooker_Bayer_PRL2010 S. A. Crooker, J. Brandt, C. Sandfort, A. Greilich, D. R. Yakovlev, D. Reuter, A. D. Wieck, and M. Bayer, Phys. Rev. Lett. 104, 036601 (2010).
Matthiesen_Atature_SciRep2014 C. Matthiesen, M. J. Stanley, M. Hugues, E. Clarke, and M. Atatüre, Sci. Rep. 4, 4911 (2014).
FotsoEtal_PRL2016 H. F. Fotso, A. E. Feiguin, D. D. Awschalom and V. V. Dobrovitski, Phys. Rev. Lett. 116, 033603 (2016).
LenhardEtal_PRA2015 A. Lenhard, M. Bock, C. Becher, S. Kucera, J. Brito, P. Eich, P. Müller, and J. Eschner, Phys. Rev. A 92, 063827 (2015).
TrautmannAlber_PRA2015 N. Trautmann and G. Alber, Phys. Rev. A 93, 053807 (2016).
YangWrachtrupEtal_NatPhot2016 S. Yang, Y. Wang, D. D. Bhaktavatsala Rao, T. H. Tran, A. S. Momenzadeh, M. Markham, D. J. Twitchen, P. Wang, W. Yang, R. Stöhr, P. Neumann, H. Kosaka, and J. Wrachtrup, Nat. Photonics 10, 507 (2016).
einstein_phys_Z_1917 A. Einstein, Phys. Z. 18, 121 (1917).
Mollow_PhysRevA_5_1972 B. R. Mollow, Phys. Rev. A 3, 2217 (1972).
Wu_et_al_Mollow_PRL1977 F. Y. Wu, S. Ezekiel, M. Ducloy and B. R. Mollow, Phys. Rev. Lett. 38, 1077 (1977).
RF_Mollow_PhysRev1969 B. R. Mollow, Phys. Rev. 188, 1969 (1969).
Autler_Townes_PhysRev1955 S. H. Autler and C. H. Townes, Phys. Rev. 100, 703 (1955).
KnightMilonni_PhysRep1980 P. L. Knight and P. W. Milonni, Phys. Rep. 66, 21 (1980).
RF_Heitler_Book1960 W. Heitler, The Quantum Theory of Radiation (Oxford University Press, London, 1960, 3rd ed.).
Cohen_Tannoudji_Book1992 C. Cohen-Tannoudji, J. Dupont-Roc, and G. Grynberg, Atom-Photon Interactions, Basic Processes and Applications (John Wiley & Sons, Inc., New York, 1992).
Mollow_PhysRevA_3_1972 B. R. Mollow, Phys. Rev. A 5, 1522 (1972).
BloomMargenau_PhysRev1953 S. Bloom and H. Margenau, Phys. Rev. 90, 791 (1953).
Scully_Zubairy_book1997 M. O. Scully and M. S. Zubairy, Quantum Optics (Cambridge University Press, New York, 1997).
Loudon_book1983 R. Loudon, The Quantum Theory of Light (Clarendon Press, Oxford, 1983).
§ APPENDIX
Here we present the details of the analytical calculation of the absorption spectrum for a two-level system subjected to a periodic sequence of control pulses. As shown in Refs. Mollow_PhysRevA_5_1972, BloomMargenau_PhysRev1953, the absorption spectrum can be determined from the two-time correlation functions of the TLS:
Q(ω) = 2A^2 Re{∫_0^T dt ∫_0^T-t dθ ⟨[ σ_-(t) , σ_+(t+θ) ] ⟩e^-i ωθ},
where [ , ] is the commutator of the two enclosed operators, and the angled brackets represent the expectation values evaluated in the absence of the probing field. A is a proportionality constant. This can be rewritten as:
Q(ω) = 2 A^2 Re{𝒫_2(ω) - 𝒫_1(ω) }
= P_2(ω) - P_1(ω),
with
𝒫_2(ω) = ∫_0^T dt ∫_0^T-t dθ ⟨σ_-(t) σ_+(t+θ) ⟩e^-i ωθ
and
𝒫_1(ω) = ∫_0^T dt ∫_0^T-t dθ ⟨σ_+(t+θ) σ_-(t) ⟩e^-i ωθ.
The terms P_1(ω)=2 A^2 Re{𝒫_1(ω) } and P_2(ω)=2 A^2 Re{𝒫_2(ω) } can be evaluated separately and the absorption spectrum obtained by taking the difference. To find 𝒫_2(ω), we express the correlation function as
⟨σ_-(t) σ_+(t+θ) ⟩ = Tr[ ρ(0) U^-1(0, t) σ_- U(0, t) U^-1(0, t+θ) σ_+ U(0, t+θ)]
= Tr[ σ_+ U(t, t+θ) U(0, t) ρ(0) U^-1(0, t) σ_- U^-1(t, t+θ)]
= Tr[ σ_+ U(t, t+θ) ρ(t)σ_- U^-1(t, t+θ)]
= Tr[ σ_+ U(t, t+θ) ρ”(t,t) U^-1(t, t+θ)]
= Tr[ σ_+ ρ”(t,t+θ)]
where we used U(0, t+θ) = U(t, t+θ) U(0, t) and the cyclic property of the trace; σ_+ and σ_- are the raising and lowering operators of the TLS, and U(t_1,t_2) is the operator of the emitter's evolution from t_1 to t_2, as determined by the master equations (<ref>). The subsequent calculations are facilitated by introducing the matrix ρ”(t,s); its initial value at s=t is defined as ρ”(t,t)=ρ(t)σ_-, and its further evolution from s=t to s=t+θ is governed by the emitter's evolution operator U(t,t+θ), so that ρ”(t,t+θ)=U(t,t+θ)ρ”(t,t) U^-1(t,t+θ).
It is informative to write ρ”(t,s) explicitly as
ρ”(t,s) = [ ρ”_ee(t,s) ρ”_eg(t,s); ρ”_ge(t,s) ρ”_gg(t,s) ]
so that
σ_+ ρ”(t,t+ θ) = [ ρ”_ge(t,t+ θ) ρ”_gg(t,t+ θ); 0 0 ],
and the corresponding two-time correlation function is obtained directly as
⟨σ_-(t)σ_+(t+θ)⟩ = Tr[ σ_+ ρ”(t,t+θ)]
= ρ”_ge(t,t+ θ).
The initial condition for ρ”, corresponding to s=t, has a form
ρ”(t,t) = ρ(t)σ_- = [ ρ_eg(t) 0; ρ_gg(t) 0 ],
being determined by the elements of the “true” density matrix ρ_eg(t)=⟨ e|ρ(t)|g⟩ and ρ_gg(t)=⟨ g|ρ(t)|g⟩, see Eq. <ref>.
Similarly, for ρ'(t,s) the initial conditions at s=t are
ρ'_ee(t,t) = ρ'_eg(t,t)=0,
ρ'_gg(t,t) = ρ_eg(t), ρ'_ge(t,t)=ρ_ee(t),
and the corresponding two-time correlator is
⟨σ_+(t+θ)σ_-(t)⟩ = Tr[ ρ'(t,t+θ) σ_+ ]
= ρ'_ge(t,t+θ).
Therefore, our task is reduced to determining ρ”_ge(t,t+θ) and ρ'_ge(t,t+θ).
The master equations characterizing the time evolution of the TLS density matrix are given by Eqs. <ref> and Eq. <ref>; the time development of the matrices ρ”(t,s) and ρ'(t,s) also obeys these equations of motion as s increases from t to t+θ. Specifically, when s corresponds to the time interval between the pulses, we have
d/ds ρ”_ee(t,s) = -Γ ρ”_ee(t,s)
d/ds ρ”_gg(t,s) = Γ ρ”_ee(t,s)
d/ds ρ”_ge(t,s) = (iΔ - Γ/2) ρ”_ge(t,s)
d/ds ρ”_eg(t,s) = (-iΔ - Γ/2) ρ”_eg(t,s)
for any value of the parameter t; the same equations govern the dynamics of ρ'. The effect of the pulses on ρ” and ρ' is also easily derived from Eq. <ref>: when s coincides with the time of a pulse application, i.e. when s=nτ for some integer n, the matrix transforms as
ρ”(t,nτ + 0) = σ_x ρ”(t,nτ - 0) σ_x
where ρ”(t,nτ-0) and ρ”(t,nτ+0) are the matrices immediately before and after the pulse, correspondingly; in other words, each pulse interchanges ρ”_ee with ρ”_gg, and ρ”_eg with ρ”_ge; the transformation of ρ' is the same.
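To make this bookkeeping concrete, the short Python sketch below (purely illustrative, not code from this work) propagates a 2×2 matrix through the inter-pulse equations of motion and the σ_x pulse transformation. The values Δ = 3.0 and τ = 0.2 match the example quoted at the end of this appendix; Γ = 1 simply fixes the unit of time and is our assumption.

```python
import numpy as np

Gamma, Delta, tau = 1.0, 3.0, 0.2   # Gamma = 1 sets the time unit (assumption)

def free_evolution(m, dt):
    """Evolve m = [[ee, eg], [ge, gg]] for a time dt between pulses,
    following the inter-pulse equations of motion above."""
    ee, eg, ge, gg = m[0, 0], m[0, 1], m[1, 0], m[1, 1]
    return np.array([
        [ee * np.exp(-Gamma * dt),
         eg * np.exp((-1j * Delta - Gamma / 2) * dt)],
        [ge * np.exp((1j * Delta - Gamma / 2) * dt),
         gg + ee * (1 - np.exp(-Gamma * dt))],      # population fed by decay
    ])

def pulse(m):
    """Instantaneous pi pulse: sigma_x m sigma_x interchanges
    ee <-> gg and eg <-> ge."""
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    return sx @ m @ sx

# Example: propagate rho''(t, s) from its initial value rho(t) sigma_-
# (here rho_eg = 0 and rho_gg = 0.7, say) through two inter-pulse intervals.
rho_dd = np.array([[0.0, 0.0], [0.7, 0.0]], dtype=complex)
rho_dd = pulse(free_evolution(rho_dd, tau))
rho_dd = pulse(free_evolution(rho_dd, tau))
```

The same two functions propagate the "true" density matrix ρ, the matrix ρ”, and the matrix ρ', since all three obey identical equations of motion.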
Let us start with establishing the initial condition for ρ”(t,s) at s=t, which is determined by ρ_gg(t) and ρ_eg(t), see Eq. <ref>. First, we note that ρ_eg(t)≡ 0. Indeed, the initial conditions at t=0 for the density matrix ρ are
ρ_ee(0) = 1, ρ_gg(0) = ρ_ge(0) = ρ_eg(0) = 0.
As the master equations (<ref>) show, both quantities ρ_eg and ρ_ge remain zero before the first pulse (when Ω_x(t)≡ 0). The effect of the pulse is to interchange these two values, i.e. they both remain zero after the pulse. The same considerations can be applied for the second, third, etc. pulse, showing that ρ_eg(t)=ρ_ge(t)=0 for all t.
Thus, to determine ρ”(t,t) we only need to find ρ_gg(t). We assume that the time instant t is between the M-th and the (M+1)-th pulse, i.e. t = M τ + (τ - τ_1) for some τ_1∈ [0,τ], as shown in Fig. <ref>. Immediately before the first pulse, at the time moment τ-0, we have:
ρ_ee(τ) = e^-Γτ
ρ_gg(τ) = 1- e^-Γτ;
then at time 2τ -0 we have:
ρ_ee(2τ) = (1 - e^-Γτ) e^-Γτ
ρ_gg(2τ) = 1- e^-Γτ +e^-2 Γτ,
at time 3τ -0:
ρ_ee(3τ) = e^-Γτ- e^-2Γτ +e^-3 Γτ
ρ_gg(3τ) = 1- e^-Γτ +e^-2 Γτ-e^-3 Γτ,
⋮
so that eventually, right before the M-th pulse, at time Mτ-0
ρ_ee(Mτ-0) = ∑_k=1^M (-1)^k-1e^- kΓτ = -∑_k=1^M (-1)^ke^- kΓτ ,
and right after the M-th pulse, which interchanges ρ_ee and ρ_gg,
ρ_ee(Mτ+0) = 1- ρ_ee(Mτ-0) = 1 + ∑_k=1^M (-1)^ke^- kΓτ.
Thus, at the time instant t=Mτ+(τ-τ_1), we have
ρ_ee(t) = ( 1+ ∑_k=1^M (-1)^ke^- kΓτ) e^-Γ (τ -τ_1)
= ( ∑_k=0^M (-1)^ke^- kΓτ) e^-Γ (τ -τ_1)
= 1-(-1)^M+1e^- (M+1) Γτ/1 + e^ - Γτe^-Γ (τ -τ_1),
and, since ρ_gg(t)=1-ρ_ee(t), we obtain
ρ_gg(t) = 1-1-(-1)^M+1e^- (M+1) Γτ/1 + e^ - Γτe^-Γ (τ -τ_1).
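This closed form is easy to verify numerically. The sketch below (illustrative only; Γ = 1 and τ = 0.2 are assumed values) compares the geometric-sum expression for ρ_ee(t) with a direct step-by-step propagation of the two populations.

```python
import numpy as np

Gamma, tau = 1.0, 0.2               # assumed illustrative values

def rho_ee_closed(t):
    """Closed-form rho_ee(t) from the geometric sum above,
    with t = M*tau + (tau - tau_1)."""
    M = int(t // tau)               # number of pulses applied before t
    dt = t - M * tau                # time elapsed since the M-th pulse
    pref = (1 - (-1) ** (M + 1) * np.exp(-(M + 1) * Gamma * tau)) \
           / (1 + np.exp(-Gamma * tau))
    return pref * np.exp(-Gamma * dt)

# Direct propagation: free decay over tau, then a pi pulse that
# interchanges the two populations.
ee, gg = 1.0, 0.0
for M in range(1, 8):
    ee, gg = gg + ee * (1 - np.exp(-Gamma * tau)), ee * np.exp(-Gamma * tau)
    # epsilon keeps the floor just past the pulse instant M*tau
    assert np.isclose(ee, rho_ee_closed(M * tau + 1e-9))
```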
Having established the explicit initial value of ρ”(t,t), now we can proceed evaluating the value of ρ”_ge(t,t+θ). Between the pulses both ρ^''_ge and ρ^''_eg evolve according to Eqs. <ref> and <ref>.
Thus, if t and t+θ belong to the same inter-pulse interval (i.e. when Mτ <t+θ < (M+1)τ), we have
ρ^''_ge(t,t+θ) = e^(iΔ - Γ/2)θρ_gg(t) and ρ^''_eg = 0.
As θ increases, it eventually reaches τ_1, and then the instant t+θ coincides with the instant when a pulse is applied: t+θ=(M+1)τ+0. At this point the time instants t and t+θ become separated by one pulse, and the value of ρ”_eg is interchanged with ρ”_ge, i.e. when θ=τ_1+0 we already have
ρ^''_ge(t,t+θ) = 0
ρ^''_eg(t,t+θ) = e^(iΔ - Γ/2)τ_1ρ_gg(t) .
At this point, the accumulation rate of the phase in ρ”_ge and ρ”_eg changes sign: note that the factors on the right-hand sides of Eqs. <ref> and <ref> have opposite imaginary parts, iΔ and -iΔ, respectively. Thus, right before the next pulse, when t+θ=(M+2)τ-0 (i.e. when θ=τ+τ_1-0), we will have
ρ^''_ge(t,t+θ) = 0
ρ^''_eg(t,t+θ) = e^(iΔ - Γ/2)τ_1e^(-iΔ - Γ/2)τρ_gg(t)
and right after the pulse, when t+θ=(M+2)τ + 0 (i.e. when θ=τ+τ_1 + 0), the values will be interchanged again:
ρ^''_ge(t,t+θ) = e^(iΔ - Γ/2)τ_1e^(-iΔ - Γ/2)τρ_gg(t)
ρ^''_eg(t,t+θ) = 0.
Proceeding further in this way, right before the next pulse, at θ=τ_1+2τ - 0, we get
ρ^''_ge(t,t+θ) = e^(iΔ - Γ/2)τ_1e^- Γτρ_gg(t)
ρ^''_eg(t,t+θ) = 0.
Note that the phase of ρ”_ge still equals Δτ_1, because after each pulse the phase accumulation rate changes sign. Further, at θ=τ_1+3τ - 0,
ρ^''_ge(t,t+θ) = 0
ρ^''_eg(t,t+θ) = e^(iΔ - Γ/2)τ_1 -iΔτ - 3Γ/2τρ_gg(t)
Thus we obtain that for θ=τ_1+(m-1)τ - 0 with m even, as shown in Fig. <ref>,
ρ^''_ge(t,t+θ) = 0
ρ^''_eg(t,t+θ) = e^(iΔ - Γ/2)τ_1 -iΔτ - (m-1)Γ/2τρ_gg(t),
and for θ=τ_1+(m-1)τ + τ_2 with m even and τ_2 < τ,
ρ^''_ge(t,t+θ) = e^(iΔ - Γ/2)τ_1 -iΔτ - (m-1)Γ/2τe^iΔτ_2 -Γ/2τ_2 ρ_gg(t)
= e^-Γθ /2e^iΔ ( τ_1 + τ_2 - τ)ρ_gg(t)
ρ^''_eg(t,t+θ) = 0.
Altogether we can write
ρ^''_ge(t,t+θ) = f(t,θ) ρ_gg(t)
with ρ_gg(t) given above by Eq. <ref>, and
* for t and t+θ in the same pulse interval,
f(t, θ) = e^(i Δ - Γ/2)θ
* for t and t+θ separated by an odd number of pulses,
f(t, θ) = 0
* for t and t+θ separated by an even number m of pulses, i.e. when θ=τ_1+(m-1)τ+τ_2 with even m and τ_2<τ (Fig. <ref>),
f(t, θ) = e^-Γθ /2e^iΔ ( τ_1 + τ_2 - τ)=e^-Γθ /2e^iΔ (θ - mτ).
Note that, due to the pulses, the phase of the function f(t,θ) does not grow linearly with θ, being confined to the interval [-τΔ,τΔ] at all values of t and θ. This is the reason why, for small inter-pulse delay τ≪Δ^-1, both emission and absorption are concentrated in the vicinity of ω=0 instead of ω=Δ.
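The three cases above translate directly into a piecewise function. The sketch below (illustrative; Γ = 1 is an assumed time unit) implements f(t,θ), with m obtained by counting the pulse times nτ falling between t and t+θ; one can check explicitly that arg f = Δ(θ - mτ) stays within [-Δτ, Δτ].

```python
import numpy as np

Gamma, Delta, tau = 1.0, 3.0, 0.2   # Gamma = 1 is an assumed time unit

def f(t, theta):
    """Piecewise kernel f(t, theta) defined by the three cases above;
    m is the number of pulses between t and t + theta."""
    m = int((t + theta) // tau) - int(t // tau)
    if m == 0:                      # same inter-pulse interval
        return np.exp((1j * Delta - Gamma / 2) * theta)
    if m % 2 == 1:                  # odd number of separating pulses
        return 0.0
    # even m: accumulated phase is Delta*(theta - m*tau), bounded by Delta*tau
    return np.exp(-Gamma * theta / 2) * np.exp(1j * Delta * (theta - m * tau))
```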
With this result, we can now rewrite the direct absorption integral 𝒫_2(ω)
in the form
𝒫_2(ω) = ∫_0^T dt ρ_gg(t) ∫_0^T-t dθ f(t, θ) e^-i ωθ
First let us evaluate the inner integral, that we will denote as I_θ, using the explicit form of f(t,θ) above:
I_θ = ∫_0^τ_1 dθ e^-i ωθe^(iΔ-Γ/2)θ + ∫_τ_1+τ^τ_1 + 2τ dθ e^-i ωθe^-i 2 Δτ + ( i Δ-Γ/2) θ
+ ∫_τ_1+3τ^τ_1 + 4τ dθ e^-i ωθe^-i 4 Δτ + ( i Δ-Γ/2) θ + ⋯ +
∫_τ_1+(m-1)τ even m^τ_1 + mτ dθ e^-i ωθe^-imΔτ + (i Δ-Γ/2) θ + ⋯
= e^[i(Δ -ω) - Γ/2]τ_1 -1/ i(Δ -ω) - Γ/2 + ∑_m=2 even m^m_max∫_τ_1 + (m-1)τ^τ_1 + mτ dθ e^-i ωθe^[ -i mΔτ + (i Δ-Γ/2) θ]
where the summation is over even values of m, and m_max is the maximum value of m; since it has to be even, its specific value depends on whether M is odd or even, see below for details.
Defining γ_0 = i(ω -Δ ) + Γ/2, we can write
I_θ = 1 - e^-γ_0 τ_1/γ_0 + ∑_m=2 even m^m_maxe^ -imΔτe^-γ_0(τ_1+(m-1)τ)-e^-γ_0(τ_1+mτ)/γ_0
= 1-e^-γ_0τ_1/γ_0 + e^-γ_0τ_1/γ_0(e^γ_0 τ-1) 1-e^-m_maxγ_1τ/e^2γ_1 τ-1 ,
where we have introduced γ_1 = Γ/2 + iω. Note that this result correctly reproduces the situation of m_max<2, i.e. when m_max=0; this happens when t belongs to the last inter-pulse interval of the sequence, and θ varies only from zero to τ_1. Then the value of I_θ is given by the first integral in Eq. <ref>, while the remaining sum over m is zero. Thus, we do not need to worry about this special case in the calculations below.
Now we need to evaluate the outer integral:
𝒫_2(ω) = ∫_0^T dt ρ_gg(t) I_θ
with the quantity ρ_gg calculated earlier,
ρ_gg(t) = 1 - ρ_0(M) e^-Γ (τ -τ_1)
= 1 - 1- (-1)^M+1e^-(M+1)Γτ/ 1 + e^-Γτe^-Γ (τ -τ_1),
where we introduced the shorthand notation ρ_0(M) for the awkward fraction appearing on the second line. In this way, we represent 𝒫_2 as
𝒫_2 = ∫_0^T dt I_θ - ∫_0^τ dt ρ_0(0) e^-Γ(τ-τ_1) I_θ
- ∫_τ^2τ dt ρ_0(1) e^-Γ(τ-τ_1) I_θ
- ⋯ - ∫_(N_p-1)τ^N_pτ dt ρ_0(N_p-1) e^-Γ(τ-τ_1) I_θ
= ∫_0^T dt I_θ
- ∑_M=0^N_p-1 ρ_0(M) ∫_0^τ dt_1 e^-Γ t_1(1/γ_0 + e^-γ_0 τ_1 I_θ^(1))
where we have defined t_1 =t-Mτ= τ - τ_1 and
I_θ = 1/γ_0 + e^-γ_0 τ_1 I_θ^(1)
I_θ^(1) = -1/γ_0 + e^γ_0 τ -1/γ_0 1-e^-m_maxγ_1τ/e^2γ_1τ-1
= -1/γ_0 + I_θ^(2)
where we introduced the shorthand notation I_θ^(2) for another awkward fraction, the second summand on the second line above.
Now we have
𝒫_2 = - ∑_M=0^N_p-1ρ_0(M) ∫_0^τ dt_1 e^-Γ t_1(1/γ_0 + e^-γ_0 (τ-t_1) I_θ^(1))
+ ∫_0^T dt I_θ
= - ∑_M=0^N_p-1ρ_0(M) [1 - e^-Γτ/γ_0 Γ + I_θ^(1)e^-γ_0 τe^γ_2 τ -1/γ_2]
+ 𝒫_3
with 𝒫_3 = ∫_0^T dt I_θ and γ_2 = γ_0 - Γ = i(ω - Δ) - Γ/2.
Below we will show that the first sum gives exactly the contribution from the stimulated emission 𝒫_1(ω). It is convenient to calculate the simpler term 𝒫_3 first. We can rewrite 𝒫_3 as:
𝒫_3 = ∫_0^T dt/γ_0 + ∫_0^τ dt e^-γ_0 τ_1 I_θ^(1)
+ ⋯ + ∫_(N_p-1)τ^N_pτ dt e^-γ_0 τ_1 I_θ^(1)
= N_pτ/γ_0 + e^-γ_0 τ∑_M=0^N_p-1 I_θ^(1)∫_0^τ dt_1 e^γ_0 t_1
= N_pτ/γ_0 + ∑_M=0^N_p-1 I_θ^(1)1- e^-γ_0 τ/γ_0
With the explicit form of I_θ^(1) given above, we have
𝒫_3
= N_pτ/γ_0 - ∑_M=0^N_p-11 - e^-γ_0 τ/γ_0^2 + ∑_M=0^N_p-11 - e^-γ_0 τ/γ_0^2 I_θ^(2)
= N_pτ/γ_0 - N_p/γ_0^2(1 - e^-γ_0 τ) + 𝒫_4
where
𝒫_4 = 1-e^-γ_0τ/γ_0 e^γ_0τ-1/γ_0∑_M=0^N_p-11-e^-γ_1τ m_max/e^2γ_1 τ-1
= e^γ_0 τ+e^-γ_0 τ-2/γ_0^2 (e^2γ_1τ-1)[ N_p -∑_M=0^N_p-1e^-γ_1τ m_max]
In order to calculate the last sum in the equation above, we need to determine m_max. To do this let us consider the case of N_p=2K, i.e. when K full cycles of the sequence have been applied to the TLS. Let us recall that we represent t=Mτ+(τ-τ_1), i.e. M is the number of pulses between zero and t. The number of pulses separating t and t+θ is m, and the maximum value of θ is θ_max=T-t, which limits the maximum value of m; however, f(t,θ) is zero if m is odd, so that m_max should be even.
Therefore, starting from larger values of t, we obtain:
* if t∈[T-τ,T] then θ_max=(T-t) ∈[0,τ], so that if M=2K-1 then m_max=0,
* if t∈[T-2τ,T-τ] then θ_max=(T-t) ∈[τ,2τ], so that if M=2K-2 then m_max=0 because m_max should be even,
* if t∈[T-3τ,T-2τ] then θ_max=(T-t) ∈[2τ,3τ], so that if M=2K-3 then m_max=2,
* if t∈[T-4τ,T-3τ] then θ_max=(T-t) ∈[3τ,4τ], so that if M=2K-4 then m_max=2 (should be even),
* if t∈[T-5τ,T-4τ] then θ_max=(T-t) ∈[4τ,5τ], so that if M=2K-5 then m_max=4,
* if t∈[T-6τ,T-5τ] then θ_max=(T-t) ∈[5τ,6τ], so that if M=2K-6 then m_max=4 (should be even),
* ⋯
* if t∈[τ,2τ] then θ_max=(T-t) ∈[T-2τ,T-τ], so that if M=1 then m_max=2K-2,
* if t∈[0,τ] then θ_max=(T-t) ∈[T-τ,T], so that if M=0 then m_max=2K-2 (should be even).
To summarize, if we parametrize M=2n for even M and M=2n+1 for odd M, where n varies from 0 to K-1, then m_max=2(K-n-1) for both M=2n and M=2n+1.
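This parametrization, together with the geometric sum that it produces (evaluated in the next step), can be checked in a few lines; the sketch below is illustrative, with τ and γ_1 set to arbitrary test values.

```python
import numpy as np

tau, K = 0.2, 10                    # illustrative test values
gamma1 = 0.5 + 0.3j                 # stands for Gamma/2 + i*omega

def m_max(M, K):
    """Largest even number of pulses separating t and t+theta when t lies
    after the M-th pulse and N_p = 2K pulses are applied in total;
    m_max = 2(K - n - 1) for both M = 2n and M = 2n + 1."""
    n = M // 2
    return 2 * (K - n - 1)

lhs = sum(np.exp(-gamma1 * tau * m_max(M, K)) for M in range(2 * K))
rhs = 2 * (1 - np.exp(-gamma1 * tau * 2 * K)) / (1 - np.exp(-2 * gamma1 * tau))
assert np.isclose(lhs, rhs)
```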
Thus, the last sum in Eq. <ref> is calculated as
∑_M=0^N_p-1e^-γ_1τ m_max = 2∑_n=0^K-1e^-2γ_1τ (K-n-1)
= 21-e^-γ_1τ N_p/1-e^-2γ_1τ,
where the factor 2 appears because m_max is the same for both M=2n and M=2n+1, so that the sums over odd M and even M are combined. Putting all terms together, we obtain
𝒫_3 = N_pτ/γ_0 - N_p/γ_0^2(1 - e^-γ_0 τ)
+ e^γ_0 τ+e^-γ_0 τ-2/γ_0^2(e^2γ_1τ-1) [N_p - 21-e^-γ_1τ N_p/1-e^-2γ_1τ]
Now, the calculation of the emission term 𝒫_1 can be simplified if we notice that ρ'(t,s) obeys the same equations of motion as ρ”(t,s), and is transformed by the pulses in exactly the same way. Therefore, the quantity ρ'_ge(t,s) (that determines 𝒫_1) evolves in exactly the same way as ρ”_ge(t,s), and the difference between them is only in the initial condition: at s=t we have ρ'_ge(t,t)=ρ_ee(t), while ρ”_ge(t,t)=ρ_gg(t)=1-ρ_ee(t). Thus, the reasoning that was used in deriving
Eqs. <ref>–<ref> can be directly applied to ρ'_ge(t,s) if ρ_gg(t) is substituted by ρ_ee(t), due to the linearity of the master equations. As a result, we immediately see that ρ'_ge(t,t+θ) has the form
ρ'_ge(t,t+θ) = f(t,θ) ρ_ee(t)
with the same function f(t,θ). Thus, the integral I_θ can be used without modifications in the calculation of 𝒫_1, and, since ρ_gg(t)=1-ρ_ee(t), we immediately obtain
𝒫_1(ω) = ∫_0^T ρ_ee(t) I_θ dt = ∫_0^T [1-ρ_gg(t)] I_θ dt = 𝒫_3 - 𝒫_2.
Comparing this expression with Eq. <ref> above, we obtain an explicit expression
𝒫_1 = ∑_M=0^N_p-1ρ_0(M) ∫_0^τ dt_1 e^-Γ t_1(1/γ_0 + e^-γ_0 (τ-t_1) I_θ^(1))
= ∑_M=0^N_p-1ρ_0(M) [1 - e^-Γτ/γ_0 Γ + I_θ^(1)e^-γ_0 τe^γ_2 τ -1/γ_2]
Now let us evaluate the sums appearing in this expression. First, we need the sum
∑_M=0^N_p-1 ρ_0(M) = 1/1 + e^-Γτ∑_M=0^N_p-1[ 1+e^-Γτ (-e^-Γτ)^M ]
= N_p/1+e^-Γτ + e^-Γτ/1+e^-Γτ1 -(-e^-Γτ)^N_p/1 + e^-Γτ
and for sufficiently large N_p, when the exponentially small terms can be omitted, this yields
∑_M=0^N_p-1ρ_0(M) ≈N_p/1+ e^-Γτ + e^-Γτ/(1 + e^-Γτ)^2,
The second required sum is
∑_M=0^N_p-1ρ_0(M) e^-γ_1 τ m_max,
and in order to evaluate it we use the same parametrization as above, M=2n for even M and M=2n+1 for odd M, with n=0,…,K-1. We pair the neighboring terms, i.e.
∑_M=0^N_p-1 ρ_0 (M) e^-γ_1 τ m_max
= [∑_ even M +∑_ odd M] ρ_0(M) e^-γ_1 τ m_max
= ∑_n=0^K-1[ ρ_0(2n) + ρ_0(2n+1)] e^-2γ_1 τ (K-n-1).
Since
ρ_0(2n) + ρ_0(2n+1) = 2 + e^-Γτ (2n+1) - e^-Γτ (2n+2) / 1 + e^-Γτ,
we obtain
∑_M=0^N_p-1 ρ_0 (M) e^-γ_1 τ m_max
= ∑_n=0^K-12 + e^-Γτ (2n+1) - e^-Γτ (2n+2) / 1 + e^-Γτe^-2γ_1 (K-n-1) τ
= e^-2γ_1 (K-1)τ/ 1 + e^-Γτ[2e^2Kγ_1τ-1/e^2γ_1τ-1.
+ . e^-Γτ(1- e^-Γτ) e^2K(γ_1-Γ)τ-1/e^2(γ_1-Γ)τ-1]
Substituting these results into Eq. <ref> for 𝒫_1, we obtain in the limit of large N_p
𝒫_1(ω) = 1/(1 + e^-Γτ)γ_0[ ( 1-e^-Γτ/Γ - e^-γ_0 τe^γ_2 τ-1/γ_2
+ e^γ_2 τ-1/γ_2 1-e^-γ_0 τ/e^2 γ_1 τ-1)
(N_p + e^-Γτ/1 + e^-Γτ)
- e^γ_2 τ-1/γ_2 1-e^-γ_0 τ/e^2 γ_1 τ-1 (
2 e^-N_p γ_1 τ - 1 /e^-2 γ_1 τ - 1 + (e^-Γτ - e^ - 2 Γτ)e^-N_p γ_1 τ/e^-2 γ_1 τ - e^-2 Γτ)
]
and the net absorption spectrum is obtained as
Q(ω) = 2 A^2 Re{𝒫_2(ω) - 𝒫_1(ω) }
= 2 A^2 Re{𝒫_3(ω) -2𝒫_1(ω) }
with the explicit analytical expressions for 𝒫_1 and 𝒫_3 given above.
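For completeness, the sketch below transcribes these closed forms for 𝒫_3 and 𝒫_1 into Python and assembles Q(ω) = 2A² Re{𝒫_3 - 2𝒫_1}. It is a direct, unofficial transcription for illustration; the parameter values Δ = 3.0, τ = 0.2 and N_p = 20 match the comparison discussed next, while Γ = 1 and A = 1 are our assumptions.

```python
import numpy as np

def Q_analytic(omega, Gamma=1.0, Delta=3.0, tau=0.2, Np=20, A=1.0):
    """Absorption spectrum from the closed forms for P3 and P1 above
    (large-Np form of P1)."""
    g0 = 1j * (omega - Delta) + Gamma / 2
    g1 = Gamma / 2 + 1j * omega
    g2 = g0 - Gamma
    P3 = (Np * tau / g0
          - Np * (1 - np.exp(-g0 * tau)) / g0**2
          + (np.exp(g0 * tau) + np.exp(-g0 * tau) - 2)
            / (g0**2 * (np.exp(2 * g1 * tau) - 1))
            * (Np - 2 * (1 - np.exp(-g1 * tau * Np))
                    / (1 - np.exp(-2 * g1 * tau))))
    # recurring factor in P1
    B = ((np.exp(g2 * tau) - 1) / g2
         * (1 - np.exp(-g0 * tau)) / (np.exp(2 * g1 * tau) - 1))
    P1 = (1 / ((1 + np.exp(-Gamma * tau)) * g0)) * (
        ((1 - np.exp(-Gamma * tau)) / Gamma
         - np.exp(-g0 * tau) * (np.exp(g2 * tau) - 1) / g2
         + B) * (Np + np.exp(-Gamma * tau) / (1 + np.exp(-Gamma * tau)))
        - B * (2 * (np.exp(-Np * g1 * tau) - 1) / (np.exp(-2 * g1 * tau) - 1)
               + (np.exp(-Gamma * tau) - np.exp(-2 * Gamma * tau))
                 * np.exp(-Np * g1 * tau)
                 / (np.exp(-2 * g1 * tau) - np.exp(-2 * Gamma * tau))))
    return 2 * A**2 * np.real(P3 - 2 * P1)

omegas = np.linspace(-20, 20, 2001)
spectrum = np.array([Q_analytic(w) for w in omegas])
```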
In Fig. <ref> we show a comparison of the numerical and the analytical results described above for the absorption spectrum of a two-level system with detuning Δ = 3.0, driven by a periodic pulse sequence of period τ = 0.2, after 20 pulses. The comparison reveals very good agreement between the two solutions, and the agreement further improves as the number of pulses increases.
|
http://arxiv.org/abs/1701.07959v1 | 20170127071601 | Nonperturbative SU(3) thermodynamics and the phase transition | N. O. Agasian, M. S. Lukashov, Yu. A. Simonov | hep-ph | hep-ph, hep-lat | |
http://arxiv.org/abs/1701.07439v2 | 20170125190003 | Ionizing spectra of stars that lose their envelope through interaction with a binary companion: role of metallicity | Y. Gotberg, S. E. de Mink, J. H. Groh | astro-ph.SR | astro-ph.SR |
Ionizing spectra of stars that lose their envelope through interaction with a binary companion
Götberg, De Mink & Groh
Anton Pannekoek Institute for Astronomy, University of Amsterdam, 1090 GE Amsterdam, The Netherlands
Y.L.L.Gotberg@uva.nl, S.E.deMink@uva.nl
School of Physics, Trinity College Dublin, The University of Dublin, Dublin 2, Ireland
jose.groh@tcd.ie
Understanding ionizing fluxes of stellar populations is crucial for various astrophysical problems including the epoch of reionization. Short-lived massive stars are generally considered as the main stellar sources. We examine the potential role of less massive stars that lose their envelope through interaction with a binary companion. Here, we focus on the role of metallicity (Z). For this purpose we used the evolutionary code MESA and created tailored atmosphere models with the radiative transfer code CMFGEN.
We show that typical progenitors, with initial masses of 12 M_⊙, produce hot and compact stars (∼4 M_⊙, 60–80 kK, ∼1 R_⊙). These stripped stars copiously produce ionizing photons, emitting 60–85% and 30–60% of their energy as HI and HeI ionizing radiation, for Z=0.0001–0.02, respectively. Their output is comparable to what massive stars emit during their Wolf-Rayet phase, if we account for their longer lifetimes and the favorable slope of the initial mass function. Their relative importance for reionization may be further favored since they emit their photons with a time delay (∼20 Myr after birth in our fiducial model). This allows time for the dispersal of the birth clouds, allowing the ionizing photons to escape into the intergalactic medium.
At low Z, we find that Roche stripping fails to fully remove the H-rich envelope, because of the reduced opacity in the subsurface layers. This is in sharp contrast with the assumption of complete stripping that is made in rapid population synthesis simulations, which are widely used to simulate the binary progenitors of supernovae and gravitational waves. Finally, we discuss the urgency to increase the observed sample of stripped stars to test these models and we discuss how our predictions can help to design efficient observational campaigns.
Ionizing spectra of stars that lose their envelope through interaction with a binary companion: Role of metallicity
Y. Götberg1 S. E. de Mink1 J. H. Groh2
Received ......; accepted ......
===================================================================================================================
§ INTRODUCTION
Massive stars have played many important roles since the earliest epochs of star formation. These stars shape, heat, and stir their surroundings and play a key role in driving the evolution of their host galaxies <cit.>. Over cosmic time, subsequent generations of massive stars chemically enriched the Universe with elements synthesized by nuclear fusion <cit.>, slowly increasing the average metallicity (i.e. the mass fraction of elements heavier than H and He) of subsequent stellar populations.
During their short lives, massive stars copiously produce (far) ultraviolet (UV) photons. Of particular interest are their photons with wavelengths shorter than the Lyman limit (λ < 912 Å, i.e. with energies exceeding the ionization potential of hydrogen, hν > 13.6 eV). In the local Universe, massive stars are observed to ionize their immediate surroundings, giving rise to luminous HII regions <cit.>. At larger distances, they dominate the rest-frame UV part of the integrated spectra of star-forming galaxies and give rise to various strong emission lines <cit.>. Going back further in distance and time, the early generations of massive stars were the prime sources of ionizing photons emitted by the first galaxies <cit.>. These galaxies are held responsible for the large-scale phase transition, known as the Epoch of Reionization, during which the intergalactic neutral hydrogen gas became ionized <cit.>.
Massive stars are frequently found in binary or multiple systems <cit.>. Recent studies have shown that, in the majority of cases, massive stars have a companion that is so close that severe interaction between the two stars at some point during their lives is inevitable <cit.>. Similar conclusions have been reached by various groups using different datasets <cit.>.
Binary evolution can lead to many complex evolutionary paths involving one or more phases of mass exchange between the two stars <cit.> and possibly a merger of the two stars. This has drastic consequences for the observable properties of both stars, their remaining lifetime and final fate. This raises the question about the implications for the integrated spectra of stellar populations, and their ionizing fluxes in particular.
The most widely used spectral population synthesis codes, Starburst99 <cit.>, GALAXEV <cit.> and FSPS <cit.>, do not account for the possible effects of interacting binary stars and their products. In these simulations the ionizing flux comes primarily from the most massive and luminous stars, which are short-lived. At birth these stars already emit a significant portion of their light at wavelengths shorter than the Lyman limit. The most massive and luminous single stars lose their hydrogen-rich envelope through stellar winds and eruptions. After a few million years they become Wolf-Rayet (WR) stars. During the WR phase, the stars have higher emission rates of ionizing photons, but as the stars are still hot during the long-lasting main sequence, the total contribution from main sequence O- and early B-type stars dominates the emitted ionizing photons from single stellar populations.
Pioneering efforts to account for the effects of binaries on the spectra of stellar populations have been undertaken by three major groups using the Brussels PNS code <cit.>, the Yunnan simulations <cit.> and the extensive BPASS grids <cit.> that have been made available to the community. These studies have shown that binary interaction can significantly impact the derived ages and masses of star clusters <cit.>, may help to explain the spectral features observed in Wolf-Rayet galaxies <cit.>, affect star formation rate indicators <cit.> and can significantly boost the ionizing flux <cit.>. There has also been interest in X-ray binaries in the context of ionizing radiation <cit.>.
One of the challenges for simulations that account for massive binaries is the scarcity of adequate atmospheric models for binary products. Spectral synthesis codes rely on precomputed grids of (1) stellar evolutionary models, which provide information about key properties such as evolution of the surface temperature, gravity, and composition as a function of time and (2) corresponding atmosphere models, which provide information of the emerging stellar spectrum. For single stars, extensive grids of atmospheric models have been produced to cover the various evolutionary phases <cit.>. In contrast, the coverage of atmosphere models for stellar objects that are exclusively produced through binary interaction is very sparse or even absent.
The studies that account for interacting binaries so far have adopted a variety of efficient approximations to treat atmospheres of binary products. These include the usage of atmosphere models originally intended for the evolutionary phases of single stars, possibly after rescaling them. Other approaches are extrapolation beyond the existing boundaries of the available spectral model grids or, as the most straightforward approach, simple estimates based on blackbody approximations.
In this work we focus on stars that lose most of their hydrogen-rich envelope through interaction with a companion. This produces very hot, compact helium stars <cit.>. These stars are exclusively produced in binary systems. They can be considered as low-mass counterparts to hydrogen-deficient WR stars, i.e. helium-burning stars with current masses ≳ 8 M_⊙, showing strong broad emission lines indicative of their strong stellar winds. The WR stars result from the evolution of very massive single stars that lose their envelope through strong winds or eruptive mass loss episodes. In contrast, the progenitors of the less massive stripped stars we are interested in do not have winds that are strong enough to remove the envelope. They can only lose their envelope as a result of interaction with a binary companion. They are related to the low-mass subdwarf O and B (sdO/B) stars, which have typical assumed current masses of about 0.5–1 M_⊙ <cit.>. The stripped stars in focus in this study close the sequence in mass between the massive hydrogen-deficient WR stars and the low-mass subdwarfs.
Stripped stars represent a common and long-lived evolutionary phase for interacting binaries. Practically every interacting binary produces a hot stripped star directly after the first mass transfer phase ceases (if the two stars avoid immediate coalescence). These stars are normally powered by central helium burning, an evolutionary phase that accounts for roughly 10% of the lifetimes of stars. Their high temperatures and long lifetimes, along with the expectation that they are common, make stripped stars of interest as potential stellar sources of ionizing radiation. Assessing their potential as ionizing sources requires reliable models of their atmospheres. At present, no suitable grids of atmosphere models are available that cover the high effective temperatures and high effective gravities that are typical for these stars.
This paper is intended as the first in a series that will systematically explore the structural and spectral properties of stripped stars to evaluate their impact on the spectra of stellar populations. In this first paper we present a case study of the effect of metallicity on a very typical, initially 12 M_⊙, star that loses its envelope through interaction with a companion after completing its main sequence evolution and before completing central helium burning. This type of mass transfer, case B mass transfer, is the most common case of binary interaction. The specific choice of initial mass is empirically motivated by the observed stripped star in the binary system HD 45166. This allows us to build upon the work by <cit.> who extensively analyzed and modeled this system. We focus on the long-lived helium burning phase, which is most relevant for the integrated spectrum of stellar populations.
An additional objective of this study is to improve our understanding of the expected observable characteristics of stripped stars, which can be used to observationally identify these stars. Stripped stars are expected to be common, but very few have been observationally identified. The paucity of directly observed stripped stars – the apparent paradox of missing stripped stars – can be understood as the result of various selection effects that hinder their detection <cit.>. For example, stripped stars are expected to reside near their main sequence companion, which typically outshines them in the optical and near UV. Furthermore, these stars typically have masses that are too low and orbits that are too wide to cause observable Doppler variation that can be identified easily in the spectra of their companion. We discuss spectral features that can be used to overcome these biases. These spectral features can be used to guide targeted observing campaigns, which will increase the number of observed counterparts in the local Universe. Observed stripped stars will provide crucial test cases for the spectral model predictions and simultaneously provide constraints on the outcome of the first phase of binary mass transfer.
This paper is organized as follows. We describe the evolutionary and spectral models in sec:modelling. In sec:M12 we use our reference model to illustrate the physical processes that determine the structural and spectral properties of stripped stars. We continue by discussing the impact of metallicity on the evolution, the stripping process and the structure of the stripped star in sec:stripped_Z. In sec:CMFGEN_Z we discuss the effect of metallicity on the emerging spectra and the spectral features. In sec:discussion we discuss the broader implications. We finish with a summary and conclusion in sec:conclusions.
§ MODELING
§.§ Stellar evolution with MESA
We modeled the evolution of interacting binary stars using the 1D stellar evolution code MESA <cit.>. We focused on stars that lose their hydrogen-rich envelope through Roche-lobe overflow after they complete their main sequence and swell to become red giants <cit.>. This is the most common case of mass transfer. About a third of all massive stars are estimated to undergo this type of evolution <cit.>.
We started the evolution at the onset of core hydrogen burning using a nuclear network that contains reactions relevant for hydrogen burning through the CNO cycle and non-explosive helium burning <cit.>. We followed the evolution through all long-lived phases until central carbon depletion (defined as X_C, c < 0.02). These evolutionary stages together account for 99.9% of the stellar lifetime.
We accounted for convective mixing using the mixing length approach <cit.> adopting a mixing length parameter α _MLT = 2. We allowed for overshooting above every convective burning region up to 0.335 pressure scale heights above the convective region, following the calibration by <cit.>. The semiconvection parameter is set to α _sem = 1 <cit.>. We also accounted for thermohaline mixing <cit.> and rotational mixing <cit.>, but neither of these play a significant role in the models of the mass losing primary star presented here.
We used the <cit.> wind mass loss algorithm for effective temperatures estimated by MESA, T_eff, mesa > 10^4 K and surface hydrogen mass fraction, X_H, s> 0.4. This prescription scales with metallicity as Ṁ∝ Z^0.85. In the models presented here, these conditions are met throughout the main sequence evolution and the post-main sequence evolution before the onset of mass transfer.
After stripping, most of our stars have a surface hydrogen mass fraction X_H, s< 0.4 and then we switched to the empirically determined WR mass loss algorithm of <cit.>. This prescription scales with luminosity, surface helium abundance, and metallicity, where the relation with metallicity is Ṁ∝ Z^0.5 <cit.>[<cit.> have found a stronger metallicity dependence from observed WR stars, which would imply weaker winds in lower metallicity environments.]. Lacking an appropriate wind mass loss algorithm for the winds of stripped stars, we extrapolated this algorithm and applied it to the lower luminosities of stripped stars compared to WR stars. The observationally derived wind mass loss rate of the observed stripped star in HD 45166 <cit.> is in good agreement with the rate estimated by extrapolating the <cit.> WR algorithm. This algorithm yields rates almost two orders of magnitude higher than values one would obtain by extrapolating the recent results by <cit.> for hot, subluminous stars. We found that their mass loss algorithm would give a mass loss rate that is too low to be consistent with the observed spectral characteristics of HD 45166. We therefore chose to adopt <cit.>. Throughout this work we considered variations in the adopted wind mass loss rate for stripped stars to understand the effects of the wind mass loss rate.
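To illustrate what these power laws imply across our metallicity grid, the short sketch below (not code from MESA or from this work) evaluates the wind reduction factors relative to Z = 0.02 for both prescriptions.

```python
# Metallicity scaling of the two adopted wind prescriptions,
# normalized to Z_ref = 0.02 (illustrative only).
Z_ref = 0.02
for Z in [1e-4, 2e-4, 2.1e-3, 1.66e-2, 2e-2]:
    f_vink = (Z / Z_ref) ** 0.85    # Vink et al.: Mdot proportional to Z^0.85
    f_nl = (Z / Z_ref) ** 0.5       # Nugis & Lamers: Mdot proportional to Z^0.5
    print(f"Z = {Z:.4f}: Vink factor {f_vink:.3f}, N&L factor {f_nl:.3f}")
```

At the lowest metallicity considered, the main-sequence winds are suppressed by roughly two orders of magnitude, while the WR-type winds are suppressed by about one order of magnitude.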
We initiated our models at the zero-age main sequence without rotation. The donor star never reaches high rotation rates throughout its evolution <cit.>. Although we do not discuss the evolution of the accreting star here, we still follow its evolution. During mass transfer the accreting star spins up. We assumed the accretion mechanism described in <cit.>: when the secondary star rapidly reaches critical rotation during mass transfer, it stops accreting significant amounts of material. Non-conservative mass transfer follows, during which we assumed that the mass not accreted by the secondary is lost from the system with the specific angular momentum of the orbit of the secondary. The efficiency of mass transfer is uncertain <cit.>, but the properties of the primary star are not significantly affected by the details of the treatment of the secondary star and the accretion efficiency.
We used the parameters derived for the observed binary system HD 45166 as a motivation for the chosen starting point of our parameter space exploration. HD 45166 consists of a 4.2 ± 0.7 M_⊙ quasi-WR (qWR) star that is orbiting a companion of spectral type B7V. The spectral type of the companion is consistent with a main sequence star of about 4.8 M_⊙ <cit.>. The qWR star is believed to be the stripped remnant of an initially more massive star. We found that the parameters of this system can be approximately reproduced adopting M_1,init = 12 M_⊙ and M_2,init = 5 M_⊙. The initial mass of the secondary is not as well constrained as the primary. It depends on the efficiency of mass transfer. However, here we are primarily interested in the effect on the primary star. The precise choice of the companion mass does not have large effects on the outcome of the primary star after stripping.
In this work we investigated the effect of metallicity. In a subsequent paper we will discuss the dependence on further system parameters. We computed a grid of binary systems with these initial stellar masses and adopted an initial orbital period of 20 days. We varied the metallicity from Z = 10^-4 up to Z = 0.02. This corresponds to values from the extremely low metallicity dwarf galaxy IZw18 <cit.> up to super-solar values. The relevant metallicity for HD 45166 is Z = 0.0166 <cit.>. In terms of [Fe/H] these models span between -2.18 and 0.16, while in terms of A(O) they span between 6.6 and 8.9.
For the initial helium mass fraction we assumed Y = 0.24 + 2 Z following <cit.>, which approximately spans the range between an approximately primordial chemistry and a near solar abundance. For hydrogen we assumed X = 1-Z -Y. The abundances of the heavier elements were assumed to scale to solar and meteoric abundance ratios as determined by <cit.>. It is likely that the relative metal fraction is not solar for all metallicities. However, for a fixed iron mass fraction we expected the effects of CNO variations to be of small impact on the ionizing output, but could affect the strength of carbon, oxygen and nitrogen lines. The initial helium mass fraction could impact the compactness and luminosity of the stars and could be of higher relevance. tab:hestar_prop provides an overview of the main parameters of the evolutionary models presented in this work.
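The adopted composition scalings are simple enough to tabulate directly; the sketch below (illustrative) evaluates X and Y across the grid and shows, in particular, that the most metal-poor models start with a hydrogen mass fraction about 8% higher than the most metal-rich ones, a point we return to when discussing the interior opacities.

```python
# Initial composition across the model grid, following the scalings above:
# Y = 0.24 + 2Z and X = 1 - Y - Z (mass fractions).
for Z in [1e-4, 2e-4, 2.1e-3, 1.66e-2, 2e-2]:
    Y = 0.24 + 2 * Z
    X = 1 - Y - Z
    print(f"Z = {Z:.4f}:  X = {X:.4f},  Y = {Y:.4f}")
```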
§.§ Stellar atmospheres with CMFGEN
We used the radiative transfer code CMFGEN <cit.> to model the spectra of stripped stars, tailoring them to the MESA stellar evolution models. CMFGEN accounts for gas out of local thermodynamic equilibrium (non-LTE) and is a necessary tool for modeling optically thick stellar atmospheres, which are not treated by, for example, Kurucz models <cit.>. The presence of an extended atmosphere and an optically thick wind changes the optical depth scale, affecting the effective temperature and effective surface gravity (both are defined at the radius where the Rosseland optical depth is τ = 2/3). Each CMFGEN model provides the spectral energy distribution and a normalized spectrum, which we computed between 50 and 50 000 Å. We included the elements H, He, C, N, O, Si and Fe. Additional elements are not expected to show dominating spectral features as their abundances are low. Our choice of the atomic model is a compromise between computational speed and accuracy.
The observed stripped star in HD 45166 is measured to have a wind speed of v_∞ = 350 ± 40 km s^-1 when using a wind velocity beta law with β = 1 <cit.>. This wind speed is surprisingly low for a star with T_eff≃ 50 000 K. There is evidence of a latitudinal-dependent wind in HD 45166 <cit.>. The spectral lines are consistent with the presence of a fast polar wind (v_∞≃ 1 300 km s^-1) and a slow equatorial wind (v_∞≃ 300 km s^-1), which could reconcile observations and theoretical expectations of hot-star winds. The derived clumping volume filling factor is 0.5 <cit.>. We used these measurements of the wind parameters of the qWR star in HD 45166 for our spectral models of stripped stars.
Adopting higher terminal wind speeds would decrease the derived optical depth of the wind and also yield broader and weaker lines if the mass loss rate is kept constant. For a constant mass loss rate, lower clumping volume filling factors would increase the equivalent width of recombination lines such as HeII λ4686. We discuss the effects of the wind terminal velocity and clumping factors on the spectrum of stripped stars in app:vinf.
§.§ Connecting atmospheres to structure models
We employed the method described in <cit.> to connect the MESA stellar structure with CMFGEN. Here we briefly describe the approach and refer to the paper above for further details. For alternative approaches, see <cit.>, <cit.>, and <cit.>.
We took a structure model from the evolutionary sequence computed with MESA, when the stripped star is halfway through core helium burning, which we defined as the moment when the central helium mass fraction drops to X_He, c = 0.5. As input for the CMFGEN models at Rosseland optical depth τ = 20, we used the effective temperature (T_eff, mesa), surface gravity (log _10 g_mesa), stellar radius (R_mesa) and surface abundances extracted from our MESA model following the approach extensively tested by <cit.>. From the CMFGEN models we extracted the true effective temperature, T_eff (τ = 2/3), and the surface temperature of the star, T_⋆, defined as the effective temperature computed at the radius where the optical depth τ=100. Our standard wind mass loss models have the wind mass loss rate from the <cit.> WR algorithm, with the exception of the Z ⩽ 0.0002 models which are slightly cooler and have the wind mass loss rate of <cit.>. Given the uncertainties in the mass loss rate, we also computed CMFGEN models with three times higher (labeled “enhanced”) and three times lower (“reduced”) mass loss rates than our standard values.
fig:stitch_M12 shows the connection between the MESA model and the corresponding three CMFGEN models for our reference model (Z = 0.0166). The connection is not perfect, but more than sufficient for a reliable prediction of the emerging spectra. The insets in fig:stitch_M12 highlight the difference Δ r in stellar radius calculated in MESA and CMFGEN. This discontinuity translates into an uncertainty in the effective temperature of only ∼100 K. This is less than 1% of the surface temperature, which ranges between 50 000 and 80 000 K (see tab:hestar_prop). Differences of this order are so small that their influence on the predicted emerging spectra and ionizing flux is negligible.
§ EVOLUTION AND SPECTRAL FEATURES OF STRIPPED STARS
The consequences of mass loss due to interaction with a companion in a binary system have been the topic of several classic papers <cit.>. We discuss the results obtained with modern higher resolution simulations using updated input physics, but several of the physical arguments concerning the evolution can already be found in these early papers. More recent works on this topic include <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>.
We used a 12 M_⊙ model as an example of the primary in a typical massive binary system. The complete set of adopted parameters is listed and motivated in sec:modelling_MESA. We assumed an initial separation such that the system evolves through case B mass transfer. This concerns systems in which the primary star fills its Roche lobe shortly after leaving the main sequence, typically before the onset of helium burning. Case B mass transfer is the most common type of mass transfer. Moreover, it produces long-lived post-interaction products, which have not yet completed their helium burning phase. This makes the reference model discussed here useful as a starting point for our exploration. We adopted a metallicity that is comparable to that found in nearby young star clusters and OB associations in the Milky Way. More precisely, we adopted the metallicity measured for HD 45166, which is Z = 0.0166 <cit.>. The effects of metallicity are discussed in the next section.
§.§ Binary evolution and the formation of a stripped star
In fig:HRD_M12 we show the evolutionary track computed with MESA of a 12 M_⊙ single star in gray, together with the evolutionary track for the primary star in our binary reference model. Initially, the two stars evolve similarly as they move along the main sequence (labeled A-B). The central helium abundances X_He, c (plotted in color) steadily rise as both stars fuse hydrogen into helium in their convective cores.
After about 18.7 Myr both stars have exhausted their central fuel. Nuclear burning of hydrogen continues in a shell around the helium core and both stars expand. The single star expands freely to become a red supergiant, reaching a final radius in excess of 1000 R_⊙. In contrast, the primary star in our binary model fills its Roche lobe at point C and starts to rapidly lose mass. This can be seen in the left panel of fig:Kipp_M12, where we plot how the mass changes as a function of time. We also show the mass coordinates of the regions in which nuclear burning takes place (blue shading) and the interior regions that are mixed by convection and overshooting (green diagonal lines and purple cross hatching, respectively).
The primary star has a radiative envelope at the onset of mass transfer. As the outer, highest-entropy layers are removed, the envelope initially responds on a dynamical timescale by shrinking. On a longer, thermal timescale, the star is still trying to expand and cross the Hertzsprung gap, leading to continued stable mass transfer. The Roche lobe limits the size of the star.
During the mass transfer phase the luminosity drops by more than an order of magnitude. This is because the deeper layers of the star need to expand as they adjust to the quickly decreasing total mass. The energy needed for this causes a brief, large drop in the luminosity. This continues until the maximum mass loss rate is reached (D). Once the mass loss rate starts to decrease again, the readjustments in the thermal structure require less energy, so the luminosity can increase again.
At point E the donor star has lost more than 8 M_⊙. The surface layers that are now exposed are helium-rich layers that were once part of the convective core. The star detaches and mass transfer stops. The now stripped core of the primary contracts and heats up until thermal equilibrium is restored at point F. The central regions have now reached temperatures that are high enough to ignite helium burning, which burns in the convective core to carbon and oxygen.
§.§ Characteristics and evolution of the stripped star
Our main phase of interest is the helium burning phase of the stripped star (F). This is the longest lasting evolutionary phase after the main sequence, accounting for almost 10 % of the total lifetime in our model. Given the high fraction of young stars in close binaries that undergo similar evolution, one would expect on average a few percent of all stars to reside in this phase at any given time.
Despite having lost over two-thirds of its mass, from 12 M_⊙ initially down to 3.6 M_⊙ after stripping, the bolometric luminosity, log_10 (L/L_⊙) ∼ 4.2, is still comparable to the luminosity it had during its main sequence evolution. For an overview of the properties, see tab:hestar_prop. The star is very compact, ∼ 0.7 R_⊙, has a very high surface gravity, log_10 g_eff∼ 5.3, and is very hot.
With an effective temperature of ∼80 000 K, the radiation emitted by the stripped star is expected to peak in the far/extreme UV. In fig:HRD_M12 we indicate, with blue vertical contours, the fraction of photons emitted at wavelengths λ ⩽ 912 Å (Lyman continuum photons) by a blackbody with the corresponding effective temperature. A blackbody with a temperature of 80 000 K, similar to the effective temperature of our stripped star, emits about 85 % of its radiation in Lyman continuum photons. For comparison, the equivalent single star model (shown in gray in fig:HRD_M12) has an effective temperature of only ∼3 000 K during its helium burning phase. A single star of this mass does not emit a significant number of ionizing photons throughout its entire life.
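The quoted Lyman-continuum fraction is straightforward to verify by integrating the Planck function; the sketch below (an illustrative standalone check, not part of the modeling pipeline) yields ≈0.83 for T = 80 000 K, consistent with the ∼85% quoted above, and a vanishing fraction for a 3 000 K red supergiant.

```python
import numpy as np
from scipy.integrate import quad

h, c, k = 6.626e-27, 2.998e10, 1.381e-16        # cgs constants

def lyman_fraction(T):
    """Fraction of blackbody energy emitted shortward of 912 Angstrom."""
    x_lim = h * c / (912e-8 * k * T)            # hc/(lambda k T) at the edge
    above, _ = quad(lambda x: x**3 / np.expm1(x), x_lim, np.inf)
    total = np.pi**4 / 15.0                     # full Planck integral
    return above / total

print(lyman_fraction(8e4))   # ~0.83 for the stripped star
print(lyman_fraction(3e3))   # ~0 for the red supergiant
```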
Initially, we find that a thin shell containing a mixture of hydrogen and helium remains at the surface. This shell contains less than 0.1 M_⊙ of hydrogen. This is sufficient to sustain weak hydrogen burning. The burning shell quickly moves outward in mass coordinate as it converts H into He, as can be seen in fig:Kipp_M12. At the same time, mass is lost from the surface by the stellar wind at a rate of Ṁ ∼ 2.5 × 10^-7 M_⊙ yr^-1 in this model. This is high enough to remove the hydrogen layer by the time core helium depletion is reached. See also panel e of fig:evol_strip, which shows the evolution of the surface abundance of hydrogen and helium as a function of time.
When central helium burning ceases, the entire star contracts and heats up to point G, where the helium shell ignites. The C/O core continues to contract with the helium burning shell on top. The envelope responds by expanding until the star reaches balance at point H. During the expansion, carbon and later also oxygen ignite in the core. We stopped the computations at central carbon depletion. The following phases of the evolution are so fast that the luminosity and effective temperature do not have time to change significantly any more. With a helium layer of about 1.3 M_⊙ and almost pure helium on the surface, the star is expected to end its life as a H-deficient (type Ib) supernova.
§.§ Resulting spectrum of a stripped star
We computed a tailor-made atmosphere model with CMFGEN for the stripped star computed with MESA at the moment when it is halfway through helium burning, using the procedure described in sec:tailor. fig:SED_M12 shows the resulting spectrum (black line). The wavelengths corresponding to photon energies required to ionize neutral hydrogen (HI), singly ionize helium (HeI), and fully ionize helium (HeII) are indicated (vertical dotted white lines). For comparison, we overplot a blackbody spectrum for a temperature of ∼80 000 K (gray dashed line), corresponding to T_ eff, mesa, which is the computed effective temperature resulting from the MESA model.
The blackbody approximation strongly overestimates the extreme UV flux (shortward of 228Å) and underestimates the flux at peak, which occurs between 228-512Å. It also underestimates the emission at longer wavelengths, these are most clearly visible in fig:SED_M12 for wavelengths longer than about 4000Å, where reprocessed ionizing photons are emitted at longer wavelengths in the form of recombination lines. The excess in infrared and radio wavelengths compared to the blackbody is due to free-free emission. The sharp drop at λ = 228 Å is due to the recombination of He^2+ to He^+ in the outer parts of the wind, allowing for He^+ to absorb a significant fraction of high-energy photons. We do not observe a similar drop at the hydrogen ionizing limit (912 Å), since the hydrogen is completely ionized throughout the wind.
In fig:ionisation_structure_M12, we show the ionization structure of hydrogen and helium as a function of the distance from the stellar surface. For helium we find a helium recombination front at a distance of about 80 stellar radii (panel e). For our standard model, we find that hydrogen is fully ionized out to a distance of about 100 R_⋆, which is the computational domain considered (panel b). At larger distances the density drops further and therefore we do not expect hydrogen to recombine. This assures that no major changes to the spectrum are caused by the wind outflow at larger distances than considered in our model.
Varying the adopted mass loss rate in the CMFGEN models has a large effect on the extreme UV flux, as can be seen in fig:SED_M12. Increasing the mass loss rate enhances the density in the wind, effectively making it more optically thick in the continuum. This moves the photosphere outward and reduces the effective temperature. In the case of our model with enhanced mass loss, we find that the photosphere is located at 0.99 R_⊙, while the stellar surface is located at 0.68 R_⊙. The resulting effective temperature, T_eff∼70 000 K, is almost 10 000 K lower than the temperature at the stellar surface. This effect is commonly observed for the higher-mass counterparts to stripped stars, WR stars, which typically have dense, optically thick winds. For the models that adopt our standard and reduced mass loss rates we find no significant difference between the effective and the surface temperature, indicating that in these cases the wind is optically thin in the continuum. Many spectral lines are however optically thick, as we discuss in sec:spectral_features_M12.
Considering the relation between wind mass loss rate and radiation pressure through the Eddington factor (Γ_e), we expect a realistic wind mass loss rate in the considered parameter range to be closer to that used in the “reduced” mass loss model <cit.>. However, these estimates were derived for more massive stars and therefore also involve an extrapolation to the lower-mass regime of the stripped stars presented here.
§.§ Characteristic spectral features
To show the spectral features more clearly, we plot the normalized spectra in fig:spectra_M12 for our standard, enhanced and reduced mass loss rates. Our CMFGEN models show emission lines with parabolic shapes, which is typical for lines that are created in the wind and are optically thick.
The strongest line in the optical spectrum is HeII λ4686, with an equivalent width of 95.6 Å and a peak flux of more than 10 times that of the neighboring continuum in the standard wind mass loss model (see enlargement in panel e of fig:spectra_M12).
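For reference, the equivalent width quoted here follows the usual definition EW = ∫(1 - F_λ/F_c) dλ, so strong emission gives a large negative value whose magnitude is reported. The toy sketch below (illustrative numbers only, not the actual CMFGEN output) measures |EW| ≈ 100 Å for a Gaussian emission line of comparable strength.

```python
import numpy as np

# Toy stand-in for HeII 4686: continuum-normalized flux with a Gaussian
# emission line peaking at ~11x the continuum (illustrative numbers).
wav = np.linspace(4600, 4800, 4000)                     # wavelength in Angstrom
norm_flux = 1.0 + 10.0 * np.exp(-0.5 * ((wav - 4686) / 4.0) ** 2)

EW = np.trapz(1.0 - norm_flux, wav)                     # absorption convention
print(f"|EW| = {abs(EW):.1f} Angstrom")                 # ~100 for this toy line
```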
Other strong lines in the optical band are the blend of HeII λ6560 and Hα (panel f) and a mix of NIV lines in the range 7100-7140 Å (panel g). We also find weaker emission in CIV λ5801 and CIV λ5812. In the enhanced mass loss model HeI λ5875 shows up owing to the higher abundance of He^+ in the wind compared to the lower mass loss models. The strongest line of the UV spectrum is HeII λ1640, but also the Lyα and HeII λ1215 blend, NIV λ1487 and NIV λ1719 are strong in emission. In the infrared we find the strongest lines to be HeII λ18636, λ30908 and λ47622.
Varying the mass loss rate affects the strength of the emission lines. Lower mass loss rates consistently show weaker lines. If stripped stars had mass loss rates that were even lower than those presented here, our models indicate that they would show little or negligible emission, and mostly an absorption-line spectrum.
Stripped stars share common characteristics with central stars of planetary nebulae (CSPNe) <cit.>. However, CSPNe typically have very high surface gravity, implying broader lines compared to stripped stars. We also expect them to generally be of lower mass and to be post-central-helium-burning objects.
§ EFFECT OF METALLICITY (I): THE FORMATION AND STRUCTURE OF STRIPPED STARS
Numerous studies have discussed the effect of metallicity on the structure and evolution of single stars <cit.>. Summarizing these works, we can distinguish three main effects of metallicity: (1) on the opacity, most notably in the subsurface layers; (2) on the nuclear burning rate, especially of hydrogen fusion through the CNO cycle; and (3) on the mass loss rates for radiatively driven winds. Here, we describe how the different effects of metallicity interplay in the case of the mass-losing star in a binary system. We consider metallicities between Z = 0.0001 and Z=0.02. In the sections below, we discuss the consequences for the pre-interaction phase, the removal of the envelope, the resulting characteristics of the helium star, and its emerging spectrum.
§.§ The pre-interaction phase
During the pre-interaction phase both stars effectively evolve as single stars. Throughout the main-sequence evolution, we find that metal-poor stars are hotter, more compact and a little more luminous, as can be seen in fig:HRD_Z. This is consistent with what has been found for single stars as described in the studies mentioned above.
This is, in part, due to the effect of metallicity on the nuclear reaction rates for hydrogen burning through the CNO cycle, as already pointed out in the earliest papers on the subject <cit.>. At lower metallicity, the catalysts for the CNO cycle are more scarce and hydrogen burning is less efficient. The star compensates by contracting, which increases the central temperature. This, in turn, raises the nuclear reactions rates to levels that are required for the star to be in thermal equilibrium. The result is a hotter, more luminous star.
These days, stellar evolutionary calculations adopt more realistic opacity tables <cit.>, which have a more sophisticated treatment of the many bound-bound and bound-free transitions due to metals. These are most important in the cooler subsurface layers of the star, where heavy elements are only partially ionized. The most prominent example is the so-called iron peak, which plays a role in the subsurface layers at log_ 10 T ∼ 5.2 <cit.>. The lower opacity in our metal-poor models contributes to the fact that they are hotter and more compact.
In the deep interior, electron scattering is the dominant source of opacity, which does not depend on the metallicity, at least not directly, as κ_ es = 0.2 (1+X_ H) cm^2 g^-1. However, when initializing our models we chose to scale the initial mass fraction of hydrogen and helium with metallicity. The hydrogen abundances decrease as the metallicity rises in agreement with the overall trend expected for the chemical enrichment of galaxies over cosmic time. This way we introduced a mild indirect dependency of κ_ es on the metallicity. Our metal-poor models initially have a ∼ 8 % higher value for X_ H than our metal-rich models.
The effect on the opacity in the interiors is small, but of importance since the electron scattering plays a role in determining the extent of the convective core, which in turn determines how much fuel the central burning regions can access. fig:Kipp_M12 shows the extent of the convective core in mass coordinate. Our metal-poor model indeed has a more massive convective core at the start of its evolution because of a combination of the effects discussed above.
We can further see the effect of the metallicity dependence of mass loss by radiatively driven stellar winds. Our metal-rich model loses a few percent of its mass over the course of the main sequence evolution (labeled A-B in fig:Kipp_M12). Our low metallicity model does not show any significant mass loss by stellar wind.
Both the extent of the convective core and stellar wind mass loss have consequences for the final mass of the helium core when the star leaves the main sequence, M_He core, TAMS. We find a small difference in mass of about 6 %, with the metal-poor star having the more massive core (M_He core, TAMS = 3.14 and 2.95 M_⊙, respectively), in addition to subtle differences in the chemical profile above the helium core. The slightly larger core mass and larger total mass of our metal-poor models explain why our metal-poor stars are substantially brighter when they leave the main sequence, as can be observed when comparing the location of the characteristic hook feature that marks the end of the main sequence (see fig:HRD_Z).
In fig:MESA_hestar_prop (panel a) we compare the main sequence lifetimes of the various metallicity models. Reducing the metallicity from 0.02 down to ∼0.0021 increases the lifetime by about 8%, owing to the amount of nuclear fuel that the star can access. When we reduce the metallicity further to 0.0001, this effect is saturated. The various effects that lead to a higher luminosity start to dominate. The stars effectively burn faster through their available fuel. We find that the main sequence lifetime slightly decreases again.
When the star leaves the main sequence it continues to burn hydrogen in a shell around the core. The shell briefly drives a convection zone, which influences the detailed shape of the chemical profile above the core, but we find that it has no influence on the stripping process for the models presented here. For our most metal-rich model, we find that a region of about 1 M_⊙ above the core is partially burned when the star leaves the main sequence. For our most metal-poor model, the region extends to about 1.5 M_⊙ above the core.
§.§ Onset of Roche-lobe overflow and the removal of the envelope
The stars we consider here fill their Roche lobe shortly after leaving the main sequence during the hydrogen shell burning phase. Throughout their pre-interaction evolution, the metal-poor models have been more compact. They need to expand further in order to fill their Roche lobe. Effectively, they fill their Roche lobe at a slightly more advanced evolutionary stage.
The size of the Roche lobe is approximately the same at the moment the primary fills its Roche lobe for the first time, about 27 R_⊙ in all our simulations. There is a small difference because of the metallicity-dependent stellar wind mass loss rates. Mass loss from our metal-rich systems in the form of fast stellar winds leads to a widening of the orbit and thus an increase of the Roche lobe. This effect is partially compensated by the reduction of the mass ratio, q = M_2/M_1, which reduces the relative size of the Roche lobe of the donor. The net result is that the metal-rich models are slightly larger (2 %) at the time they fill their Roche lobe. However, we do not expect this to play a significant role. In test experiments varying the binary separation and mass ratio, we find negligible variations in outcome.
The process of mass stripping by the companion (marked with a black outline in fig:HRD_Z) proceeds in a similar way in all models; see sec:M12_MESA. Initially, the stars respond by contracting rapidly on a dynamical timescale and they subsequently expand on the slower thermal timescale of the outer layers. However, the dynamical phase is not modeled explicitly, but enforced by the assumption of hydrostatic equilibrium. The mass transfer rate peaks at slightly higher values in the metal-poor models. Eventually, when most of the envelope is removed and helium enriched layers are exposed to the surface, there is a certain point at which the star is no longer able to expand in response to any further mass removal. At this point the stars detach as the donor starts to contract.
For our metal-rich models, we find that a larger amount of mass removal is needed to reach this point. Our metal-poor donor stars are still about 4.9 M_⊙ when they detach, more than a solar mass more than our metal-rich models, which are only about 3.8 M_⊙ at this moment. For our metal-rich models the helium surface mass fraction at the moment the two stars detach is about 0.75; this compares to about 0.55 in our metal-poor models.
We expect that the reduced opacity in the outer layers of the metal-poor models is an important factor. In the subsurface layers where the iron peak plays a role, we find that the opacity is about three times higher in our metal-rich models. In addition, also the subtle differences in the interior chemical profile that are inherited from the pre-interaction evolution play a role. A further effect that may contribute is the difference in central temperature. The metal-poor models have more massive and more compact cores, and thus higher central temperatures, which allows for the ignition of central helium burning before the star detaches from its Roche lobe. Our most metal-rich model is still primarily powered by H shell burning when it detaches.
After the stars detach we find that all models contract, fully ignite helium in their center if they had not already done so, and settle to their thermal equilibrium structure as a central helium burning star.
§.§ The resulting stripped star: The long-lived helium burning phase and further evolution
In the previous section we described that the stripping process is inefficient at lower metallicity: it fails to remove the entire envelope. Our metal-poor models consist of a helium core surrounded by a remaining layer of envelope material of more than 1 M_⊙ (right panel of fig:Kipp_M12). This layer consists of a mixture of hydrogen and helium, containing a total of 0.38 M_⊙ of pure hydrogen right after detachment from the Roche lobe. For our metal-rich model the remaining shell contains less than 0.05 M_⊙ of hydrogen, which initially allows for weak burning in a shell around the core (left panel of fig:Kipp_M12). When investigating even lower metallicities (Z ≤ 0.00002), we find that the star never becomes hotter than its zero-age main sequence because of the large amount of leftover hydrogen after mass transfer.
This has consequences for the luminosity and effective temperature. The metal-poor stripped stars are about 30 000 K cooler, 0.3 dex brighter, and almost four times bigger than their metal-rich counterparts; see fig:MESA_hestar_prop and tab:hestar_prop. This is somewhat counterintuitive, since it is the opposite of what is found for single and pre-interaction stars. Before interaction, we find metal-poor stars to be more compact and hotter. After stripping we find metal-rich stars to be more compact and hotter.
The difference in mass, luminosity and composition affects the remaining lifetime. Panel b in fig:MESA_hestar_prop shows that the remaining lifetime increases with metallicity by about 20%. This is because of the lower mass of the stripped stars at higher metallicity. The fact that our metal-poor stars still provide part of their luminosity through H burning in a shell does not prolong their lives.
The further panels in fig:MESA_hestar_prop provide an overview of various properties of stripped stars when they are halfway through their helium burning phase (defined as the moment when X_He,c = 0.5). The shaded bands span the variation in properties during the hot phase of their helium burning lifetime (which we define as 0.9 > X_He,c > 0.05).
In fig:evol_strip we compare the evolution as a function of time during the helium burning phase of our low metallicity model (Z=0.0002) and our metal-rich reference model. In the top row we compare the evolution of the effective temperature as a function of time. Plus symbols indicate the central helium mass fractions X_He,c = 0.9, 0.5 and 0.05 during core helium burning. For our metal-rich model, we see a first quick rise of the effective temperature, when the star is still contracting within its Roche lobe. Afterward, the effective temperature steadily rises by about 0.1 dex over the course of the helium burning phase. At the end of the helium burning phase, the temperature rises briefly until helium shell burning ignites and the star expands; this is visible in the diagram as a drop in the effective temperature. In contrast, our metal-poor model remains substantially cooler during the first part of the helium burning phase. The temperature slowly rises and settles at about log _10 (T_eff, mesa/K) = 4.7 only after a third of the helium burning phase.
In the second row of fig:evol_strip we show the luminosity produced by hydrogen and helium burning separately over time for our metal-poor and metal-rich models. Shortly after the end of Roche-lobe overflow, the helium burning luminosity quickly rises, but hydrogen burning still provides most of the energy and exceeds the helium burning luminosity by about 0.5 dex. As the star evolves along the helium burning main sequence, the contribution of hydrogen burning quickly drops in the metal-rich model, as expected from our earlier discussion of the very thin shell that remains and the effect of the stellar winds. In contrast, the thick H-rich shell present in the metal-poor case remains actively burning hydrogen throughout the full helium burning phase. It does, however, weaken, and after about 20% of the helium burning lifetime helium burning takes over as the dominant source of energy. This is roughly the same time at which the effective temperature of the star stabilizes, as can be seen in the panel above.
In the third row of fig:evol_strip we compare the evolution of the H and He surface mass fraction as a function of time. At the far left of the diagram, we see the quick reversal of H and He due to Roche-lobe stripping. In both cases He is the dominant element at the surface. In the case of the metal-rich star we see the effect of stellar winds slowly removing the outer layer containing H. After about two-thirds of the helium burning phase, the winds have removed the last remaining H and the surface abundance of H quickly drops to zero. We find this behavior in all our models with metallicity Z ⩾ 0.01.
In contrast, for the metal-poor model we see that the surface abundance of H and He are constant throughout the helium burning phase, since the wind mass loss is negligible. At the end of the evolution, during helium shell burning we find that the star fills its Roche lobe again. This removes part of the H-rich layer, but not all of this layer. At metallicities Z ⩽ 0.0047, the stripped stars have enough hydrogen-rich envelope left at this stage to fill the Roche lobe a second time. Because the remaining evolution is very rapid, these stars are likely to end their lives during mass transfer.
In the last row of fig:evol_strip we show the total mass of hydrogen present in the star. In the metal-rich case, the star is deeply stripped during Roche-lobe overflow. During the helium core burning phase the star loses more mass through stellar winds. As a result, the total hydrogen mass is M_H,tot ∼ 0.05 M_⊙ right after the end of Roche-lobe overflow, and it completely disappears before explosion owing to mass loss by winds. The metal-rich stripped star is expected to give rise to a type Ib supernova.
In the metal-poor case, the Roche-lobe overflow phase leaves M_H,tot ∼ 0.35 M_⊙. Mass loss by winds is negligible in this case, but shell burning significantly decreases the amount of hydrogen by converting it into helium. At the end of the helium burning phase ∼ 0.2 M_⊙ is left. During helium shell burning the stripped star swells up and fills its Roche lobe a second time. The second phase of mass transfer decreases the total hydrogen mass to ∼ 0.03 M_⊙ at the end of our calculations. When the metal-poor stripped star ends its life, it is thus still expected to show signatures of hydrogen in the very early spectra of the supernova.
§ EFFECT OF METALLICITY (II): THE EMERGING SPECTRA OF STRIPPED STARS
In the previous section, we showed that stripped stars at higher metallicity have hotter surfaces, are less luminous, have stronger stellar winds, and contain less hydrogen at their surface. Here, we discuss how the combined effects of metallicity influence the spectral energy distribution and the characteristic spectral features. For this we use our CMFGEN atmosphere simulations created for the stripped stars discussed in the previous section, taken when they are halfway through their central helium burning phase (X_ He, c = 0.5).
§.§ Spectral energy distribution and flux of ionizing photons
In fig:SED_Z we show the spectra in conventional units (λ F_λ at 1 kpc in erg s^-1 cm^-2 versus λ in Å, where λ is the wavelength and F_λ the flux emitted at that wavelength) for stripped stars with metallicities between Z = 0.0001 and 0.02. All spectra peak in the extreme UV at wavelengths between the thresholds for ionization of HeI and HeII (indicated with vertical dashed lines). The metal-rich models peak at slightly shorter wavelengths. This is due to their higher surface temperatures. The metal-rich models also have stronger and denser winds. In principle, this can move the photosphere outward to larger radii, resulting in a reduction of the effective temperature that characterizes the emerging spectrum. However, for our standard mass loss rates we find that the winds are transparent in all cases. The radius of the stellar surface, R_⋆, and that of the photosphere, R_eff, coincide; cf. tab:hestar_prop. The metal-poor models are brighter and show the distinctive trough at 912 Å at the threshold for hydrogen ionization. This feature is weaker in the metal-rich models. They still have traces of hydrogen at their surfaces (X_H, s ∼ 0.2), but this hydrogen is completely ionized.
Our spectral models show that stripped stars are very efficient emitters of ionizing photons at all metallicities. In our high metallicity reference model we find that the stripped star emits 85% of its energy as HI ionizing photons, 60% as HeI ionizing photons, and 5 × 10^-6 % as HeII ionizing photons. In fig:Q_Z we show the metallicity dependence of the total number of ionizing photons emitted per second, Q_0, Q_1, and Q_2, at wavelengths shorter than the ionization potentials of HI, HeI and HeII, respectively; see also tab:Q_Z. We show the results for the three variations adopted for the stellar wind mass loss rates: standard, three times enhanced, and three times reduced. For reference, we also show an estimate for a blackbody spectrum with the same temperature as the stellar surface, i.e. the effective temperature given by the MESA models (open symbols).
§.§.§ HI ionizing photons
The flux of HI ionizing photons (left panel of fig:Q_Z) is about 10^48 s^-1 and rises by about a factor two with decreasing metallicity. This can be understood as the result of two effects that counteract each other. As we have shown in sec:evol_Z_rlof, the metal-poor models are not completely stripped by Roche-lobe overflow; a hydrogen-rich layer is left at the surface. This results in stripped stars that are more massive, luminous and slightly cooler (see fig:MESA_hestar_prop). The higher luminosity favors the production of ionizing photons, while the lower temperatures disfavor it. The net effect is a mild increase of ionizing photons with decreasing metallicity with a peak at Z=0.0002.
We only find a variation of ≲ 2% in Q_0 when varying the assumed mass loss rate. Our results are thus robust against uncertainties in the mass loss rate.
A simple blackbody estimate for the hydrogen ionizing flux (open symbols), based on the surface temperature given by MESA, is remarkably accurate. It overestimates Q_0 by only about 10%. This is potentially interesting, since the blackbody approximation provides a computationally cheap alternative to the detailed atmosphere simulations that we conducted. The reason for the small difference between the detailed simulations and the blackbody estimate is that hydrogen is almost completely ionized throughout the wind (see fig:ionisation_structure_M12). Without neutral hydrogen present, the hydrogen ionizing photons cannot be used for hydrogen ionization in the wind and propagate into the surroundings.
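To illustrate how cheap such a blackbody estimate is, the sketch below integrates the Planck photon flux shortward of an ionization edge. The stellar parameters are illustrative values of the order of our reference model (T_eff ∼ 80 kK, R ∼ 1 R_⊙), not the exact MESA output, so the printed rates should be read only as order-of-magnitude (∼10^48 s^-1 for Q_0):

```python
import numpy as np
from scipy.integrate import quad

h, c, kB, Rsun = 6.6261e-27, 2.9979e10, 1.3807e-16, 6.957e10  # cgs units

def blackbody_q(teff: float, r_rsun: float, lam_edge_aa: float) -> float:
    """Photon emission rate [s^-1] shortward of lam_edge_aa (Angstrom) for a
    blackbody of temperature teff [K] and radius r_rsun [solar radii]."""
    x_edge = h * (c / (lam_edge_aa * 1e-8)) / (kB * teff)
    # Photon flux from the surface: pi*B_nu/(h*nu) integrated over frequency,
    # written in the dimensionless variable x = h*nu / (kB*T).
    prefactor = 2.0 * np.pi * (kB * teff / h) ** 3 / c**2
    integral, _ = quad(lambda x: x**2 / np.expm1(x), x_edge, x_edge + 50.0)
    return 4.0 * np.pi * (r_rsun * Rsun) ** 2 * prefactor * integral

for label, edge in [("Q0 (HI, <912 A)", 912.0),
                    ("Q1 (HeI, <504 A)", 504.0),
                    ("Q2 (HeII, <228 A)", 228.0)]:
    print(f"{label}: {blackbody_q(8.0e4, 1.0, edge):.2e} photons/s")
```

As discussed below, this approximation is adequate for Q_0 and Q_1 but fails badly for Q_2, since He^2+ recombines in the wind.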
§.§.§ HeI ionizing photons
We find very similar trends for the flux of HeI ionizing photons (central panel of fig:Q_Z), which is only a factor of two below the HI ionizing flux. The HeI ionizing flux peaks at a metallicity Z = 0.001 and also closely follows the estimate from a simple blackbody. This is because neutral helium is also almost completely depleted in the wind. The detailed spectral simulations give a 20% higher flux than the simple blackbody estimate. The predictions for the HeI ionizing flux are also robust against uncertainties in the mass loss rates.
§.§.§ HeII ionizing photons
The flux of HeII ionizing photons (right panel of fig:Q_Z) is strongly reduced and lies 4-8 orders of magnitude below the estimates for the HI and HeI ionizing photon fluxes (note that the scale on the vertical axis differs between the three panels). The results are extremely sensitive to the assumed mass loss rate (shaded band), giving variations of almost 6 orders of magnitude at high metallicity. This is because the helium recombination front is very sensitive to the assumed mass loss rate, and therefore we cannot accurately estimate the emission rate of HeII ionizing photons. The sensitivity to our multiplicative variations in the mass loss rate is only reduced at very low metallicity (Z ≤ 0.0002), where the stellar winds become insignificant, but even there it leads to variations of an order of magnitude.
A blackbody estimate for the HeII ionizing flux is not appropriate. It overestimates the flux by at least two orders of magnitude. The reason is that helium is not completely ionized throughout the wind: in the outer parts the density is high enough and the temperature low enough for He^2+ to recombine to He^+. The high-energy photons are then used to ionize the wind instead of emerging into the surroundings. We conclude that the HeII ionizing photon rates are at present too uncertain to provide meaningful quantitative predictions.
§.§ Spectral features
The spectra of metal-rich stripped stars show a rich forest of emission lines spanning not only the extreme UV where the emission peaks, but also the near UV, optical and near infrared which are accessible by ground-based facilities.
At lower metallicity the stripped stars have lower mass loss rates and lower surface temperatures, which has implications for the spectral signatures. The emission lines decrease in strength with decreasing metallicity. This can be seen most clearly in fig:spectra_Z, which shows the normalized spectra with inset panels that zoom in on the most prominent lines. Most of these lines are recombination lines (see e.g. panels a, c and e in fig:spectra_Z). The strong lines we pointed out for our reference model in sec:M12_CMFGEN remain the most important lines, but with lower metallicity they decrease in strength, and at Z ⩽ 0.0002 they turn into absorption lines. We also see changes in the shapes of the lines. We find a sequence changing from absorption lines into P Cygni profiles. The Lyα and HeII λ1215 blend in panel a of fig:spectra_Z is an example. In other cases we find absorption lines changing into emission profiles with increasing mass loss rate; see for example the HeII λ4686 line.
§ DISCUSSION AND IMPLICATIONS
A full assessment of the implications will require larger model grids spanning different initial masses, periods, mass ratios and a more extensive exploration of the uncertainties. However, based on the insight resulting from the exploratory calculations presented here, we foresee several implications. We discuss them briefly in this section and speculate on the basis of the very simple estimates that we can make at present.
§.§ Budget of ionizing photons and implications for cosmic reionization
Quantifying the budget of ionizing photons produced by stellar populations is of wide interest for a variety of applications. These range from cosmological simulations that assess the reionization of the intergalactic medium (IGM), to spectral synthesis simulations used to understand the properties of the strong emission lines of galaxies at intermediate to high redshift, to nearby HII regions.
Massive Wolf-Rayet (WR) stars are the stars in stellar populations that emit ionizing photons at the highest rates. We explored the properties of stripped stars, which are produced only in binaries. We showed that these stripped stars exhibit high effective temperatures at all metallicities we considered. They emit the majority of their flux as HI and HeI ionizing photons.
To provide a first ballpark estimate of the relative emission rate of stripped stars, we compare our models with a typical WR star. Stripped stars are the lower mass counterparts of WR stars, as they are also the stripped helium cores of massive stars. Wolf-Rayet stars are more luminous than stars stripped in binaries. However, WR stars have shorter lifetimes and they are not favored by the stellar initial mass function. For the typical values for a WR star, we used the WC model by <cit.>, which corresponds to an initial mass M_WR, init = 60 M_⊙ single star. This star spends about Δ t_WR ∼ 0.4 Myr in the WR phase, during which it emits Q_0, WR ∼ 2.8×10^49 HI ionizing photons per second. We assumed all stars with this initial mass become WR stars at some point during their lives (f_WR, 60 M_⊙ = 1).
For stripped stars we take our reference model with standard mass loss rate, which has an initial mass of 12 M_⊙. We assumed that a fraction f_strip = 0.33 of stars with this initial mass becomes stripped <cit.>. Using a <cit.> initial mass function, the relative contribution can then be estimated as
η = (f_strip / f_WR, 60 M_⊙) × (Q_0, strip / Q_0, WR) × (Δt_strip / Δt_WR) × (M_strip, init / M_WR, init)^-2.35
= (0.33 / 1.0) × (1.19 × 10^48 s^-1 / 2.8 × 10^49 s^-1) × (1.2 Myr / 0.4 Myr) × (12 M_⊙ / 60 M_⊙)^-2.35 ≈ 1.8
With this very simple estimate, we find that the stripped stars produce almost twice the amount of ionizing photons in comparison to the WR stars. More extensive modeling is needed to assess the full contribution for a realistic population, but it seems likely that stripped stars make a significant contribution to the total budget of ionizing photons.
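Because the estimate is a simple product of ratios, it is trivial to script and to vary; the following sketch just re-evaluates it with the numbers quoted above:

```python
# Relative HI-ionizing photon contribution of stripped stars vs. WR stars,
# using the values quoted in the text (Salpeter IMF slope of 2.35).
f_strip, f_wr = 0.33, 1.0            # fractions of stars becoming stripped / WR
q0_strip, q0_wr = 1.19e48, 2.8e49    # HI-ionizing photon rates [s^-1]
dt_strip, dt_wr = 1.2, 0.4           # emitting lifetimes [Myr]
m_strip, m_wr = 12.0, 60.0           # initial masses [Msun]

eta = ((f_strip / f_wr) * (q0_strip / q0_wr) * (dt_strip / dt_wr)
       * (m_strip / m_wr) ** -2.35)
print(f"eta = {eta:.2f}")  # ~1.8: stripped stars contribute ~twice the WR budget
```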
An accurate estimate would require extensive spectral model grids of stripped stars spanning a range of masses. This is beyond the scope of this paper, but a full grid will be presented in Götberg et al. (in prep.). We note, however, that a boost of only a factor of a few would, in principle, be enough to complete cosmic reionization by redshift 6–7 <cit.>.
The most promising aspect of stripped stars, as potential contributors of the photons needed for reionization, is that they emit them with a time delay. The progenitor star first has to complete its main-sequence evolution, which takes about 20 Myr for the model presented here. Allowing for different progenitors with different masses and lifetimes, we expect that the boost of ionizing photons comes with delay times ranging from a few to at least 100 Myr (Götberg et al. in prep., see also predictions by <cit.>). Observations of nearby star clusters of this age suggest that this is sufficient time to remove most of the remaining gas <cit.>. Numerical simulations also indicate that large local and temporal variations of the escape fractions are possible, as feedback of massive stars removes gas and clears lines of sight <cit.>.
This suggests that the slightly delayed photons produced by stripped stars have a much larger chance to escape and become available to ionize the intergalactic medium. Quantitatively assessing the impact of this requires reliable estimates of the escape fraction, which are not available at present. A boost of an order of magnitude in the escape fraction of ionizing photons for stripped stars does not appear to be unreasonable in light of the findings in the high resolution simulations presented by <cit.>, who report temporal variations in the escape fraction that reach up to six orders of magnitude.
Finally, we note briefly that binary stars have other ways to change the spectral energy distributions of stellar populations, apart from producing stripped stars as we discussed here. Substantial contributions are expected from mass gainers and stellar mergers, which effectively repopulate the upper end of the initial mass function as blue stragglers <cit.>. The BPASS simulations suggest that these also make an important contribution <cit.>.
Furthermore, a subset of binaries remain bound after the first star ends its life as a neutron star or black hole. A fraction of these evolve through an X-ray binary phase. A possible importance of their contribution to heating and reionization of the intergalactic medium (IGM) at high redshift has also been considered by various authors <cit.>. However, <cit.> have concluded that the contribution of X-ray binaries to the ionization of the bulk IGM is negligible.
§.§ Observability of stripped stars and strategies to find these stars
Despite the expectation that a large fraction of massive stars interact in a binary system during their lifetime and produce a long-lived stripped star, only very few systems have been detected so far <cit.>. This apparent paradox may simply be the result of biases and selection effects <cit.>, but this has not been quantified. Our simulations provide insight into these biases and can guide efficient observing campaigns that aim to increase the observed sample of stripped stars.
The main challenge is that hot, stripped stars do not appear in isolation. They still reside in orbit around their companion star, which typically dominates the optical and UV flux. This is illustrated in fig:SED_comp, where we show the SED of our reference model (blue shading) together with the spectra of three possible main sequence companion stars, roughly spanning the range of companions that may be expected after interaction. The companion spectra come from the ATLAS9 models <cit.>; see tab:Kurucz for an overview of the adopted parameters. The companion models roughly correspond to evolutionary tracks of a 20, 11, and 3 M_⊙ star halfway through central hydrogen burning (defined as X_H,c = 0.5). Their properties are still close to their zero-age main sequence properties, as we expect for relatively unevolved or rejuvenated companions.
One possible strategy is to detect stripped stars through the UV excess that would be observed in otherwise apparently normal, main sequence stars. However, as fig:SED_comp shows, the emission of stripped stars peaks in the extreme UV, which is not accessible from the ground or space with present day observing facilities [The only mission that systematically explored the extreme UV is the all-sky Extreme Ultra Violet Explorer (EUVE; operational in 1992–2001)]. The UV regime between 912-3200 Å is accessible to present-day facilities or can be mined in the FUSE, GALEX and IUE archives. Considering the UV excess alone, fig:SED_comp shows that it is challenging to detect the stripped star considered here if the companion is an O9V or B1V star. However, the UV excess can be used to search efficiently for hot star companions orbiting lower mass stars with later spectral types. For example, our stripped star heavily dominates in UV flux over the B8V companion shown in fig:SED_comp. For comparison, the companion of the observed system HD 45166, which is the only clear case of an identified stripped star in this mass regime, is indeed a later type (B7V) star with a mass of about 5 M_⊙ <cit.>; this is consistent with the trend shown in fig:SED_comp.
Another promising strategy is to search for the emission lines of stripped stars. Our models suggest that for certain combinations of stripped stars plus main sequence companions, these emission lines may be visible above the continuum of the companion. The most promising optical line is the HeII λ4686 line. Further lines of interest are the HeII λ1640 line in the UV and a sequence of strong lines in the near IR, although the overall drop of intensity may make these more challenging to use. The strength and shape of the emission lines depend on the uncertain mass loss rates and terminal wind speeds. A search for these emission features recovers only a subset of the population of stripped stars, with the mass ranges and orbital companions dictated by the actual values of the mass-loss rates and wind terminal speeds of stripped stars.
Further strategies include searches for radial velocity (RV) variations and eclipses. The RV searches are challenging, since the expected variations for the brighter and more massive companion star are typically small and its lines are likely broadened by rotation <cit.>. The RV variations in the possible emission lines of the stripped star would be larger, but these lines would need to be detected first. Binary systems hosting a stripped star that shows eclipses should be rare because of the small radii of stripped stars (∼ 1 R_⊙). Moreover, the orbit is expected to have widened as a result of previous mass transfer. However, they should appear in sufficiently large optical transient surveys, such as the Optical Gravitational Lensing Experiment <cit.>. Multicolor eclipse data should allow the identification of eclipses caused by a hot companion.
Finally, the strong EUV radiation of stripped stars could potentially be observed indirectly through emission lines in nearby gas that requires ionization by high-energy photons. The characteristic emission lines provide a diagnostic of the hardness of the ionizing source, which should in principle allow one to differentiate between the presence of a stripped star and other ionizing sources, such as an (accreting) neutron star or black hole. Similarly, the stripped stars may have potentially observable effects through irradiation of their companion. For example, they may induce differences between the day and night sides of the companion, or they may partially ionize the disk of the companion, if present. Further simulations of the expected consequences are warranted.
§.§ Failure to remove the H-rich envelope at low Z (I) – implications for the ratio of SN type Ibc, IIb and II
One of our key findings is that the Roche-lobe stripping process is inefficient at low metallicity. A thick layer of hydrogen-rich material is left at the surface of the stripped star when it contracts within its Roche lobe and mass transfer ceases. As discussed in sec:evol_Z_rlof, we believe this is mainly the result of the lower opacity in the layers below the surface in these models.
This has potentially important implications for the final surface composition of supernova progenitors <cit.>. Roche-lobe stripping is considered the main progenitor channel leading to type Ibc supernovae, i.e. core collapse supernovae that do not show signatures of H in their spectra <cit.>. At high metallicity, Roche-lobe stripping effectively removes the entire H-rich envelope for the orbital periods that we considered here. The small amount that remains is subsequently removed by the stellar wind before the stars explode. We thus expect higher metallicity stripped stars to end their lives as type Ibc supernovae, which is consistent with earlier works <cit.>.
However, we find very different results for low metallicity. For our metal-poor models we find that the first Roche-lobe overflow event fails to remove the last ∼ 1 M_⊙ of the envelope, which contains a mixture of hydrogen and helium. The total mass of pure hydrogen is ∼ 0.3 M_⊙ at this stage. How much hydrogen is left at the final explosion depends on whether or not the system evolves through a second mass transfer event during the helium shell burning phase. This depends on the initial orbital period and on the mass of the stripped star. We adopted a relatively short orbital period in our simulations and we find that most, but not all, of the hydrogen is removed during a second mass transfer. The final remaining hydrogen masses in our models are given in tab:hestar_prop. Our most metal-poor model evolves through a second mass transfer phase and has M_H,tot = 0.03 M_⊙ left at the moment of explosion. This amount of hydrogen is small, but could be detected in early-time spectra <cit.>, suggesting that the supernova could be classified as type IIb, if detected early enough. We refer to <cit.> for a discussion of the effect of mass and period.
The general prediction from our models is thus that the fraction of type Ibc decreases for lower metallicity, while the fraction of type IIb rises, a conclusion that is also drawn by <cit.>.
Determining these fractions observationally has proven to be challenging, since most surveys are biased toward more massive galaxies, introducing a bias toward metal-rich stellar populations <cit.>. The most careful and comprehensive present-day analysis appears to be the work by <cit.>. In the most recent observational results, a reduction of the type Ibc rate is found for less massive galaxies. Given the general galaxy mass-metallicity relation, this may indicate that type Ibc supernovae are less prevalent at lower metallicity <cit.>, consistent with what our models predict. The type IIb rate appears to be unaffected. This conclusion differs from the findings of earlier investigations, which showed an increase of the type IIb rate in dwarf galaxies <cit.>.
§.§ Failure to remove the H-rich envelope at low Z (II) – Implications for rapid population synthesis simulations including gravitational wave predictions
Our results also have possibly important implications for the variety of rapid population synthesis simulations that rely on the fast but approximate algorithms by <cit.>. In these simulations stripped stars are approximated using evolutionary simulations for pure helium stars that were computed by <cit.>. Our simulations show that this approximation is fairly accurate for high metallicity, but it breaks down at low metallicity.
These algorithms form the basis of a large number of simulations that are used for a variety of predictions. These include results obtained with the StarTrack code <cit.>, binary_c <cit.>, and COMPAS <cit.>.
The remaining hydrogen-rich envelope layer can potentially affect the further evolution of metal-poor binary systems. Further simulations will be needed to investigate this in more detail, but we can expect that a larger fraction of systems experience a second mass transfer stage when the stripped star swells up during helium shell burning. The second phase of mass transfer occurs in our models when Z ≤ 0.0047, but this will depend on the adopted mass and orbital period. This may potentially be of interest for channels where the companion is an accreting white dwarf <cit.>. It could in principle affect the supernova type Ia rate through the single degenerate formation channel <cit.>. If such systems enter a common envelope stage they may produce very tight binary systems that are of interest as gravitational wave sources. For gravitational wave sources in particular, current simulations predict the majority of sources to arise from metal-poor stellar populations, where we expect the effects to be largest.
§ SUMMARY AND CONCLUSIONS
We investigated the effect of metallicity on stars that lose their hydrogen-rich envelope through interaction with a companion. For this purpose we used the detailed stellar evolutionary code MESA and the atmosphere code CMFGEN. Our findings apply to a typical massive binary (where the primary star has an initial mass of 12 M_⊙) that fills its Roche lobe after leaving the main sequence but before the completion of helium burning, and that avoids coalescence. We summarize our main results and their implications.
* In agreement with earlier work, we find that Roche-lobe stripping exposes the helium core of the donor star and produces very hot and compact stars (80 000 K and ∼ 1 R_⊙ in our solar metallicity reference model). These stars fill the gap in mass between their higher mass counterparts, known as Wolf-Rayet stars, and their lower mass counterparts, subdwarf O and B stars. The stripped stars considered here are not expected as a result of single star evolution: they are uniquely produced as a result of binary interaction.
* For single stars it is well known that lowering the metallicity results in hotter and more compact stars, at least in the early evolutionary phases. Surprisingly, we find the opposite for stripped stars. At lower metallicity, mass donors shrink within their Roche lobe before the removal of the hydrogen-rich envelope is complete. We believe that this is due to the reduction of the opacity in the subsurface regions. The result is that metal-poor stripped stars are larger, more massive, more luminous, slightly cooler, and shorter lived than their metal-rich counterparts.
* Stripped stars are very efficient sources of ionizing photons. Despite losing about two-thirds of their mass, the bolometric luminosities of stripped stars are comparable to their pre-interaction main sequence progenitors. However, most of the light is emitted in the extreme UV at wavelengths that are inaccessible by ground- and space-based telescopes. Our reference model emits 85% of its luminosity at wavelengths blueward of the Lyman limit for H ionization and 60% blueward of the threshold to singly ionize helium. The flux at shorter wavelength is very sensitive to uncertainties in the mass loss rate. A corresponding single star of the same initial mass does not emit any significant number of ionizing photons during its helium burning phase. The number of ionizing photons is mildly dependent on metallicity. The HI and HeI ionizing photons vary by less than a factor of two among our models.
* Stripped stars are not as luminous as massive Wolf-Rayet stars, which emit ionizing photons at the fastest rates in stellar populations. However, using a simple estimate we argue that they produce a comparable amount of ionizing photons. Stripped stars are favored by several effects. (a) They are the product of lower mass stars, which are favored by the initial mass function. (b) Their lower mass loss rates give them relatively transparent atmospheres with a photosphere that lies close to their very hot stellar surfaces. This allows them to produce very hard radiation. (c) They evolve more slowly than their higher mass counterparts and spend a longer time in the phase during which they produce ionizing photons (about 1 Myr for the models considered here). (d) The ionizing photons are emitted with a long time delay after starburst, in contrast to those emitted by the short-lived massive Wolf-Rayet stars. This is interesting since it allows time for feedback from massive stars to remove most of the surrounding gas of the birth clouds that could otherwise trap the ionizing photons. We may thus expect a larger fraction of ionizing photons to escape and become available to ionize the intergalactic medium. A full assessment will require larger model grids.
* Our models predict that high metallicity stripped stars have strong emission lines, a prediction that is robust against variations of a factor of three in the adopted wind mass loss rate. The strongest optical spectral feature of stripped stars is the HeII λ4686 emission line. The HeII λ1640 and Hα show similar strengths. Other characteristic features are the CIV λ1548 and 1551 doublet and numerous emission lines of NIV and NV. These lines provide a promising way to identify stripped stars in the vicinity of their optically bright companion stars.
* Our finding that Roche-lobe stripping fails to remove the complete H-rich envelope of metal-poor stars has implications for the further sequence of interaction phases that a binary system may undergo. Our results are in stark contrast with the approximate treatment of stripped stars in widely used rapid binary population synthesis algorithms, where stripped stars are treated as pure helium stars. This is a fair approximation at high metallicity, but it breaks down at lower metallicity. This is a concern, especially for simulations for gravitational wave sources. These predict a dominating contribution from low metallicity, where we find the largest discrepancy. Follow-up studies of the implications are warranted.
* Our results also have implications for the diversity of supernova subtypes, and in particular for whether hydrogen is expected to be present in the final spectra. At high metallicity the combined effects of Roche-lobe stripping and winds remove the remaining hydrogen, as already pointed out in earlier work. At low metallicity, this is not true. After completing central helium burning, our lower metallicity models expand and fill their Roche lobe a second time. Traces of hydrogen are expected to be visible in the spectra of these supernovae when they explode. This is consistent with the observationally derived decrease of type Ibc supernovae in lower mass galaxies.
We conclude that advancing our understanding of stripped stars is of wide interest because of the many implications they have for astrophysics, ranging from questions about ionizing photons to formation scenarios for gravitational wave sources. It is humbling to realize that it has now been more than half a century since the first numerical simulations of this type were performed. At this moment we have the luxury of an improved understanding of the microphysics and increased computational resources. For this study we explicitly chose to limit the extent of the parameter space, not because of limitations in computational power, but because of the richness of processes that required discussion and deeper understanding. However, to move forward from here, prioritization of efforts on the computational and observational side is needed. This will allow us to explore the physical processes and how they behave in the large parameter space, and to increase the observed sample that can be used to verify and test the models.
YG acknowledges first and foremost Emmanouil Zapartas and Mathieu Renzo for the stimulating daily interaction and discussions that were essential for physical understanding, Pablo Marchant for his input concerning MESA and for making his scripts for Kippenhahn diagrams available, María Claudia Ramírez Tannus and Frank Tramper for their expertise in spectral models, Martin Heemskerk for providing computing expertise and support throughout the project and Alessandro Patruno for allowing us to use the Taurus computer.
The authors further acknowledge Colin Norman, Hugues Sana, Norbert Langer, Tomer Shenar, Paul Crowther, Nathan Smith, Douglas Gies, Abel Schootemeijer, Miriam Garcia, Onno Pols, Sung-Chul Yoon, JJ Eldridge, Elizabeth Stanway and Ed van den Heuvel for many stimulating discussions that have helped to shape this paper. We further acknowledge the referee for her/his time and helpful comments.
YG acknowledges Geneva Observatory for the hospitality and inspiring scientific environment during a collaboration visit and to Anna Watts for financial support of this visit through her Aspasia grant. Part of the simulations were conducted on the LISA computing cluster at surfSARA. SdM acknowledges support by a Marie Sklodowska-Curie Action (H2020 MSCA-IF-2014, project id 661502).
This research was supported in part by the National Science Foundation under Grant No. NSF PHY-1125915.
§ CONNECTION BETWEEN THE ATMOSPHERE AND STRUCTURE MODELS FOR VARYING METALLICITY
In sec:tailor we describe how we construct CMFGEN atmosphere models for the stellar structure models computed with MESA. Here, we provide additional plots analogous to fig:stitch_M12 to show the connection for the temperature and density structure. The top, middle, and bottom panels of fig:stitch_Z show our Z = 0.0047, Z = 0.0021, and Z = 0.0002 models, respectively. Within each panel, we show the connection for the standard mass loss rate together with a mass loss rate that is three times enhanced and three times reduced.
The differences in the estimated stellar radii from the MESA and CMFGEN models translate into temperature differences. The largest difference is ∼ 200 K for the Z = 0.0002 model, which is negligible compared to the surface temperature (∼50 000 K). This is accurate enough to reliably predict the spectral properties.
§ IMPACT OF PARAMETER VARIATIONS
§.§ Wind speed and clumping variations
For the atmosphere models presented in this work we adopted a terminal wind velocity v_∞ = 357 km s^-1, motivated by the measurements for the stripped star in the HD 45166 system. This value is small in comparison with the typical values measured for the higher mass counterparts, WR stars, which commonly have v_∞ ≈ 1300 km s^-1 <cit.>. We investigated the impact of changes in this assumption by computing an atmosphere model equivalent to our standard model, but instead assuming v_∞ = 1300 km s^-1 and an increased effect of wind clumping, by setting the wind clumping volume filling factor to 0.1.
The spectral energy distribution is not affected by the changes in these assumptions. However, the spectral features broaden and stand out less strongly above the continuum, as can be seen in fig:spectrum_vinf.
This would make it harder to identify the features of stripped stars when they are accompanied by an optically bright main sequence companion. However, the prominent HeII λ4686 feature shown in panel e) is still about seven times stronger than the continuum, despite spanning about 30 Å.
The example shown considers a more WR-like set of parameters compared to the standard model. It is also interesting to consider spectral models with low wind speed and increased clumping, or high wind speed and less clumping. In the case of low wind speed and increased clumping, we expect the spectral features to be stronger compared to the standard model, as the increased wind clumping makes the atmosphere appear more optically thick. The lines would, however, remain narrow. For a model with high wind speed but less clumping we expect broad features, but potentially more P Cygni profiles compared to the model with high wind speed and more clumping presented in this subsection. This is because less clumping makes the wind appear optically thinner.
§.§ Other uncertain parameters
Here we list several other uncertain parameters, which have not been investigated in detail in this work. We explain how these could potentially impact the appearance of the spectra.
* Composition not scaled according to solar. The relative metal mass ratios may vary with metallicity, in particular over cosmic time. A larger amount of CNO for a specific iron abundance would increase the spectral features of CNO, but not affect the emitted ionizing flux significantly.
* Initial helium mass fraction. The initial helium mass fraction might vary between environments of the same metallicity. Such a change would affect the compactness of the stars throughout the evolution and also affect the size of the helium core after the main sequence. This has direct implications for the luminosity and thus ionizing flux of the stripped stars.
* Including other Fe-group elements in the spectral modeling. Including more elements from the Fe-group could increase the line blanketing in the extreme UV and thereby reduce the flux of ionizing photons.
|
http://arxiv.org/abs/1701.07670v1 | 20170126122442 | On the detection of primordial gravitational waves produced in bouncing models | [
"Nelson Pinto-Neto",
"Arthur Scardua"
] | gr-qc | [
"gr-qc"
] |
nelsonpn@cbpf.br
arthur@cbpf.br
ICRA - Centro Brasileiro de Pesquisas Físicas – CBPF,
rua Xavier Sigaud, 150, Urca,
CEP22290-180, Rio de Janeiro, Brazil
It is widely known that bouncing models with a dust hydrodynamical fluid satisfying c_s^2=p_d/ρ_d≈ 0,
where c_s, p_d, ρ_d are the sound velocity, pressure and energy density of the dust fluid, respectively,
have almost scale invariant spectrum of scalar perturbations and negligible primordial gravitational waves.
We investigate whether adding another fluid with 1/3 < λ = p/ρ < 1, which should dominate near
the bounce, can increase the amplitude of gravitational waves in the high frequency regime, turning them
detectable in near-future observations in this range of frequencies. Indeed, we show that the
energy density of primordial gravitational waves is proportional to k^2(9λ-1)/(1+3λ)
for wavelengths which become bigger
than the Hubble radius when this extra fluid dominates the background. Hence, as λ→ 1 (an
almost stiff matter fluid), the energy density of primordial gravitational waves will increase faster
in frequency, turning them potentially detectable at high frequencies. However, there is an extra factor
I_q(λ) in the amplitude which decreases exponentially with λ.
The net effect of these two contributions
renders the energy density of primordial gravitational waves
not sufficiently large at high frequencies to be detected by present day or near future observations
for models which satisfy the nucleosynthesis bounds and are symmetric with respect to the bounce. Hence,
symmetric bouncing models where the background is dominated by a dust hydrodynamical fluid with small
sound velocity, do not present any significant
amount of primordial gravitational waves at any frequency range compatible with observations, even
if there are other fields present in the model dominating the bounce phase. Any detection
of such waves will then rule out this kind of models.
04.30.-w, 98.80.-k, 98.80.Jk
On the detection of primordial gravitational waves produced in bouncing models
Nelson Pinto-Neto and Arthur Scardua
==============================================================================
§ INTRODUCTION
Cosmological bouncing scenarios solve, by construction, the singularity problem present in
the standard cosmological model <cit.>. As a bonus, they solve other puzzles of the standard
cosmological model, like the horizon and
flatness problems, and they can also supply a mechanism to generate
primordial cosmological perturbations from quantum vacuum fluctuations,
with almost scale invariant spectrum <cit.>, as in
inflationary models <cit.>,
when the contracting phase is mainly dominated by a matter fluid (a fluid with
equation of state p=λρ with λ≈ 0). Hence, they can also be viewed as alternatives to
inflation, although they are not necessarily contradictory to it.
There are nowadays many mechanisms to generate the bounce, normally they involve
new physics and/or new types of fields. There are
also many open questions and issues to be investigated concerning these
models, for reviews see Refs. <cit.>. One of
these questions concerns the presence of primordial gravitational waves. In the case
where the fluid driving the contracting phase is a canonical scalar field,
in which the sound velocity of scalar perturbations c_s is equal to the speed of light, c_s = c=1,
the production of primordial gravitational waves is usually very high <cit.>,
yielding a tensor to scalar perturbation ratio r=T/S ≈ 1 which is incompatible
with observations <cit.> (T and S are the amplitudes of tensor and scalar perturbations,
respectively). On the other hand, for K-essence scalar fields, which mimic hydrodynamical fluids and
c_s^2 = λ ≈ 0,
the amplitude of primordial gravitational waves produced is very small <cit.>,
and they cannot be seen in any band of frequency. This feature is compatible
with present cosmological observations, but
it does not offer any testable prediction by which this model could be confronted with future observations.
As it is well known, the detection of gravitational waves emitted by black holes <cit.>
opened the gravitational waves astronomy era.
One of the possible signals to be detected in different frequency ranges in the next decade,
away from cosmological scales,
are precisely the primordial gravitational waves. Unlike the black hole collision signals
recently detected, these primordial waves are stochastic and less intense. The detection of such waves will give
information about the early Universe <cit.>, e.g., if there was an inflationary era, a bounce,
or even both.
The aim of this paper is to investigate whether high energy modifications of the
model described in Ref. <cit.>, a Universe containing radiation and dust which goes
through a quantum bounce, can increase the amplitude of primordial gravitational waves in the high frequency regime. In fact, the energy density of gravitational waves has a spectrum proportional to
f^2(9λ_c-1)/(1+3λ_c) ,
where f is the frequency and λ_c is the equation of state parameter of the fluid which is dominating the background when the mode is leaving the Hubble radius. Hence, for modes leaving the Hubble radius in the dust dominated phase, it decreases with frequency as f^-2, and it increases as f^2 for modes leaving the Hubble radius in the radiation dominated phase. If one adds to the model a stiff fluid with λ≈ 1, which should dominate its densest phase, so dense that the sound velocity of the fluid becomes comparable with the speed of light <cit.>, then for modes leaving the Hubble radius in the stiff matter dominated phase, the energy density of gravitational waves would increase with frequency as f^4. Our goal is to evaluate whether adding this stiff fluid to the model can sufficiently increase the energy density of gravitational waves in the high frequency regime in a way that they could be detected by future observations, without spoiling the good features of the model (scale invariant spectrum of scalar cosmological perturbations, standard nucleosynthesis phase, etc).
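A minimal sketch evaluating this spectral index for the three fluids that appear in this paper (the λ_c values are the standard ones; nothing else is assumed):

```python
# Spectral index of the primordial GW energy density, Omega_gw ~ f^n with
# n = 2(9*lam - 1)/(1 + 3*lam), for modes exiting the Hubble radius while
# a fluid with equation-of-state parameter lam dominates the background.
def gw_spectral_index(lam: float) -> float:
    return 2.0 * (9.0 * lam - 1.0) / (1.0 + 3.0 * lam)

for name, lam in [("dust", 0.0), ("radiation", 1.0 / 3.0), ("stiff", 1.0)]:
    print(f"{name:9s} (lambda = {lam:.3f}): n = {gw_spectral_index(lam):+.2f}")
# dust: n = -2, radiation: n = +2, stiff: n = +4, as stated in the text.
```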
The paper is divided as follows: in the next section we derive the equations for tensor perturbations when the background is quantized, in section III we describe the features of the full background model, and the qualitative features of the evolution of tensor perturbations in this background. In section IV we solve the tensor perturbation equations semi-numerically, and we present analytical approximations in order to understand the numerical results qualitatively. We finish in section V, with the conclusions.
§ RELIC GRAVITONS IN A QUANTUM BOUNCING MODEL
In this section, we first derive the evolution equations for tensor perturbations
in a bouncing cosmological model near the bounce itself, where the background evolution
is dominated by a single perfect fluid with equation of state p=λρ, and
the bounce is caused by quantum effects described in the framework of quantum cosmology
interpreted along the lines of the de Broglie-Bohm quantum theory, see
Refs. <cit.> for details. Note that the usual Copenhagen point of
view in quantum mechanics cannot be used in quantum cosmology as the whole Universe is being quantized, including
observers themselves, see Ref. <cit.> for a review on this subject.
The de Broglie-Bohm quantum theory assumes the existence of an objective reality, where
positions of particles and/or field amplitudes have definite values, independently of any observation.
It is an explicit non-local realistic quantum theory which satisfies all experimental tests already made in quantum
systems. The Bohmian trajectories describing the scale factor evolution in this framework are calculated, and they are usually non-singular, presenting a bounce due to quantum effects at small scales, and turning to a classical standard evolution when the scale factor becomes sufficiently large. Subsequently, we enlarge the model in order to include dust and radiation, which however are not important during
the bounce itself, and are relevant only when the evolution is classical.
The action we consider describing the physics around the bounce contains an Einstein-Hilbert term
and a single perfect fluid term described by the Schutz formalism <cit.>:
S = S_GR + S_fluid =
-(1/(6ℓ^2)) ∫√(-g) R d^4x + ∫√(-g) p d^4x,
where ℓ = (8πG_N /3)^1/2 is the Planck length (ħ=c=1),
p is the perfect fluid pressure satisfying p=λρ,
ρ is the fluid energy density, and λ=const. The metric
g appearing in action (<ref>) describes a background
Friedman-Lemaître-Robertson-Walker (FLRW) metric and a
first-order tensor perturbation w_ij. It reads
ds^2 = N^2(τ) dτ^2 - a_phys^2(τ) (γ_ij+w_ij) dx^i dx^j.
The constant curvature background spacelike metric is given by γ_ij. It lowers and raises
the indices of the tensor perturbation w_ij, which is
transverse and traceless (w^ij_ |j=0 and w^i_i=0, the bar indicating a covariant derivative with respect
to γ). N(τ) is the lapse function, and defines the
time gauge, once fixed. From now on we consider only flat spatial metrics.
Action (<ref>) with metric (<ref>) yields, after some suitable
canonical transformations (see Ref. <cit.> for details),
the second-order Hamiltonian
H = N H_0,
H_0 = -P_a^2/(4a) + P_T/a^(3λ) + ∫ d^3x [ 6Π^ijΠ_ij/(γ^1/2 a^3) + γ^1/2 a w_ij|k w^ij|k/24 ],
where P_a, Π^ij, P__T are the momenta
canonically conjugate to the scale factor, the tensor
perturbations, and to the fluid degree of freedom, respectively.
These quantities have been redefined in order to be dimensionless, e.g.
a_phys= a/√(V), where V is the comoving volume of
the background spacelike hypersurfaces. The Hamiltonian in Eq. (<ref>)
gives Einstein's equations both at zeroth and first order in perturbation expansion.
No assumption about the background dynamics has been used in order to arrive
at its final form given in Eq. (<ref>).
The Dirac quantization of the background and tensor
perturbations can be implemented by imposing
Ĥ_0Ψ(a,w_ij)=0, where Ĥ_0 is the operator version of
the classical H_0 given in Eq. (<ref>).
The corresponding Wheeler-DeWitt equation is then given by
i∂Ψ/∂ T = ĤΨ≡[ (a^(3λ-1)/4)∂^2/∂a^2 + ∫ d^3x ( -6 a^(3(λ-1))/γ^1/2 δ^2/(δ w_ij δ w^ij) + a^(3λ+1)γ^1/2 w_ij|k w^ij|k/24 )]Ψ.
We have imposed the time gauge N=a^3λ, yielding T as the time variable.
Making the separation ansatz for the wave functional
Ψ[a,w_ij,T]=φ(a,T)ψ[a,w_ij,T], Eq. (<ref>) can be split into
two,
i∂φ/∂ T = (a^(3λ-1)/4) ∂^2φ/∂ a^2,
and
i∂ψ/∂ T = ∫ d^3x ( -6 a^(3(λ-1))/γ^1/2 δ^2/(δ w_ij δ w^ij) + a^(3λ+1)γ^1/2 w_ij|k w^ij|k/24 )ψ.
Using the de Broglie-Bohm quantum theory <cit.>,
Eq. (<ref>) can be solved, yielding a Bohmian quantum
trajectory a(T). Using the de Broglie-Bohm framework described in Ref. <cit.>,
the guidance relation reads
da/dT=-a^(3λ-1)/2∂ S/∂ a.
It is in accordance with the usual Hamilton-Jacobi classical relations da/dT={a,H}=
-1/2a^(3λ-1)P_a with P_a=∂ S/∂ a. Note that
S is the phase of the wave function, and it coincides with the classical action,
yielding the usual Hamilton-Jacobi classical trajectories in the classical limit.
Taking the initial normalized Gaussian at T=0,
Ψ^(init)(χ)=(8/T_bπ)^1/4exp(-χ^2/T_b) ,
where T_b is an arbitrary constant, the solution of Eq. (<ref>) reads <cit.>
Ψ(a,T) = [8T_b/(π(T^2+T_b^2))]^(1/4) exp[-4T_b a^(3(1-λ))/(9(T^2+T_b^2)(1-λ)^2)] × exp{-i[4T a^(3(1-λ))/(9(T^2+T_b^2)(1-λ)^2) + (1/2)arctan(T_b/T) - π/4]}.
The phase of this wave solution yields the Bohmian quantum trajectory for the scale factor
a(T) = a_b
[1+(T/T_b)^2]^1/3(1-λ),
where a_b is the scale factor at the
bounce at T=0. Note that this solution has no singularities and
tends to the classical solution when T→±∞.
The quantity T_b, together with a_b, gives the curvature scale at the bounce,
L_ bounce≡ T_b a_b^3λ.
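The Bohmian trajectory above is simple to evaluate directly. Below is a minimal numerical sketch in Python, assuming purely illustrative values for a_b, T_b and λ (they are free parameters of the model here, not fitted ones):

import numpy as np

# Illustrative parameters (assumptions, not fitted values): bounce scale
# factor, bounce time scale, and equation-of-state parameter of the fluid.
a_b, T_b, lam = 1.0e-25, 1.0, 0.9

def a_bohm(T):
    # Bohmian scale factor a(T) = a_b [1 + (T/T_b)^2]^{1/(3(1-lambda))}
    return a_b * (1.0 + (T / T_b) ** 2) ** (1.0 / (3.0 * (1.0 - lam)))

T = np.linspace(-10.0, 10.0, 5)
print(a_bohm(T))           # symmetric in T and non-singular
print(a_bohm(0.0) == a_b)  # True: the minimum value a_b is reached at the bounce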
Once one has a(T) as a prescribed function
of time, one can perform the time dependent unitary transformation
U = exp{ i ∫ d^3x γ^1/2 (a'/a) w_ij w^ij/2 } × exp{ i ∫ d^3x [(w_ij Π^ij + Π^ij w_ij)/2] ln(√(12)/a) },
yielding the following simple form for the Schrödinger equation for the perturbations:
i∂χ(w,η)/∂η = ∫ d^3x { -(1/(2γ^1/2)) δ^2/δ w^2 + γ^1/2 [ (1/2) w_|k w^|k - (a''/(2a)) w^2 ] } χ(w,η).
We made a transformation to conformal time
η, defined by a^(3λ-1) dT = dη, and
a prime ' denotes the derivative with respect to η.
This is the same Schrödinger equation used in semi-classical
gravity for linear tensor perturbations <cit.>, but the scale factor which appears
in it is the Bohmian trajectory (<ref>), not the classical scale factor.
Remember that Eq. (<ref>) was obtained without ever
using the background Einstein's equations. Hence, it can be used when the background
is also quantized, and it can be extended to the classical regime when other fluids
may become relevant. Note that as the Bohmian scale factor a(η) given in Eq. (<ref>) approaches
the classical limit after the bounce, the matching of the classical and quantum phases is straightforward.
Note, however, that a(η) departs from the classical solution near the bounce, and this
fact leads to some different consequences with
respect to the usual semi-classical approach.
In the Heisenberg representation, the equations for the operator
ŵ_ij read
ŵ_ij'' + 2(a'/a) ŵ_ij' - ŵ_ij|k^|k = 0,
which corresponds to the usual equation for quantum tensor perturbations
in classical backgrounds <cit.>.
It is convenient to expand these quantum mechanical operators into Fourier
modes and subject them to quantization rules:
ŵ_ij(x) = √(6)∑_α = +,×∫d^3
k/(2π)^3/2ε^(α)_ij[w^(α)_k(η)
e^-i𝐤·𝐱â_𝐤^(α).
+ .w^∗(α)_k(η)
e^i𝐤·𝐱â_𝐤^(α)†],
where x=(η,𝐱),
ε^(α)_ij=ε^(α)_ij(𝐤̂)
is the polarization tensor for the two graviton polarization
states + and × labeled by α, and satisfies
ε^(α)ijε^(α')_ij=2δ_αα'.
Also, w^(α)_k(η) are mode functions, and
â_𝐤^(α)†,
â_𝐤^(α) are the usual creation and annihilation
operators, respectively. Such operators satisfy the equal-time
commutation relations
[â_𝐤^(α),â_𝐤'^(α')†]
= δ_αα'δ^(3)(𝐤-𝐤'),
[â_𝐤^(α),â_𝐤'^(α')]
=
[â_𝐤^(α)†,â_𝐤'^(α')†]
= 0,
and the quantum vacuum is defined by
â_𝐤^(α)| 0 ⟩ = 0.
Inserting the above Fourier expansion into Eq. (<ref>),
we obtain the mode equation
w^(α)''_k +
2a'/aw^(α)'_k + k^2
w^(α)_k = 0.
Introducing the canonical amplitude v^(α)_k as
v^(α)_k≡ a w^(α)_k,
the mode equation (<ref>) becomes
v_k^(α)”+ (k^2
-a”/a)v^(α)_k =0,
for each graviton polarization state. From now on, we will omit the index α.
We will also impose vacuum initial conditions when η→ -∞ (where a→∞):
v_k(η→-∞)=e^(-ikη)/√(2k).
One is now able to evolve the gravitational wave mode equation from the initial condition (<ref>) to its amplitude today. The quantum and classical behaviors affect the evolution through the potential a''/a.
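As a concrete illustration of this procedure, the following Python sketch builds the Bohmian background a(η) from a(T) using dη = a^(3λ-1) dT and evolves a single mode through the bounce; all parameter values below are toy assumptions chosen only to keep the integration cheap:

import numpy as np
from scipy.integrate import cumulative_trapezoid, solve_ivp
from scipy.interpolate import CubicSpline

# Toy units (assumptions): bounce scale a_b, bounce time T_b, fluid index lam,
# and comoving wavenumber k.
a_b, T_b, lam, k = 1.0, 1.0, 0.5, 0.5

T = np.linspace(-50.0, 50.0, 40001)
a = a_b * (1.0 + (T / T_b) ** 2) ** (1.0 / (3.0 * (1.0 - lam)))
eta = cumulative_trapezoid(a ** (3.0 * lam - 1.0), T, initial=0.0)
eta -= eta[len(eta) // 2]                    # place the bounce at eta = 0

a_eta = CubicSpline(eta, a)
pump = CubicSpline(eta, a_eta(eta, 2) / a)   # the potential a''/a

def rhs(e, y):                               # v'' = (a''/a - k^2) v
    return [y[1], (pump(e) - k ** 2) * y[0]]

e0, e1 = eta[50], eta[-50]                   # k^2 >> a''/a at both ends here
v0 = np.exp(-1j * k * e0) / np.sqrt(2 * k)   # vacuum initial data
sol = solve_ivp(rhs, (e0, e1), [v0, -1j * k * v0], rtol=1e-8, atol=1e-12)
print(abs(sol.y[0, -1]) * np.sqrt(2 * k))    # amplification relative to vacuum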
In the background model studied in Ref. <cit.>, the fluid dominating at the
bounce was radiation, λ=1/3, with an additional
dust fluid dominating when the Universe was large, in order to furnish an almost scale invariant spectrum of
scalar cosmological perturbations. As explained in the introduction, we will now investigate the situation where
the bounce is dominated by an extra fluid with 1/3 < λ < 1, together with dust and radiation,
which are not relevant
near the bounce. In the next section we will describe the full background model and its qualitative features.
§ THE FULL BACKGROUND MODEL
The present model contains three non-interacting perfect fluids: dust, radiation, and a fluid satisfying p=λρ, with 1/3 < λ < 1, usually with λ≈ 1, which we call almost stiff matter (asm). The dust fluid controls the dynamics of the Universe when it is large, and the asm dominates its dynamics near the bounce, when the curvature scalar reaches its highest values[The curvature scale is proportional to the inverse of the square root of the curvature scalar.], and the Universe moves from the contracting to the expanding phase. The radiation fluid dominates in between these two fluids. When the curvature scale approaches the Planck length scale, the scale factor gets near its smallest value a_b, and quantum effects realize the transition between contraction and expansion, the bounce. This quantum phase is dominated by the asm fluid.
The radiation and dust fluids model massless or ultra-relativistic massive fields, and cold massive fields, respectively. The asm fluid can represent the content of the Universe when it was so dense
that the sound velocity of the fluid becomes comparable with the speed of light <cit.>.
In order to satisfy cosmological observations and the model hypotheses, there are some constraints the asm fluid must fulfill[Imposing these constraints will limit the amplification of gravitational waves in the asm era, as we will see.]:
* The quantum effects must be restricted to the asm dominated phase;
* Radiation must dominate during nucleosynthesis;
* There must be a classical region between asm and radiation.
As shown in Fig. (<ref>), the Universe had a contracting phase in the past, when it was almost flat and very homogeneous. The inhomogeneities were generated by quantum vacuum fluctuations in this phase, and amplified afterwards. The tensorial quantum stochastic fluctuations generated in this contracting past were the sources of the primordial stochastic gravitational waves which could be observed today[As they are stochastic, there is no coherent time-dependent signal
that could be detected using a matched-filtering method, as used in the first direct detection of gravitational waves <cit.>].
Waves with different frequencies will have different amplifications, depending on when their wavelengths become larger than the curvature scale of the Universe. While smaller, they do not feel the curvature of the Universe and oscillate as free fields in flat space-time. When their wavelengths become larger than the curvature scale, they are pumped by the gravitational field and get amplified. Fig. (<ref>) shows a comparison between the co-moving wavelength λ=1/k and the co-moving curvature scale |a/a''|^(1/2) along the history of the Universe.
This amplification changes according to which fluid dominates the dynamics of the background when the crossing occurs. Hence, we expect to obtain a different dependence of the amplitude on frequency for each fluid domination.
The background model has two regimes: a classical and a quantum one. In the classical regime, the Friedmann equation relates the scale factor a and the conformal time η through the equation
ȧ= Sign(η) H_0 √(Ω_r+Ω_d a+Ω_λ a^(1-3λ)) ,
where Ω_i ≡ρ_i/ρ_c, i=r,d,λ and ρ_c is the critical density today; H_0 is the Hubble factor today; and λ is the fluid parameter of asm, i.e., p_asm=λρ_asm. We set
a_ today≡ a_0=1.
The critical densities must satisfy the observational constraints: the equality between radiation and dust must occur at redshift z_e = 2740, and the asm must cease to dominate before the nucleosynthesis era, which occurs at redshift z_n ≈ 10^7 <cit.>:
Ω_r =Ω_d (1/(1+z_e)),
Ω_r >Ω_λ (1/(1+z_n))^(1-3λ).
In the quantum regime, Eq. (<ref>) yields
ȧ=Sign(η) H_0 √(Ω_λ a^1-3λ[1-(a_b/a)^3(1-λ)]).
There is a period when both Eq. (<ref>) and Eq. (<ref>) are valid, dominated by a classical asm, which happens when
(a_b/a)^3(1-λ)≪ 1.
Let us take (a_b/a)^(3(1-λ))<1/100≪ 1. Equality between asm and radiation happens for the scale factor
a = (Ω_λ/Ω_r)^(1/(3λ-1)).
Then we get,
a_b 10^(2/(3(1-λ))) < a < (Ω_λ/Ω_r)^(1/(3λ-1)) < a_n,
where a_n is the scale factor at the nucleosynthesis era.
Equation (<ref>) constrains Ω_λ with respect to the scale factor at the bounce a_b and the fluid parameter λ. Because of this equation, the stiffness of the fluid is limited to
λ < 1 - 2/(3 Log_10(a_n/a_b)).
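As a quick numerical illustration of this bound (the hierarchy a_n/a_b below is an assumed toy value):

import math

a_n_over_a_b = 1.0e20                    # assumed hierarchy between a_n and a_b
lam_max = 1.0 - 2.0 / (3.0 * math.log10(a_n_over_a_b))
print(lam_max)                           # ~0.967: lambda can approach, but not reach, 1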
The amplitude of gravitational waves satisfies the wave Eq. (<ref>)
v_k^”+( k^2
-a”/a)v_k =0,
where the potential takes the form
a''/a=(H_0^2/2)[Ω_d/a-(3λ-1)Ω_λ/a^(3λ+1)]   (classical regime),
a''/a=α^2 (a_b/a)^4[1-((3λ-1)/2)(a_b/a)^(3(λ-1))]   (quantum regime),
where
α^2≡H_0^2 Ω_λ/a_b^1+3λ
The behavior of the potential is shown in Fig. (<ref>). The two maxima are classical, due to the radiation-asm transition, one on each side of the bounce. The two minima come from the quantum regime, and the highest peak happens at the bounce.
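A small Python sketch of the two branches of the pump field a''/a, with assumed toy values for the density parameters and bounce depth, reproduces this shape:

import numpy as np

# Toy parameters (assumptions): H0 units, densities, asm index, bounce depth.
H0, Om_d, Om_lam, lam, a_b = 1.0, 0.3, 1e-10, 0.9, 1e-6

def pot_classical(a):
    return 0.5 * H0**2 * (Om_d / a - (3*lam - 1) * Om_lam / a**(3*lam + 1))

def pot_quantum(a):
    alpha2 = H0**2 * Om_lam / a_b**(1 + 3*lam)
    return alpha2 * (a_b / a)**4 * (1 - 0.5 * (3*lam - 1) * (a_b / a)**(3*(lam - 1)))

a = np.geomspace(a_b, 1.0, 7)
print(pot_quantum(a[:3]))    # positive peak at a = a_b, then the quantum dip
print(pot_classical(a[3:]))  # classical branch away from the bounce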
§ NUMERICAL SOLUTIONS AND ANALYTICAL APPROXIMATIONS
For a better understanding of how the different fluids present in the model control the amplitude of gravitational waves, a semi-analytical approach is necessary. Such an approximation can be made by separating the evolution into three regions, as shown in Fig. <ref>.
(A) Outside the potential, or inside the curvature scale: k^2≫ a''/a
(B) Inside the potential, or outside the curvature scale: k^2≪ a''/a
(C) Outside the potential again, or re-entering the curvature scale: k^2≫ a''/a
There are regions in B where k^2>a''/a, but they are negligible.
In A and C, the solutions are oscillatory. Using the quantum initial condition given in Eq. (<ref>), we get
v(η) =e^-i k η/√(2k), in (A)
v(η) =C_1 e^-ikη+C_2 e^i k η, in (C).
In (B), the zero order term neglecting k reads
v(η)=a(η) [B_1+B_2 ∫^η_-η_cdη̅/a^2 (η̅)],
where -η_c denotes the conformal time when k^2=|a”/a| in the contracting phase, η=0 is the bounce conformal time, and η_c is the conformal time when the solution exits the potential again (the potential a”/a is symmetric). The constants can be obtained through matching conditions, and read,
B_2 =(av'-va')|_-η_c
B_1 =v(-η_c)/a(-η_c).
From now on, a_c=a(η_c)=a(-η_c) and a'_c=|a'(η_c)|=a'(η_c)=-a'(-η_c).
The constants in Eq. (<ref>) and Eq. (<ref>) are, using Eq. (<ref>),
B_1 =e^(ikη_c)/(a_c √(2k)),
B_2 =e^(ikη_c)/(a_c√(2k)) (a'_c a_c-ik a_c^2).
Therefore, Eq. (<ref>) can be expressed as
v(η_c)=e^i k η_c/√(2k)[1+(a'_c a_c-ik a_c^2) I(a_c)],
where
I(a_c)=∫_-η_c^η_cdη/a^2(η)=2 ∫^a_c_a_bda/a^2 |a'(a)|.
Using the fact that B_2 is constant, the derivative in the region B can be expressed as
v' =B_2/a+v a'/a
⇒ v'(η_c) =e^ikη_c/√(2k)[a'_c/a_c(2+a'_c a_c I(a_c))-ik(1+a'_c a_c I(a_c))].
With the functions v and v' in region B determined, the constants present in the function in region C are
C_1 =[v(η_c)+v'(η_c)/-ik] e^i k η_c/2
C_2 =[v(η_c)-v'(η_c)/-ik] e^-i k η_c/2.
The critical energy of gravitational waves <cit.> when the waves reenter the curvature scale
is then given by,
Ω_g ≃ (k^5 l_p^2/(3π^2 H_0^2))(|v|^2+|v'/k|^2)= (2k^5 l_p^2/(3π^2 H_0^2))(|C_1|^2 + |C_2|^2)
=(k^4 l_p^2/(3π^2 H_0^2))[ 2+ 4 a'_c a_c I(a_c)+ a'^2_c a^2_c I^2(a_c)+k^2 a_c^4 I^2(a_c)
+(a_c'^2/(a_c^2 k^2))(4+4a'_c a_c I(a_c)+a'^2_c a_c^2 I^2(a_c))].
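The matching algebra above is easy to mechanize. A hedged Python sketch follows; the crossing data a_c, a'_c, I(a_c) and η_c passed below are placeholders to be supplied by the background model:

import numpy as np

# Propagate the vacuum mode through region B and read off the coefficients
# C1, C2 of the outgoing wave; the inputs are the crossing data.
def matching(k, a_c, da_c, I, eta_c):
    v  = np.exp(1j*k*eta_c) / np.sqrt(2*k) * (1 + (da_c*a_c - 1j*k*a_c**2) * I)
    dv = np.exp(1j*k*eta_c) / np.sqrt(2*k) * (
            da_c/a_c * (2 + da_c*a_c*I) - 1j*k * (1 + da_c*a_c*I))
    C1 = (v + dv / (-1j*k)) * np.exp( 1j*k*eta_c) / 2
    C2 = (v - dv / (-1j*k)) * np.exp(-1j*k*eta_c) / 2
    return C1, C2

C1, C2 = matching(k=1e-3, a_c=1e-3, da_c=1e-6, I=1e7, eta_c=0.0)
print(abs(C1)**2 + abs(C2)**2)   # enters Omega_g through k^5 (|C1|^2 + |C2|^2)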
The peak of the potential, which happens at the bounce, leads to a maximum k
k^2_M = (3(1-λ)/2)α^2 ⇒ k^2_M/H_0^2 = 3(1-λ)Ω_λ/(2 a_b^(1+3λ)).
As 10^-31 < a_b ≪ 10^-11 (see Ref. <cit.> for an estimation on that,
remembering that we are setting a_0=1),
this is a huge physical frequency, and implies a minimum
physical wavelength many orders of magnitude smaller than the Hubble radius today.
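A rough order-of-magnitude check of k_M, with assumed (toy) bounce parameters:

import math

# Assumed toy values (a_0 = 1, H0 units): asm density, asm index, bounce depth.
Om_lam, lam, a_b = 1e-10, 0.95, 1e-25
kM_over_H0 = math.sqrt(1.5 * (1.0 - lam) * Om_lam / a_b ** (1.0 + 3.0 * lam))
print(f"k_M / H_0 ~ 1e{math.log10(kM_over_H0):.0f}")   # enormous, as stated above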
For frequencies smaller than this huge maximum frequency,
the term I^2(a_c) dominates in Eq. (<ref>). In fact, as the integrand in Eq. (<ref>)
is a decreasing function of a, one has
a_c |a'_c| I(a_c) = 2a_c |a'_c|∫_a_b^a_c da/a^2 |a'|≫ 2 a_c |a'_c|(a_c- a_b)/a_c^2 |a'_c|≃ 2,
when a_c ≫ a_b, which is the case for k≪ k_M. As at the crossing
a''_c/a_c ≃ (a'_c/a_c)^2 ≃ k^2, and as
I(a_c) = 2∫_a_b^a_c da/a^2 |a'|≃ 2∫_a_b^a_q da/a^2 |a'|≡ I_q,
because the integrand in I(a_c) is dominated by small values of a (a_q denotes the scale factor in the beginning of the quantum phase), the energy density can be expressed as
Ω_g∝(k^6 l_p^2/(3 π^2 H_0^2)) I^2_q a_c^4,
noting that I_q does not depend on a_c. As
a_c≈(H_0^2 Ω_λ/k^2)^(1/(1+3λ_c)),
we obtain
Ω_g ∝ (l^2_p/(3 π^2)) I^2_q (Ω_λ_c/2)^(4/(1+3λ_c))(k/H_0)^(2(9λ_c-1)/(1+3λ_c))
∝ k^(2(9λ_c-1)/(1+3λ_c)),
where λ_c is the equation of state parameter of the fluid which is dominating the background when the mode is leaving the Hubble radius[In a cosmological model described by general relativity
with single fluid domination, leaving the
Hubble radius is the same as leaving the curvature scale and as crossing the potential].
Equation (<ref>) shows that frequencies that cross the potential in the dust era (λ=0) have energy density decaying as f^-2; the ones entering the potential in the radiation era have energy density growing as f^2; and frequencies that cross the potential in the asm era have energy density growing as f^4.
For frequencies k ≥ k_M, the integral I(a_c) is zero, since these waves never cross the curvature scale. In this case, Eq. (<ref>) is dominated by the first term inside the brackets, and hence the energy density also grows
as f^4. It is the usual flat spacetime ultraviolet divergence. These behaviors are shown
in Figs. (<ref>,<ref>,<ref>) below.
Concerning the amplitudes, the term which contributes the most to the energy density is the quantum part of the integral Eq. (<ref>):
I_q = ∫_(a_b)^(a_q) da/(a^2 |a'|) = (1/α)∫_(a_b)^(a_q) da/(a^2 √((a_b/a)^(3λ-1)-(a_b/a)^2))
= 2 arctan(√((a_q/a_b)^(3(1-λ))-1))/(3(1-λ) H_0 √(Ω_λ a_b^(3(1-λ)))).
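For orientation, this closed form can be tabulated directly; the following Python sketch uses assumed toy values for H_0, Ω_λ, a_b and a_q:

import numpy as np

# Assumed toy parameters; only the trend of I_q with lambda matters here.
H0, Om_lam, a_b, a_q = 1.0, 1.0, 1e-6, 1e-4

def I_q(lam):
    x = (a_q / a_b) ** (3.0 * (1.0 - lam)) - 1.0
    return (2.0 / (H0 * np.sqrt(Om_lam * a_b ** (3.0 * (1.0 - lam))))
            * np.arctan(np.sqrt(x)) / (3.0 * (1.0 - lam)))

for lam in (0.5, 0.7, 0.9):
    print(lam, I_q(lam))   # I_q decreases as lambda grows in this range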
Its dependence on λ shows that it decreases until λ≈ 1+2/(3 ln a_b), where it reaches its minimum value, and then it increases rapidly to infinity as λ→1, as shown in Fig <ref>.
However, λ is limited by the constraint Eq. (<ref>), which is also indicated
in Fig <ref>. It shows that, although the energy density grows more steeply with frequency for higher values
of λ, as shown in Eq. (<ref>), the value of I_q decreases significantly with λ in
its physically allowed region, as shown in Fig <ref>. The combination of these two behaviors implies a net decrease in the amplitude with respect to the case without the asm fluid, as shown in
Fig. <ref>.
Note that the usual increase in the energy density due to the depth of the bounce is strongly suppressed by the presence of the asm fluid. Indeed, the ratio between the gravitational wave energy densities of two bouncing models with
different scale factors at the bounce, a_b1 and a_b2, reads, using Eq. (<ref>),
Ω_g1/Ω_g2 = (a_b2/a_b1)^3(1-λ)arctan(√((a_q/a_b1)^3(1-λ)-1))/arctan(√((a_q/a_b_2)^3(1-λ)-1)).
Hence, for fluids with state parameter close to 1 dominating during the bounce, the increase in intensity due to the bounce depth is exponentially suppressed, as shown in Fig. <ref>.
Let us now summarize the properties inferred by our analytical approximations, and exhibit them with numerical calculations:
* The energy density of primordial gravitational waves decreases with the energy density of the fluid which dominates at the bounce. See equations Eq. (<ref>) and Eq. (<ref>), together with
Fig. <ref>.
* The increase with frequency of the energy density of primordial gravitational waves for increasing
λ, with 1/3 < λ <1, does not usually compensate for the decrease of its intensity due to the decrease of I_q with λ presented in Fig. <ref>. This compensation usually happens only at very high frequencies, inaccessible to present-day experiments, see Fig. <ref>.
* The energy density of primordial gravitational waves is more sensitive to the depth of the bounce for lower equation of state parameters λ, as shown in Eq. (<ref>). This sensitivity is shown in Fig. <ref>.
* Finally, Fig. <ref> presents one of the highest energy densities of primordial gravitational waves we found for one particular bouncing model, comparing it with results from inflation and present observational bounds. Note that the amplitude is still far below possible observations.
§ CONCLUSION
In bouncing models containing K-essence scalar fields simulating hydrodynamical fluids with c_s^2=λ,
the amplitude of primordial gravitational waves produced is usually very small <cit.>
for cosmological scales, or low frequencies, but it can grow significantly at high frequencies
if the fluid which dominates the background dynamics at the bounce is as close
to stiff matter as possible. In this paper, we have shown that this can indeed be true, as one can infer
from Eq. (<ref>), but we have also seen that the amplitude of gravitational waves does also
depend on I_q defined on Eq. (<ref>), which gets smaller when the bounce fluid approaches stiff matter.
We have seen that the compromise between these two effects makes the amplitude of primordial gravitational waves
not sufficiently large at high frequencies to be detected by present day or near future observations,
for background models which are symmetric around the bounce and satisfy the nucleosynthesis bounds.
These conclusions are corroborated by
Figs. (<ref>,<ref>,<ref>,<ref>),
based on numerical calculations, and understood through analytical considerations. Hence, it seems that
bouncing models where the background is dominated by hydrodynamical fluids do not present any significant
amount of primordial gravitational waves in any frequency range compatible with observations. Any detection
of such waves will then rule out this kind of model.
An alternative would be to consider bouncing models which
are not symmetric around the bounce due, e.g., to particle production near the bounce <cit.>.
In this case, one could suppose that radiation was created after the bounce, and the nucleosynthesis bounds
originating constraint Eq. (<ref>) could be relaxed, because in the
contracting phase there would be almost
no radiation. It would be a bouncing model with some sort of reheating. In this case, one could have λ
as close to 1 as necessary, yielding a sufficiently large I_q as indicated by the λ≈ 1 part of
Fig. <ref>. In this case, the model could produce a sufficient amount of relic gravitational waves
that could be detected.
N.P.N. would like to thank
CNPq of Brazil for financial support, and A.C.S.
would like to thank FAPERJ of Rio de Janeiro for financial support.
|
http://arxiv.org/abs/1701.07706v3 | 20170126135900 | Global properties of biconservative surfaces in $\mathbb{R}^3$ and $\mathbb{S}^3$ | [
"Simona Nistor",
"Cezar Oniciuc"
] | math.DG | [
"math.DG",
"Primary 53A10, Secondary 53C40, 53C42"
] |
Global properties of biconservative surfaces in ℝ^3 and 𝕊^3
Faculty of Mathematics - Research Department
Al. I. Cuza University of Iasi
Bd. Carol I, 11
700506 Iasi, Romania nistor.simona@ymail.com
Faculty of Mathematics
Al. I. Cuza University of Iasi
Bd. Carol I, 11
700506 Iasi, Romania oniciucc@uaic.ro
The authors' work was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS - UEFISCDI, project number PN-II-RU-TE-2014-4-0004.
[2010]Primary 53A10; Secondary 53C40, 53C42
We survey some recent results on biconservative surfaces in 3-dimensional space forms N^3(c) with a special emphasis on the c=0 and c=1 cases. We study the local and global properties of such surfaces, from extrinsic and intrinsic point of view. We obtain all non-CMC complete biconservative surfaces in ℝ^3 and 𝕊^3.
§ INTRODUCTION
The study of submanifolds with constant mean curvature, i.e., CMC submanifolds, and, in particular, that of CMC surfaces in 3-dimensional spaces, represents a very active research topic in Differential Geometry for more than 50 years.
There are several ways to generalize these submanifolds. For example, keeping the CMC hypothesis and adding other geometric hypotheses to the submanifold or, by contrast, in the particular case of hypersurfaces in space forms, studying the hypersurfaces which are “highly non-CMC”.
The biconservative submanifolds seem to be an interesting generalization of CMC submanifolds. Biconservative submanifolds in arbitrary manifolds (and in particular, biconservative surfaces) which are also CMC have some remarkable properties (see, for example <cit.>). CMC hypersurfaces in space forms are trivially biconservative, so more interesting is the study of biconservative hypersurfaces which are non-CMC; recent results in non-CMC biconservative hypersurfaces were obtained in <cit.>.
The biconservative submanifolds are closely related to the biharmonic submanifolds. More precisely, let us consider the bienergy functional defined for all smooth maps between two Riemannian manifolds (M^m,g) and (N^n,h) and given by
E_2(φ)=1/2∫_M|τ(φ)|^2 v_g, φ∈ C^∞(M,N),
where τ(φ) is the tension field of φ. A critical point of E_2 is called a biharmonic map and is characterized by the vanishing of the bitension field τ_2(φ) (see <cit.>).
A Riemannian immersion φ:M^m→(N^n,h) or, simply, a submanifold M of N, is called biharmonic if φ is a biharmonic map.
Now, if φ:M→(N,h) is a fixed map, then E_2 can be thought of as a functional defined on the set of all Riemannian metrics on M. This new functional's critical points are Riemannian metrics determined by the vanishing of the stress-bienergy tensor S_2. This tensor field satisfies
div S_2=-⟨τ_2(φ),dφ⟩.
If div S_2=0 for a submanifold M in N, then M is called a biconservative submanifold and it is characterized by the fact that the tangent part of its bitension field vanishes. Thus we can expect the class of biconservative submanifolds to be much larger than the class of biharmonic submanifolds.
The paper is organized as follows. After a section where we recall some notions and general results about biconservative submanifolds, we present in Section 3 the local, intrinsic characterization of biconservative surfaces. The local, intrinsic characterization theorem provides the necessary and sufficient conditions for an abstract surface (M^2,g) to admit, locally, a biconservative embedding with positive mean curvature function f and grad f≠ 0 at any point.
Our main goal is to extend the local classification results for biconservative surfaces in N^3(c), with c=0 and c=1, to global results, i.e., we ask that the biconservative surfaces be complete, with f>0 everywhere and |grad f|>0 on an open dense subset.
In Section 4 we consider the global problem and construct complete biconservative surfaces in ℝ^3 with f>0 on M and grad f≠ 0 at any point of an open dense subset of M. We determine such surfaces in two ways. One way is to use the local, extrinsic characterization of biconservative surfaces in ℝ^3 and “glue” two pieces together in order to obtain a complete biconservative surface. The other way is more analytic and consists in using the local, intrinsic characterization theorem in order to obtain a biconservative immersion from (ℝ^2,g_C_0) in ℝ^3 with f>0 on ℝ^2 and |grad f|> 0 on an open dense subset of ℝ^2 (the immersion has to be unique); here, C_0 is a positive constant and therefore we obtain a one-parameter family of solutions. It is worth mentioning that, by a simple transformation of the metric g_C_0, (ℝ^2,√(-K_C_0)g_C_0) is (intrinsically) isometric to a helicoid.
In the last section we consider the global problem of biconservative surfaces in 𝕊^3 with f>0 on M and grad f≠ 0 at any point of an open dense subset of M. As in the ℝ^3 case, we use the local, extrinsic classification of biconservative surfaces in 𝕊^3, but now the “gluing” process is not as clear as in ℝ^3. Further, we change the point of view and use the local, intrinsic characterization of biconservative surfaces in 𝕊^3. We determine the complete Riemannian surfaces (ℝ^2,g_C_1,C^∗_1) which admit a biconservative immersion in 𝕊^3 with f>0 everywhere and |grad f|>0 on an open dense subset of ℝ^2 and we show that, up to isometries, there exists only a one-parameter family of such Riemannian surfaces indexed by C_1.
We end the paper with some figures, obtained for particular choices of the constants, which represent the non-CMC complete biconservative surfaces in ℝ^3 and the way how these surfaces can be obtained in 𝕊^3.
§ BICONSERVATIVE SUBMANIFOLDS; GENERAL PROPERTIES
Throughout this work, all manifolds, metrics, maps are assumed to be smooth, i.e. in the C^∞ category, and we will often indicate the various Riemannian metrics by the same symbol ⟨,⟩. All surfaces are assumed to be connected and oriented.
A harmonic map φ:(M^m,g)→(N^n,h) between two Riemannian manifolds is a critical point of the energy functional
E:C^∞(M,N)→ℝ, E(φ)=1/2∫_M|dφ|^2 v_g,
and it is characterized by the vanishing of its tension field
τ(φ)=trace_g ∇ dφ.
The idea of the stress-energy tensor associated to a functional comes from D. Hilbert (<cit.>). Given a functional E, one can associate to it a symmetric 2-covariant tensor field S such that S=0 at the critical points of E. When E is the energy functional, P. Baird and J. Eells (<cit.>), and A. Sanini (<cit.>), defined the tensor field
S=e(φ)g-φ^∗ h=1/2|dφ|^2g-φ^∗ h,
and proved that
div S=-⟨τ(φ),dφ⟩.
Thus, S can be chosen as the stress-energy tensor of the energy functional. It is worth mentioning that S has a variational meaning. Indeed, we can fix a map φ:M^m→(N^n,h) and think E as being defined on the set of all Riemannian metrics on M. The critical points of this new functional are Riemannian metrics determined by the vanishing of their stress-energy tensor S.
More precisely, we assume that M is compact and denote
𝒢={g : g is a Riemannian metric on M}.
For a deformation {g_t} of g we consider ω=.d/dt|_t=0g_t∈ T_g𝒢=C(⊙^2T^∗ M). We define the new functional
ℱ:𝒢→ℝ, ℱ(g)=E(φ)
and we have the following result.
Let φ:M^m→(N^n,h) and assume that M is compact. Then
.d/dt|_t=0ℱ(g_t)=1/2∫_M ⟨ω,e(φ)g-φ^∗ h⟩ v_g.
Therefore g is a critical point of ℱ if and only if its stress-energy tensor S vanishes.
We mention here that, if φ:(M^m,g)→(N^n,h) is an arbitrary isometric immersion, then S=0.
A natural generalization of harmonic maps is given by biharmonic maps. A biharmonic map φ:(M^m,g)→(N^n,h) between two Riemannian manifolds is a critical point of the bienergy functional
E_2:C^∞(M,N)→ℝ, E_2(φ)=1/2∫_M|τ(φ)|^2 v_g,
and it is characterized by the vanishing of its bitension field
τ_2(φ)=-Δ^φτ(φ)-trace_g R^N(dφ,τ(φ))dφ,
where
Δ^φ=-trace_g(∇^φ∇^φ-∇^φ_∇)
is the rough Laplacian of φ^-1TN and the curvature tensor field is
R^N(X,Y)Z=∇^N_X∇^N_Y Z-∇^N_Y∇^N_X Z-∇^N_[X,Y] Z, ∀ X,Y,Z∈ C(TM).
We remark that the biharmonic equation τ_2(φ)=0 is a fourth-order non-linear elliptic equation and that any harmonic map is biharmonic. A non-harmonic biharmonic map is called proper biharmonic.
In <cit.>, G. Y. Jiang defined the stress-energy tensor S_2 of the bienergy (also called stress-bienergy tensor) by
S_2(X,Y) = 1/2|τ(φ)|^2⟨ X,Y⟩ +⟨ dφ,∇τ(φ)⟩⟨ X,Y⟩
- ⟨ dφ(X),∇_Y τ(φ)⟩ - ⟨ dφ(Y),∇_X τ(φ)⟩,
as it satisfies
div S_2=-⟨τ_2(φ),dφ⟩.
The tensor field S_2 has a variational meaning, as in the harmonic case. We fix a map φ:M^m→(N^n,h) and define a new functional
ℱ_2:𝒢→ℝ, ℱ_2(g)=E_2(φ).
Then we have the following result.
Let φ:M^m→(N^n,h) and assume that M is compact. Then
.d/dt|_t=0ℱ_2(g_t)=-1/2∫_M ⟨ω,S_2⟩ v_g,
so g is a critical point of ℱ_2 if and only if S_2=0.
We mention that, if φ:(M^m,g)→(N^n,h) is an isometric immersion then S_2 does not necessarily vanish.
A submanifold of a given Riemannian manifold (N^n,h) is a pair (M^m,φ), where M^m is a manifold and φ:M→ N is an immersion. We always consider on M the induced metric g=φ^∗ h, thus φ:(M,g)→ (N,h) is an isometric immersion; for simplicity we will write φ:M→ N without mentioning the metrics. Also, we will write φ:M → N, or even M, instead of (M,φ).
A submanifold φ:M^m→ N^n is called biharmonic if the isometric immersion φ is a biharmonic map from (M^m,g) to (N^n,h).
Even if the notion of biharmonicity may be more appropriate for maps than for submanifolds, as the domain and the codomain metrics are fixed and the variation is made only through the maps, the biharmonic submanifolds proved to be an interesting notion (see, for example, <cit.>).
In order to fix the notations, we recall here only the fundamental equations of first order of a submanifold in a Riemannian manifold. These equations define the second fundamental form, the shape operator and the connection in the normal bundle. Let φ:M^m→ N^n be an isometric immersion. For each p∈ M, T_φ(p)N splits as an orthogonal direct sum
T_φ(p)N=dφ(T_pM)⊕ dφ(T_pM)^⊥,
and NM=⋃_p∈ Mdφ(T_pM)^⊥ is referred to as the normal bundle of φ, or of M, in N.
Denote by ∇ and ∇^N the Levi-Civita connections on M and N, respectively, and by ∇^φ the induced connection in the pull-back bundle
φ^-1(TN)=⋃_p∈ MT_φ(p)N.
Taking into account the decomposition in (<ref>),
one has
∇^φ_X dφ(Y)=dφ(∇_X Y)+B(X,Y), ∀ X, Y∈ C(TM),
where B∈ C(⊙^2 T^∗ M⊗ NM) is called the second fundamental form of M in N. Here T^∗ M denotes the cotangent bundle of M. The mean curvature vector field of M in N is
defined by H=(trace B)/m∈ C(NM), where the trace is considered with respect to the metric g.
Furthermore, if η∈ C(NM), then
∇^φ_X η=-dφ(A_η(X))+∇^⊥_Xη, ∀ X∈ C(TM),
where A_η∈ C(T^∗ M⊗ TM) is called the shape operator of M in N in the direction of η, and ∇^⊥ is the induced connection in the
normal bundle. Moreover, ⟨ B(X,Y),η⟩=⟨ A_η(X),Y⟩, for all X, Y∈ C(TM), η∈ C(NM). In the case of hypersurfaces, we denote f=trace A, where A=A_η and η is the unit normal vector field, and we have H=(f/m)η; f is the (m times) mean curvature function.
A submanifold M of N is called PMC if H is parallel in the normal bundle, and CMC if |H| is constant.
When confusion is unlikely we identify, locally, M with its image through φ, X with dφ(X) and ∇^φ_X dφ(Y) with ∇^N_X Y. With these identifications in mind, we write
∇^N_X Y=∇_X Y+B(X,Y),
and
∇^N_X η=-A_η(X)+∇^⊥_Xη.
If div S_2=0 for a submanifold M in N, then M is called biconservative. Thus, M is biconservative if and only if the tangent part of its bitension field vanishes.
We have the following characterization theorem of biharmonic submanifolds, obtained by splitting the bitension field in the tangent and normal part.
A submanifold M^m of a Riemannian manifold N^n is biharmonic if and only if
trace A_∇^⊥_· H(·)+trace ∇ A_H +trace( R^N(·,H)·)^T=0
and
Δ^⊥ H+trace B(·,A_H(·)) +trace(R^N(·,H)·)^⊥=0,
where Δ^⊥ is the Laplacian in the normal bundle.
Various forms of the above result were obtained in <cit.>. From here we deduce some characterization formulas for the biconservativity.
Let M^m be a submanifold of a Riemannian manifold N^n. Then M is a biconservative submanifold if and only if:
* trace A_∇^⊥_· H(·)+trace ∇ A_H +trace( R^N(·,H)·)^T=0;
* (m/2) grad(|H|^2)+2 trace A_∇^⊥_· H(·) + 2 trace( R^N(·,H)·)^T=0;
* 2 trace ∇ A_H-(m/2) grad(|H|^2)=0.
The following properties are immediate.
Let M^m be a submanifold of a Riemannian manifold N^n. If ∇ A_H=0 then M is biconservative.
Let M^m be a submanifold of a Riemannian manifold N^n. Assume that N is a space form, i.e., it has constant sectional curvature, and M is PMC. Then M is biconservative.
Let M^m be a submanifold of a Riemannian manifold N^n. Assume that M is pseudo-umbilical, i.e., A_H=|H|^2I, and m≠4. Then M is CMC.
If we consider the particular case of hypersurfaces, then Theorem <ref> becomes
If M^m is a hypersurface in a Riemannian manifold N^m+1, then M is biharmonic if and only if
2A(grad f)+f grad f-2f(Ric^N(η))^T=0,
and
Δ f+f|A|^2-f Ric^N(η,η)=0,
where η is the unit normal vector field of M in N.
A hypersurface M^m in a space form N^m+1(c) is biconservative if and only if
A(grad f)=-(f/2) grad f.
Any CMC hypersurface in N^m+1(c) is biconservative.
Therefore, the biconservative hypersurfaces may be seen as the next research topic after that of CMC surfaces.
§ INTRINSIC CHARACTERIZATION OF BICONSERVATIVE SURFACES
We are interested to study biconservative surfaces which are non-CMC. We will first look at them from a local, extrinsic point of view and then from a global point of view. While by “local” we will mean the biconservative surfaces φ:M^2→ N^3(c) with f>0 and grad f≠ 0 at any point of M, by “global” we will mean the complete biconservative surfaces φ:M^2→ N^3(c) with f>0 at any point of M and grad f≠ 0 at any point of an open and dense subset of M.
In this section, we consider the local problem, i.e., we take φ:M^2→ N^3(c) a biconservative surface and assume that f>0 and grad f≠ 0 at any point of M. Let X_1=(grad f)/|grad f| and X_2 two vector fields such that {X_1(p),X_2(p)} is a positively oriented orthonormal basis at any point p∈ M. In particular, we obtain that M is parallelizable. If we denote by λ_1≤λ_2 the eigenvalue functions of the shape operator A, since A(X_1)=-(f/2)X_1 and trace A=f, we get λ_1=-f/2 and λ_2=3f/2. Thus the matrix of A with respect to the (global) orthonormal frame field {X_1,X_2} is
A=(
[ -f/2 0; ; 0 3f/2 ]).
We denote by K the Gaussian curvature and, from the Gauss equation, K=c+det A, we obtain
f^2=4/3(c-K).
Thus c-K>0 on M.
From the definitions of X_1 and X_2, we find that
grad f=(X_1 f)X_1 and X_2 f=0.
Using the connection 1-forms, the Codazzi equation and then the extrinsic and intrinsic expression for the Gaussian curvature, we obtain the next result which shows that the mean curvature function of a non-CMC biconservative surface must satisfy a second-order partial differential equation. More precisely, we have the following theorem.
Let φ:M^2→ N^3(c) a biconservative surface with f>0 and f≠ 0 at any point of M. Then we have
fΔ f+|grad f|^2+(4/3)c f^2-f^4=0,
where Δ is the Laplace-Beltrami operator on M.
In fact, we can see that around any point of M there exist local coordinates (U;u,v) such that f=f(u,v)=f(u), and (<ref>) is equivalent to
ff^''-7/4(f^')^2-4/3cf^2+f^4=0,
i.e., f must satisfy a second-order ordinary differential equation.
Indeed, let p_0∈ M be an arbitrary fixed point of M and let γ=γ(u) be an integral curve of X_1 with γ(0)=p_0. Let ϕ the flow of X_2 and (U;u,v) local coordinates with p_0∈ U such that
X(u,v)=ϕ_γ(u)(v)=ϕ(γ(u),v).
We have
X_u(u,0)=γ^'(u)=X_1(γ(u))=X_1(u,0)
and
X_v(u,v)=ϕ^'_γ(u)(v)=X_2(ϕ_γ(u)(v))=X_2(u,v).
If we write the Riemannian metric g on M in local coordinates as
g=g_11du^2+2g_12du dv+g_22dv^2,
we get g_22=|X_v|^2=|X_2|^2=1, and X_1 can be expressed with respect to X_u and X_v as
X_1=1/σ(X_u-g_12X_v)=σ u,
where σ=√(g_11-g_12^2)>0, σ=σ(u,v).
Let f∘ X=f(u,v). Since X_2f=0, we find that
f(u,v)=f(u,0)=f(u), ∀ (u,v)∈ U.
It can be proved that
[X_1,X_2]=3(X_1 f)/4fX_2,
and thus X_2 X_1 f=X_1 X_2 f-[X_1,X_2]f=0.
On the other hand we have
[ X_2 X_1 f = X_v(1/σf^')=X_v(1/σ)f^'; = 0 ].
We recall that
grad f=(X_1 f)X_1=((1/σ)f^')X_1≠ 0
at any point of U, and then f^'≠ 0 at any point of U. Therefore, from (<ref>), X_v(1/σ)=0, i.e., σ=σ(u). Since g_11(u,0)=1, and g_12(u,0)=0, we have σ=1, i.e.,
X_1=X_u-g_12X_v= u.
In <cit.> it was found an equivalent expression for (<ref>), i.e.,
(X_1X_1f)f=7/4(X_1 f)^2+4c/3f^2-f^4.
Therefore, using (<ref>), relation (<ref>) is equivalent to (<ref>).
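This ODE is also convenient for numerical exploration. A minimal Python sketch follows; the value c=1 and the initial data are assumptions, chosen so that f>0 and f'>0 along the short arc considered:

import numpy as np
from scipy.integrate import solve_ivp

# Integrate f f'' = (7/4) f'^2 + (4/3) c f^2 - f^4 from assumed initial data.
c = 1.0

def rhs(u, y):
    f, df = y
    return [df, (1.75 * df**2 + (4.0/3.0) * c * f**2 - f**4) / f]

sol = solve_ivp(rhs, (0.0, 0.2), [1.5, 1.0], rtol=1e-10, max_step=1e-3)
print(sol.y[0, -1], sol.y[1, -1])   # f > 0 and f' > 0 persist on this short arc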
If φ:M^2→ N^3(c) is a non-CMC biharmonic surface, then there exists an open subset U such that f>0, grad f≠ 0 at any point of U, and f satisfies the following system
{[ Δ f=f(2c-|A|^2); ; A(grad f)=-(f/2) grad f ]. .
As we have seen, this system implies
{[ Δ f=f(2c-|A|^2); ; fΔ f+|grad f|^2+(4/3)c f^2-f^4=0 ]. .
which, in fact, is a ODE system. We get
{[ ff^''-3/4(f^')^2+2cf^2-5/2f^4=0; ; ff^''-7/4(f^')^2-4/3cf^2+f^4=0 ]. .
As an immediate consequence we obtain
(f^')^2+10/3cf^2-7/2f^4=0,
and combining it with the first integral
(f^')^2=2f^4-8cf^2+α f^3/2
of the first equation from (<ref>), where α∈ℝ is a constant, we obtain
(3/2)f^(5/2)+(14/3)c f^(1/2)-α=0.
If we denote f̃=f^(1/2), we get (3/2)f̃^5+(14/3) cf̃-α=0. Thus, f̃ satisfies a polynomial equation with constant coefficients, so f̃ has to be a constant and then f is a constant, i.e., grad f=0 on U (in fact, f has to be zero). Therefore, we have a contradiction (see <cit.> for c=0 and <cit.> for c=± 1).
We can also note that relation (<ref>), which is an extrinsic relation, together with (<ref>), allows us to find an intrinsic relation that (M,g) must satisfy. More precisely, the Gaussian curvature of M has to satisfy
(c-K)Δ K-|grad K|^2-(8/3)K(c-K)^2=0,
and the conditions c-K>0 and grad K≠0.
Formula (<ref>) is very similar to the Ricci condition. Further, we will briefly recall the Ricci problem. Given an abstract surface (M^2,g), we want to find the conditions that have to be satisfied by M such that, locally, it admits a minimal embedding in N^3(c). It was proved (see <cit.>) that if (M^2,g) is an abstract surface such that c-K>0 at any point of M, where c∈ℝ is a constant, then, locally, it admits a minimal embedding in N^3(c) if and only if
(c-K)Δ K-|grad K|^2-4K(c-K)^2=0.
Condition (<ref>) is called the Ricci condition with respect to c, or simply the Ricci condition. If (<ref>) holds, then, locally, M admits a one-parameter family of minimal embeddings in N^3(c).
We can see that relations (<ref>) and (<ref>) are very similar and, in <cit.>, the authors studied the link between them. Thus, for c=0, it was proved that if we consider a surface (M^2,g) which satisfies (<ref>) and K<0, then there exists a very simple conformal transformation of the metric g such that (M^2,√(-K)g) satisfies (<ref>). A similar result was also proved for c≠ 0, but in this case the conformal factor has a complicated expression (and it is not enough to impose that (M^2,g) satisfies (<ref>); we need the stronger hypothesis that it admits a non-CMC biconservative immersion in N^3(c)).
Unfortunately, condition (<ref>) does not imply, locally, the existence of a biconservative immersion in N^3(c), as in the minimal case. We need a stronger condition. It was obtained the following local, intrinsic characterization theorem.
Let (M^2,g) be an abstract surface and c∈ℝ a constant. Then, locally, M can be isometrically embedded in a space form N^3(c) as a biconservative surface with positive mean curvature having the gradient different from zero at any point if and only if the Gaussian curvature K satisfies c-K(p)>0, (grad K)(p)≠ 0, for any point p∈ M, and its level curves are circles in M with constant curvature
κ=3|grad K|/(8(c-K)).
If the surface M in Theorem <ref> is simply connected, then the theorem holds globally, but, in this case, instead of a local isometric embedding we have a global isometric immersion.
We remark that unlike in the minimal immersions case, if M satisfies the hypotheses from Theorem <ref>, then there exists a unique biconservative immersion in N^3(c) (up to an isometry of N^3(c)), and not a one-parameter family.
The characterization theorem can be equivalently rewritten as below.
Let (M^2,g) be an abstract surface with Gaussian curvature K satisfying c-K(p)>0 and (grad K)(p)≠ 0 at any point p∈ M, where c∈ℝ is a constant. Let X_1=(grad K)/|grad K| and X_2∈ C(TM) be two vector fields on M such that {X_1(p),X_2(p)} is a positively oriented basis at any point p∈ M. Then, the following conditions are equivalent:
(a) the level curves of K are circles in M with constant curvature
κ=3|grad K|/(8(c-K))=3X_1K/(8(c-K));
(b)
X_2(X_1K)=0 and ∇_X_2X_2=-(3X_1K/(8(c-K)))X_1;
(c) locally, the metric g can be written as g=(c-K)^-3/4(du^2+dv^2), where (u,v) are local coordinates positively oriented, K=K(u), and K^'>0;
(d) locally, the metric g can be written as g=e^2φ(du^2+dv^2), where (u,v) are local coordinates positively oriented, and φ=φ(u) satisfies the equation
φ^''=e^-2φ/3-ce^2φ
and the condition φ^'>0; moreover, the solutions of the above equation, u=u(φ), are
u=∫_φ_0^φ dτ/√(-3e^(-2τ/3)-ce^(2τ)+a)+u_0,
where φ is in some open interval I and a,u_0∈ℝ are constants;
(e) locally, the metric g can be written as g=e^2φ(du^2+dv^2), where (u,v) are local coordinates positively oriented, and φ=φ(u) satisfies the equation
3φ^'''+2φ^'φ^''+8ce^2φφ^'=0
and the conditions φ^'>0 and c+e^-2φφ^''>0; moreover, the solutions of the above equation, u=u(φ), are
u=∫_φ_0^φ dτ/√(-3be^(-2τ/3)-ce^(2τ)+a)+u_0,
where φ is in some open interval I and a,b,u_0∈ℝ are constants, b>0.
The proof follows by direct computations and by using Remark 4.3 in <cit.> and Proposition 3.4 in <cit.>.
From the above theorem we have the following remarks.
(i) If condition (a) is satisfied, i.e., the integral curves of X_2 are circles in M with a precise constant curvature, then the integral curves of X_1 are geodesics of M.
(ii) If condition (c) is satisfied, then K has to be a solution of the equation
3K^''(c-K)+3(K^')^2+8K(c-K)^5/4=0.
(iii) If condition (c) is satisfied and c>0, then (M^2, (c-K)^3/4g) is a flat surface and, trivially, a Ricci surface with respect to c.
(iv) Let φ=φ(u) be a solution of equation (<ref>). We consider the change of coordinates
(u,v)=(αũ+β,αṽ+β),
where α∈ℝ is a positive constant and β∈ℝ, and define
ϕ=φ(αũ+β)+logα.
Then g=e^2ϕ(dũ^2+dṽ^2) and ϕ also satisfies equation (<ref>). If φ=φ(u) satisfies the first integral
φ^''=be^-2φ/3-ce^2φ,
where b>0, then, for α=b^-3/8, ϕ=ϕ(ũ) satisfies
ϕ^''=e^-2ϕ/3-ce^2ϕ.
From here, as the classification is done up to isometries, we note that the parameter b in the solution of (<ref>) is not essential and only the parameter a counts. Thus we have a one-parameter family of solutions.
(v) If φ is a solution of (<ref>), for some c, then φ+α, where α is a real constant, is a solution of (<ref>) for ce^2α.
(vi) If c=0, we note that if φ is a solution of (<ref>), then φ plus any constant is also a solution of the same equation, i.e., condition (a) from Theorem <ref> is invariant under the homothetic transformations of the metric g. Then, we see that equation (<ref>) is invariant under the affine change of parameter u=αũ+β, where α>0. Therefore, we must solve equation (<ref>) up to this change of parameter and an additive constant of the solution φ. The additive constant will be the parameter that counts.
In the c=0 case, the solutions of equation (<ref>), are explicitly determined in the next proposition.
The solutions of the equation
3φ^'''+2φ^'φ^''=0
which satisfy the conditions φ^'>0 and φ^''>0, up to affine transformations of the parameter with α>0, are given by
φ(u)=3log(cosh u)+const, u>0.
We note that, when c=0, we have a one-parameter family of solutions of equation (<ref>), i.e., g_C_0=C_0(cosh u)^6(du^2+dv^2), C_0 being a positive constant.
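Both claims, that φ(u)=3 log(cosh u) solves the c=0 equation and that the curvature of g_C_0 is the one computed later in Theorem <ref>, can be checked symbolically; a short Python sketch using sympy:

import sympy as sp

# Check that phi = 3 log(cosh u) solves 3 phi''' + 2 phi' phi'' = 0 (c = 0),
# and compute the Gaussian curvature of g = C0 cosh(u)^6 (du^2 + dv^2).
u, C0 = sp.symbols('u C0', positive=True)
phi = 3 * sp.log(sp.cosh(u))
print(sp.simplify(3*sp.diff(phi, u, 3) + 2*sp.diff(phi, u)*sp.diff(phi, u, 2)))  # 0

# For a conformal metric e^{2w}(du^2 + dv^2) with w = w(u): K = -e^{-2w} w''.
w = phi + sp.log(C0) / 2
print(sp.simplify(-sp.exp(-2*w) * sp.diff(w, u, 2)))  # expected: -3/(C0*cosh(u)**8)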
If c≠0, then we can not determine explicitly φ=φ(u). Another way to see that in the c≠ 0 case we have only a one-parameter family of solutions of equation (<ref>) is to rewrite the metric g in certain non-isothermal coordinates.
Further, we will consider only the c=1 case and we have the next result.
Let (M^2,g) be an abstract surface with g=e^2φ(u)(du^2+dv^2), where u=u(φ) satisfies
u=_φ_0^φdτ/√(-3be^-2τ/3-e^2τ+a)+u_0,
where φ is in some open interval I, a,b∈ℝ are positive constants, and u_0∈ℝ is a constant. Then (M^2,g) is isometric to
(D_C_1,g_C_1=3/ξ^2(-ξ^8/3+3C_1ξ^2-3)dξ^2 +1/ξ^2dθ^2),
where D_C_1=(ξ_01,ξ_02)×ℝ, C_1∈(4/(3^3/2),∞) is a positive constant, and ξ_01 and ξ_02 are the positive vanishing points of -ξ^8/3+3C_1ξ^2-3, with 0<ξ_01<ξ_02.
Let us consider
(D_C_1,g_C_1=3/ξ^2(-ξ^8/3+3C_1ξ^2-3)dξ^2+ 1/ξ^2dθ^2)
and
(D_C_1^',g_C_1^'=3/ξ̃^2(-ξ̃^8/3+3C_1^'ξ̃^2-3)dξ̃^2+1/ξ̃^2dθ̃^2).
The surfaces (D_C_1,g_C_1) and (D_C_1^',g_C_1^') are isometric if and only if C_1=C_1^', and the isometry is Θ(ξ,θ)=(ξ,±θ+const). Therefore, we have a one-parameter family of surfaces.
We note that the expression of the Gaussian curvature of (D_C_1,g_C_1) does not depend on C_1. More precisely,
K_C_1(ξ,θ)=-1/9ξ^8/3+1.
But, if we change further the coordinates (ξ,θ)=(ξ_01+ξ̃(ξ_02-ξ_01),θ̃), then we “fix” the domain, i.e., (D_C_1,g_C_1) is isometric to ((0,1)×ℝ,g̃_C_1) and C_1 appears in the expression of K_C_1(ξ̃,θ̃).
§ COMPLETE BICONSERVATIVE SURFACES IN ℝ^3
In this section we consider the global problem and construct complete biconservative surfaces in ℝ^3 with f>0 everywhere and grad f≠ 0 at any point of an open dense subset. Or, from an intrinsic point of view, we construct a complete abstract surface (M^2,g) with K<0 everywhere and grad K≠ 0 at any point of an open dense subset of M, that admits a biconservative immersion in ℝ^3, defined on the whole M, with f>0 on M and |grad f|>0 on the open dense subset.
First, we recall a local extrinsic result which provides a characterization of biconservative surfaces in ℝ^3.
Let M^2 be a surface in ℝ^3 with f(p)>0 and (grad f)(p)≠0 for any p∈ M. Then, M is biconservative if and only if, locally, it is a surface of revolution, and the curvature κ=κ(u) of the profile curve σ=σ(u), |σ^'(u)|=1, is a positive solution of the following ODE
κ^''κ=7/4(κ^')^2-4κ^4.
In <cit.> there was found the local explicit parametric equation of a biconservative surface in ℝ^3.
Let M^2 be a biconservative surface in ℝ^3 with f(p)>0 and (grad f)(p)≠0 for any p∈ M. Then, locally, the surface can be parametrized by
X_C̃_0(ρ,v)=(ρcos v,ρsin v, u_C̃_0(ρ)),
where
u_C̃_0(ρ)=3/2C̃_0(ρ^1/3√(C̃_0ρ^2/3-1)+1/√(C̃_0)log(√(C̃_0)ρ^1/3+√(C̃_0ρ^2/3-1)))
with C̃_0 a positive constant and ρ∈(C̃_0^-3/2,∞).
We denote by S_C̃_0 the image X_C̃_0((C̃_0^-3/2,∞)×ℝ). We note that any two such surfaces are not locally isometric, so we have a one-parameter family of biconservative surfaces in ℝ^3. These surfaces are not complete.
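The profile curve (ρ, u_C̃_0(ρ)) above is explicit and can be tabulated directly; a minimal Python sketch (the value C̃_0 = 1 is an assumed sample value):

import numpy as np

# Profile of the rotational biconservative piece in R^3 (C0 assumed = 1).
C0 = 1.0

def u_profile(rho):                      # valid for rho in (C0^{-3/2}, infinity)
    s = np.sqrt(C0 * rho**(2.0/3.0) - 1.0)
    return 1.5 / C0 * (rho**(1.0/3.0) * s
                       + np.log(np.sqrt(C0) * rho**(1.0/3.0) + s) / np.sqrt(C0))

print(u_profile(np.array([1.1, 2.0, 10.0])))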
If φ:M^2→ℝ^3 is a biconservative surface with f>0 and grad f≠ 0 at any point, then there exists a unique C̃_0 such that φ(M)⊂ S_C̃_0. Indeed, any point admits an open neighborhood which is an open subset of some S_C̃_0. Let us consider p_0∈ M. Then, there exists a unique C̃_0 such that φ(U)⊂ S_C̃_0, where U is an open neighborhood of p_0. If A denotes the set of all points of M such that they admit open neighborhoods which are open subsets of that S_C̃_0, then the set A is non-empty, open and closed in M. Thus, as M is connected, it follows that A=M.
The “boundary” of S_C̃_0, i.e., S_C̃_0∖ S_C̃_0, is the circle (C̃_0^(-3/2)cos v,C̃_0^(-3/2)sin v,0), which lies in the Oxy plane. At a boundary point, the tangent plane to the closure S_C̃_0 of S_C̃_0 is parallel to Oz. Moreover, along the boundary, the mean curvature function is constant, f_C̃_0=(2C̃_0^(3/2))/3, and grad f_C̃_0=0.
Thus, in order to obtain a complete biconservative surface in ℝ^3, we can expect to “glue” along the boundary two biconservative surfaces of type S_C̃_0 corresponding to the same C̃_0 (the two constants have to be the same) and symmetric to each other, at the level of C^∞ smoothness.
In fact, it was proved that we can glue two biconservative surfaces S_C̃_0 and S_C̃_0^', at the level of C^∞ smoothness, only along the boundary and, in this case, C̃_0=C̃_0^'.
If we consider the symmetry of u_C̃_0 with respect to the Oρ(=Ox) axis, we get a smooth, complete, biconservative surface S̃_C̃_0 in ℝ^3. Moreover, its mean curvature function f̃_C̃_0 is positive and grad f̃_C̃_0 is different from zero at any point of an open dense subset of S̃_C̃_0.
The profile curve σ_C̃_0=(ρ,0,u_C̃_0(ρ))≡(ρ,u_C̃_0(ρ)) can be reparametrized as
[ σ_C̃_0(θ)= (σ_C̃_0^1(θ), σ_C̃_0^2(θ)); ; = C̃_0^-3/2((θ+1)^3/2,3/2(√(θ^2+θ) + log(√(θ)+√(θ+1)))), θ>0, ]
and now X_C̃_0=X_C̃_0(θ,v).
The homothety of ℝ^3, (x,y,z)→C̃_0(x,y,z), renders S̃_1 onto S̃_C̃_0^-2/3.
In <cit.>, the complete biconservative surfaces in ℝ^3 with f>0 at any point and grad f≠ 0 at any point of an open dense subset were also found, but there the idea was to use the intrinsic characterization of the biconservative surfaces. More precisely, we have the next global result.
Let (ℝ^2,g_C_0=C_0 (cosh u)^6(du^2+dv^2)) be a surface, where C_0∈ℝ is a positive constant. Then we have:
(a) the metric on ℝ^2 is complete;
(b) the Gaussian curvature is given by
K_C_0(u,v)=K_C_0(u)=-3/C_0(cosh u)^8<0, K^'_C_0(u)=24 sinh u/C_0(cosh u)^9,
and therefore grad K_C_0≠ 0 at any point of ℝ^2∖ Ov;
(c) the immersion φ_C_0:(ℝ^2,g_C_0)→ℝ^3 given by
φ_C_0(u,v)=(σ_C_0^1(u)cos (3v), σ_C_0^1(u)sin (3v), σ_C_0^2(u))
is biconservative in ℝ^3, where
σ_C_0^1(u)=√(C_0)/3(cosh u)^3, σ_C_0^2(u)=√(C_0)/2(1/2sinh (2u)+u), u ∈ℝ.
The first two items follow by standard arguments. For the last part, we note that choosing C̃_0=(9/C_0)^1/3 in (<ref>) and using the change of coordinates (θ,v)=((sinh u)^2,3v), where u>0, the metric induced by X_(9/C_0)^1/3 coincides with g_C_0. Then, we define φ_C_0 as: for u>0, φ_C_0(u,v) is obtained by rotating the profile curve
σ^+_(9/C_0)^1/3(u)=σ_(9/C_0)^1/3(u)=(σ_(9/C_0)^1/3^1(u), σ_(9/C_0)^1/3^2(u)),
and for u<0, φ_C_0(u,v) is obtained by rotating the profile curve
σ^-_(9/C_0)^1/3(u)=(σ_(9/C_0)^1/3^1(-u), -σ_(9/C_0)^1/3^2(-u)).
By simple transformations of the metric, (ℝ^2,g_C_0) becomes a Ricci surface or a surface with constant Gaussian curvature.
Consider the surface (ℝ^2,g_C_0). Then (ℝ^2,√(-K_C_0)g_C_0) is complete, satisfies the Ricci condition and can be minimally immersed in ℝ^3 as a helicoid or a catenoid.
Consider the surface (ℝ^2,g_C_0). Then (ℝ^2,-K_C_0g_C_0) has constant Gaussian curvature 1/3 and it is not complete. Moreover, (ℝ^2,-K_C_0g_C_0) is the universal cover of the surface of revolution in ℝ^3 given by
Z(u,v)=(α(u)cos(√(3)v/a),α(u)sin(√(3)v/a),β(u)), (u,v)∈ℝ^2,
where a∈ (0,√(3)] and
α(u)=a/cosh u, β(u)=∫_0^u √((3-a^2)cosh^2τ+a^2)/cosh^2τ dτ.
When a=√(3), the immersion Z has only umbilical points and the image Z(ℝ^2) is the round sphere of radius √(3), without the North and the South poles. Moreover, if a∈ (0,√(3)), then Z has no umbilical points.
Concerning the biharmonic surfaces in ℝ^3 we have the following non-existence result.
There exists no proper biharmonic surface in ℝ^3.
§ COMPLETE BICONSERVATIVE SURFACES IN 𝕊^3
As in the previous section, we consider the global problem for biconservative surfaces in 𝕊^3, i.e., our aim is to construct complete biconservative surfaces in 𝕊^3 with f>0 everywhere and grad f≠ 0 at any point of an open and dense subset.
We start with the following local, extrinsic result.
Let M^2 be a biconservative surface in 𝕊^3 with f(p)>0 and (grad f)(p)≠ 0 at any point p∈ M. Then, locally, the surface, viewed in ℝ^4, can be parametrized by
Y_C̃_1(u,v)=σ(u)+4κ(u)^-3/4/3√(C̃_1)( f_1 (cos v -1)+f_2 sin v),
where C̃_1∈(64/(3^5/4),∞) is a positive constant; f_1, f_2∈ℝ^4 are two constant orthonormal vectors; σ(u) is a curve parametrized by arclength that satisfies
⟨σ(u),f_1⟩ = 4κ(u)^-3/4/3√(C̃_1), ⟨σ(u),f_2⟩=0,
and, as a curve in 𝕊^2, its curvature κ=κ(u) is a positive non constant solution of the following ODE
κ^''κ=7/4(κ^')^2+4/3κ^2-4κ^4
such that
(κ^')^2=-16/9κ^2-16κ^4+C̃_1κ^7/2.
The constant C̃_1 determines uniquely the curvature κ, up to a translation of u, and then κ, f_1 and f_2 determine uniquely the curve σ.
We consider f_1=e_3 and f_2=e_4 and change the coordinates (u,v) in (κ,v). Then, we get
[ Y_C̃_1(κ,v)= (√(1-(4/3√(C̃_1)κ^-3/4)^2)cosμ(κ), √(1-(4/3√(C̃_1)κ^-3/4)^2)sinμ(κ),; 4/3√(C̃_1)κ^-3/4cos v, 4/3√(C̃_1)κ^-3/4sin v), ]
where (κ,v)∈(κ_01,κ_02)×ℝ, κ_01 and κ_02 are positive solutions of
-16/9κ^2-16κ^4+C̃_1κ^7/2=0
and
μ(κ)=± 108∫_κ_0^κ √(C̃_1)τ^(3/4)/((-16+9C̃_1τ^(3/2)) √(9C̃_1τ^(3/2)-16(1+9τ^2))) dτ +c_0,
with c_0∈ℝ a constant and κ_0∈(κ_01,κ_02). We note that an alternative expression for Y_C̃_1 was given in <cit.>.
The limits lim_κ↘κ_01μ(κ)=μ(κ_01) and lim_κ↗κ_02μ(κ)=μ(κ_02) are finite.
For simplicity, we choose κ_0=(3C̃_1/64)^2.
If we denote S_C̃_1 the image of Y_C̃_1, then we note that the boundary of S_C̃_1 is made up from two circles and along the boundary, the mean curvature function is constant (two different constants) and its gradient vanishes. More precisely, the boundary of S_C̃_1 is given by the curves
[ (√(1-(4/3√(C̃_1)κ_01^-3/4)^2)cosμ(κ_01), √(1-(4/3√(C̃_1)κ_01^-3/4)^2)sinμ(κ_01),; 4/3√(C̃_1)κ_01^-3/4cos v, 4/3√(C̃_1)κ_01^-3/4sin v ) ]
and
[ (√(1-(4/3√(C̃_1)κ_02^-3/4)^2)cosμ(κ_02), √(1-(4/3√(C̃_1)κ_02^-3/4)^2)sinμ(κ_02),; 4/3√(C̃_1)κ_02^-3/4cos v, 4/3√(C̃_1)κ_02^-3/4sin v ). ]
These curves are circles in affine planes in ℝ^4 parallel to the Ox^3x^4 plane and
their radii are (4κ_01^-3/4)/(3√(C̃_1)) and (4κ_02^-3/4)/(3√(C̃_1)), respectively.
At a boundary point, using the coordinates (μ, v), we get that the tangent plane to the closure of S_C̃_1 is spanned by a vector which is tangent to the corresponding circle and by
[ (-√(1-(4/3√(C̃_1)κ_0i^-3/4)^2)sinμ(κ_0i), √(1-(4/3√(C̃_1)κ_0i^-3/4)^2)cosμ(κ_0i),0,0 ), ]
where i=1 or i=2.
Thus, in order to construct a complete biconservative surface in 𝕊^3, we can expect to glue along the boundary two biconservative surfaces of type S_C̃_1, corresponding to the same C̃_1. In fact, if we want to glue two surfaces corresponding to C̃_1 and C̃_1^' along the boundary, then these constants have to coincide and there is no ambiguity concerning along which circle of the boundary we should glue the two pieces. But this process is not as clear as in ℝ^3 since we should repeat it infinitely many times.
Further, as in the ℝ^3 case, we change the point of view and use the intrinsic characterization of the biconservative surfaces in 𝕊^3.
The surface (D_C_1,g_C_1) defined in Section <ref> is not complete but it has the following properties.
Consider (D_C_1, g_C_1). Then, we have
(a) K_C_1(ξ,θ)=K(ξ) depends only on ξ, with
1-K(ξ)=(1/9)ξ^(8/3)>0, K^'(ξ)=-(8/27)ξ^(5/3),
and grad K≠ 0 at any point of D_C_1;
(b) the immersion ϕ_C_1:(D_C_1, g_C_1)→𝕊^3 given by
ϕ_C_1(ξ,θ)=(√(1-1/C_1ξ^2)cosζ(ξ),√(1-1/C_1ξ^2)sinζ(ξ),cos(√(C_1)θ)/√(C_1)ξ, sin(√(C_1)θ)/√(C_1)ξ),
is biconservative in 𝕊^3, where
ζ(ξ)=±∫_ξ_00^ξ √(C_1)τ^(4/3)/((-1+C_1τ^2) √(-τ^(8/3)+3C_1τ^2-3)) dτ+c_1,
with c_1∈ℝ a constant and ξ_00∈(ξ_01,ξ_02).
The first item follows by standard arguments. For the second item, we note that choosing C̃_1=3^1/4·16C_1 in (<ref>) and using the change of coordinates (κ,v)=(3^-3/2ξ^4/3,(3^-1/8√(C_1)θ)/4), the metric induced by Y_3^1/4· 16C_1 coincides with g_C_1.
Then, we define ϕ_C_1 as
ϕ_C_1(ξ,θ)=Y_3^1/4· 16C_1(3^-3/2ξ^4/3,3^-1/8√(C_1)θ/4).
The limits lim_ξ↘ξ_01ζ(ξ)=ζ(ξ_01) and lim_ξ↗ξ_02ζ(ξ)=ζ(ξ_02) are finite.
For simplicity, we choose ξ_00=(9C_1/4)^3/2.
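Numerically, the endpoints ξ_01, ξ_02 and the angular function ζ are straightforward to obtain; a Python sketch for the assumed sample value C_1 = 1 (> 4/3^(3/2)):

import numpy as np
from scipy.optimize import brentq
from scipy.integrate import quad

C1 = 1.0                                          # assumed sample value
P = lambda x: -x**(8.0/3.0) + 3.0*C1*x**2 - 3.0   # vanishes at xi_01 and xi_02
xi00 = (9.0 * C1 / 4.0) ** 1.5
xi01 = brentq(P, 1e-3, xi00)
xi02 = brentq(P, xi00, 1e3)

def zeta(xi):                                     # principal (+) branch, c_1 = 0
    integrand = lambda t: (np.sqrt(C1) * t**(4.0/3.0)
                           / ((-1.0 + C1*t**2) * np.sqrt(P(t))))
    return quad(integrand, xi00, xi)[0]

print(xi01, xi02, zeta(0.5 * (xi00 + xi02)))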
The immersion ϕ_C_1 depends on the sign ± and on the constant c_1 in the expression of ζ. As the classification is up to isometries of 𝕊^3, the sign and the constant are not important, but they will play an important role in the gluing process.
The construction of complete biconservative surfaces in 𝕊^3 consists in two steps, and the key idea is to notice that (D_C_1,g_C_1) is, locally and intrinsically, isometric to a surface of revolution in ℝ^3.
The first step is to construct a complete surface of revolution in ℝ^3 which on an open dense subset is locally isometric to (D_C_1,g_C_1). We start with the next result.
Let us consider (D_C_1,g_C_1) as above. Then (D_C_1,g_C_1) is the universal cover of the surface of revolution in ℝ^3 given by
ψ_C_1,C_1^∗(ξ,θ)=(χ(ξ)cosθ/C_1^∗, χ(ξ)sinθ/C_1^∗,ν(ξ)),
where χ(ξ)=C_1^∗/ξ,
ν(ξ)=±∫_ξ_00^ξ √((3τ^2-(C_1^∗)^2(-τ^(8/3)+3C_1τ^2-3))/(τ^4(-τ^(8/3) +3C_1τ^2-3))) dτ+c_1^∗,
C_1^∗∈(0,(C_1-4/3^3/2)^-1/2) is a positive constant and c_1^∗∈ℝ is constant.
The immersion ψ_C_1, C_1^∗ depends on the sign ± and on the constant c_1^∗ in the expression of ν. We denote by S^±_C_1,C_1^∗,c_1^∗ the image of ψ_C_1, C_1^∗.
The limits lim_ξ↘ξ_01ν(ξ)=ν(ξ_01) and lim_ξ↗ξ_02ν(ξ)=ν(ξ_02) are finite.
We note that the boundary of S^±_C_1,C_1^∗,c_1^∗ is given by the curves
(C_1^∗/ξ_01cosθ/C_1^∗,C_1^∗/ξ_01sinθ/C_1^∗, ν(ξ_01))
and
(C_1^∗/ξ_02cosθ/C_1^∗,C_1^∗/ξ_02sinθ/C_1^∗, ν(ξ_02)).
These curves are circles in affine planes in ℝ^3 parallel to the Oxy plane and their radii are C_1^∗/ξ_01 and C_1^∗/ξ_02, respectively.
At a boundary point, using the coordinates (ν, θ), we get that the tangent plane to the closure of S^±_C_1,C_1^∗,c_1^∗ is spanned by a vector which is tangent to the corresponding circle and by the vector (0,0,1). Thus, the tangent plane is parallel to the rotational axis Oz.
Geometrically, we start with a piece of type S^±_C_1,C_1^∗,c_1^∗ and, by symmetry with respect to the planes in which the boundary circles lie, we get our complete surface S̃_C_1,C_1^∗; the process is periodic and we perform it along the whole Oz axis.
Analytically, we fix C_1 and C_1^∗, and alternating the sign and with appropriate choices of the constant c_1^∗, we can construct a complete surface of revolution S̃_C_1,C_1^∗ in ℝ^3 which on an open dense subset is locally isometric to (D_C_1,g_C_1). In fact, these choices of + and -, and of the constants c_1^∗ are uniquely determined by the “first” choice of +, or of -, and of the constant c_1^∗. We start with + and c_1^∗=0.
The profile curve of S^±_C_1,C_1^∗,c_1^∗ can be seen as the graph of a function depending on ν, and this allows us to obtain a function F such that the profile curve of S̃_C_1,C_1^∗ is the graph of the function χ∘ F, depending on ν and defined on the whole Oz (or Oν) axis. The function F:ℝ→[ξ_01,ξ_02] is periodic and at least of class C^3.
The surface of revolution given by
Ψ_C_1,C^∗_1(ν,θ)=((χ∘ F)(ν)cosθ/C^∗_1, (χ∘ F)(ν)sinθ/C^∗_1,ν), (ν,θ)∈ℝ^2,
is complete and, on an open dense subset, it is locally isometric to (D_C_1, g_C_1). The induced metric is given by
g_C_1, C^∗_1(ν,θ)=3F^2(ν)/3F^2(ν)- (C^∗_1)^2(-F^8/3(ν)+3C_1F^2(ν)-3)dν^2 +1/F^2(ν)dθ^2,
(ν,θ)∈ℝ^2. Moreover, K≠ 0 at any point of that open dense subset, and 1-K>0 everywhere.
From Theorem <ref> we easily get the following result.
The universal cover of the surface of revolution given by Ψ_C_1,C^∗_1 is ℝ^2 endowed with the metric g_C_1,C^∗_1. It is complete, 1-K>0 on ℝ^2 and, on an open dense subset, it is locally isometric to (D_C_1,g_C_1) and K≠ 0 at any point. Moreover any two surfaces (ℝ^2,g_C_1,C_1^∗) and (ℝ^2,g_C_1,C_1^∗') are isometric.
The second step is to construct effectively the biconservative immersion from (ℝ^2,g_C_1,C_1^∗) in 𝕊^3, or from S̃_C_1,C_1^∗ in 𝕊^3. The geometric idea of the construction is the following: from each piece S^±_C_1,C_1^∗,c_1^∗ of S̃_C_1,C_1^∗ we “go back” to (D_C_1,g_C_1) and then, using ϕ_C_1 and a specific choice of + or - and of the constant c_1, we get our biconservative immersion Φ_C_1,C_1^∗. Again, the choices of + and -, and of the constant c_1 are uniquely determined (modulo 2π, for c_1) by the “first” choice of +, or of -, and of the constant c_1 (see <cit.> for all details).
Some numerical experiments suggest that Φ_C_1,C_1^∗ is not periodic and it has self-intersections along circles parallel to Ox^3x^4.
The projection of Φ_C_1,C_1^∗ on the Ox^1x^2 plane is a curve which lies in the annulus of radii √(1-1/(C_1ξ_01^2)) and √(1-1/(C_1ξ_02^2)). It has self-intersections and is dense in the annulus.
Concerning the biharmonic surfaces in 𝕊^3 we have the following classification result.
Let φ:M^2→𝕊^3 be a proper biharmonic surface. Then φ(M) is an open part of the small hypersphere 𝕊^2(1/√(2)).
§
In the c=0 case, the idea was to construct, by symmetry, a complete biconservative surface in ℝ^3 starting with a piece of a biconservative surface. We illustrate this in the following figure obtained for C_0=1.
[Figure: a piece of a biconservative surface (left) and the complete surface obtained from it by symmetry (right).]
In the c=1 case, the construction of a complete biconservative surface in 𝕊^3 can be summarized in the next diagram, obtained for C_1=C_1^∗=1, c_1^∗=0 and we started with + in the expression of ν.
[Diagram: (M^2,g) is isometric to (D_C_1,g_C_1); the isometry ψ_C_1,C_1^∗=ψ^±_C_1,C_1^∗,c_1^∗ maps onto S^±_C_1,C_1^∗,c_1^∗⊂ℝ^3; playing with the constant c_1^∗ and ± yields the complete surface S̃_C_1,C_1^∗⊂ℝ^3; the biconservative immersion ϕ_C_1=ϕ^±_C_1,c_1 into 𝕊^3 is obtained by playing with the constant c_1 and ±.]
The projection of Φ_1,1 on the Ox^1x^2 plane is represented in the next figure (c_1=0).
[Figure: the projection of Φ_1,1 on the Ox^1x^2 plane.]
The last two figures represent the signed curvature of the profile curve of S̃_C_1,C_1^∗ and the signed curvature of the curve obtained projecting Φ_1,1 on the Ox^1x^2 plane.
[Figures: the signed curvature κ as a function of ν, for the profile curve of S̃_C_1,C_1^∗ and for the projection of Φ_1,1 on the Ox^1x^2 plane, respectively.]
BE P. Baird, J. Eells, A conservation law for harmonic maps, Geometry Symposium Utrecht 1980, 1–25, Lecture Notes in Math. 894, Springer, Berlin-New York, 1981.
BMO13 A. Balmuş, S. Montaldo, C. Oniciuc, Biharmonic PNMC submanifolds in spheres, Ark. Mat. 51 (2013), 197–221.
CMO02 R. Caddeo, S. Montaldo, C. Oniciuc, Biharmonic submanifolds in spheres, Israel J. Math. 130 (2002), 109–123.
CMO01 R. Caddeo, S. Montaldo, C. Oniciuc, Biharmonic submanifolds of 𝕊^3, Internat. J. Math. 12 (2001), 867–876.
CMOP R. Caddeo, S. Montaldo, C. Oniciuc, P. Piu, Surfaces in three-dimensional space forms with divergence-free stress-bienergy tensor, Ann. Mat. Pura Appl. (4) 193 (2014), 529–550.
C91 B-Y. Chen, Some open problems and conjectures on submanifolds of finite type, Soochow J. Math. 17 (1991), 169–188.
C84 B-Y. Chen, Total Mean Curvature and Submanifolds of Finite Type, Series in Pure Mathematics, 1. World Scientific Publishing Co., Singapore, 1984.
CI91 B-Y. Chen, S. Ishikawa, Biharmonic surfaces in pseudo-Euclidean spaces, Mem. Fac. Sci. Kyushu Univ. Ser. A. 45 (1991), 323–347.
FNO D. Fetcu, S. Nistor, C. Oniciuc, On biconservative surfaces in 3-dimensional space forms, Comm. Anal. Geom. (5) 24 (2016), 1027–1045.
FOP D. Fetcu, C. Oniciuc, A.L. Pinheiro, CMC biconservative surfaces in 𝕊^n×ℝ and ℍ^n×ℝ, J. Math. Anal. Appl. 425 (2015), 588–609.
Fu Y. Fu, Explicit classification of biconservative surfaces in Lorentz 3-space forms, Ann. Mat. Pura Appl.(4) 194 (2015), 805–822.
FT Y. Fu, N.C. Turgay, Complete classification of biconservative hypersurfaces with diagonalizable shape operator in Minkowski 4-space, Internat. J. Math. 27 (2016), 1650041, 17 pp.
HV95 Th. Hasanis, Th. Vlachos, Hypersurfaces in E^4 with harmonic mean curvature vector field, Math. Nachr. 172 (1995), 145–169.
H D. Hilbert, Die grundlagen der physik, Math. Ann. 92 (1924), 1–32.
GYJ G. Y. Jiang, 2-harmonic maps and their first and second variational formulas, Chinese Ann. Math. Ser. A7(4) (1986), 389–402.
J G. Y. Jiang, The conservation law for 2-harmonic maps between Riemannian manifolds, Acta Math. Sinica 30 (1987), 220–225.
LMO E. Loubeau, S. Montaldo, C. Oniciuc, The stress-energy tensor for biharmonic maps, Math. Z. 259 (2008), 503–524.
MOR16-2 S. Montaldo, C. Oniciuc, A. Ratto, Biconservative surfaces, J. Geom. Anal. 26 (2016), 313–329.
MOR16 S. Montaldo, C. Oniciuc, A. Ratto, Proper biconservative immersions into the Euclidean space, Ann. Mat. Pura Appl. (4) 195 (2016), 403–422.
MM A. Moroianu, S. Moroianu, Ricci surfaces, Ann. Sc. Norm. Super. Pisa Cl. Sci.(5) XIV (2015), 1093–1118.
N S. Nistor, Complete biconservative surfaces in ℝ^3 and 𝕊^3, J. Geom. Phys. 110 (2016), 130–153.
N-B17 S. Nistor, On biconservative surfaces, work in progress.
O02 C. Oniciuc, Biharmonic maps between Riemannian manifolds, An. Stiint. Univ. Al.I. Cuza Iasi Mat (N.S.) 48 (2002), 237–248.
OH C. Oniciuc, Biharmonic submanifolds in space forms, Habilitation Thesis (2012), 149 p.
O10 Y.-L. Ou, Biharmonic hypersurfaces in Riemannian manifolds, Pacific J. Math. 248 (2010), 217–232.
R G. Ricci-Curbastro, Sulla teoria intrinseca delle superficie ed in ispecie di quelle di 2^∘ grado, Ven. Ist. Atti (7) VI (1895), 445–488.
S83 A. Sanini, Applicazioni tra varietà riemanniane con energia critica rispetto a deformazioni di metriche, Rend. Mat. 3 (1983), 53–63.
S T. Sasahara, Tangentially biharmonic Lagrangian H-umbilical submanifolds in complex space forms, Abh. Math. Semin. Univ. Hambg. 85 (2015), 107–123.
T15 N. C. Turgay, H-hypersurfaces with three distinct principal curvatures in the Euclidean spaces, Ann. Mat. Pura Appl. (4) 194 (2015), 1795–1807.
UT16 A. Upadhyay, N. C. Turgay, A classification of biconservative hypersurfaces in a pseudo-Euclidean space, J. Math. Anal. Appl. 444 (2016), 1703–1720.
|
http://arxiv.org/abs/1701.07912v1 | 20170127003953 | An extension of the Hermite-Biehler theorem with application to polynomials with one positive root | [
"Richard Ellard",
"Helena Šmigoc"
] | math.CA | [
"math.CA",
"26C10, 93D20, 15A29"
] |
An extension of the Hermite-Biehler theorem with application to polynomials with one positive root
Richard Ellard,
School of Mathematics and Statistics,
University College Dublin,
Belfield, Dublin 4, Ireland
richardellard@gmail.com
The authors' work was supported by Science Foundation Ireland under Grant 11/RFP.1/MTH/3157.
Helena Šmigoc,
School of Mathematics and Statistics,
University College Dublin,
Belfield, Dublin 4, Ireland
helena.smigoc@ucd.ie
2010 Mathematics Subject Classification: 26C10, 93D20, 15A29.
If a real polynomial f(x)=p(x^2)+xq(x^2) is Hurwitz stable (every root if f lies in the open left half-plane), then the Hermite-Biehler Theorem says that the polynomials p(-x^2) and q(-x^2) have interlacing real roots. We extend this result to general polynomials by giving a lower bound on the number of real roots of p(-x^2) and q(-x^2) and showing that these real roots interlace. This bound depends on the number of roots of f which lie in the left half plane. Another classical result in the theory of polynomials is Descartes' Rule of Signs, which bounds the number of positive roots of a polynomial in terms of the number of sign changes in its coefficients. We use our extension of the Hermite-Biehler Theorem to give an inverse rule of signs for polynomials with one positive root.
[
Helena Šmigoc
January 2017
=================
§ INTRODUCTION
Recall that a real polynomial f is called (Hurwitz) stable if every root of f lies in the open left half-plane. Determining the stability of real polynomials is of fundamental importance in the study of dynamical systems and as such, several equivalent characterisations have been given. One such characterisation is the Hermite-Biehler Theorem <cit.>, a proof of which can also be found in <cit.>. The Hermite-Biehler Theorem has been instrumental in the study of the “robust parametric stability problem”, that is, the problem of guaranteeing that stability is preserved by real coefficient perturbations (see <cit.>).
Let
f(x):=a_0x^n+a_1x^n-1+⋯+a_n
be a real polynomial and write f(x)=p(x^2)+xq(x^2), where p(x^2) and xq(x^2) are the components of f(x) made up by the even and odd powers of x, respectively. Let x_e1,x_e2,… denote the distinct nonnegative real roots of p(-x^2) and let x_o1,x_o2,… denote the distinct nonnegative real roots of q(-x^2), where both sequences are arranged in ascending order. Then f is stable if and only if the following conditions hold:
all of the roots of p(-x^2) and q(-x^2) are real and distinct;
a_0 and a_1 have the same sign;
0<x_e1<x_o1<x_e2<x_o2<⋯.
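As a quick illustration, take the stable polynomial f(x)=(x+1)(x^2+x+1)=x^3+2x^2+2x+1, whose roots -1 and (-1± i√(3))/2 lie in the open left half-plane. Here p(y)=2y+1 and q(y)=y+2, so p(-x^2)=1-2x^2 and q(-x^2)=2-x^2, with nonnegative real roots x_e1=1/√(2) and x_o1=√(2); indeed a_0=1 and a_1=2 have the same sign and 0<x_e1<x_o1.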
The Hermite-Biehler theorem says that, if f(x)=p(x^2)+xq(x^2) is stable, then the polynomials p(-x^2) and q(-x^2) have real, interlacing roots. In Section <ref>, we will extend the Hermite-Biehler Theorem by showing that, even if f is not stable (suppose f has n_- roots in the left half-plane and n_+ roots in the right), then it is still possible to give a lower bound on the number of real roots of p(-x^2) and q(-x^2). This bound is given in terms of the quantity |n_–n_+|. Furthermore, we show that these real roots interlace.
Another classical result in the theory of polynomials is Descartes' Rule of Signs. We say that a real polynomial
f(x)=a_0x^n+a_1x^n-1+a_2x^n-2+⋯+a_n, a_0≠0
has k sign changes if k sign changes occur between consecutive nonzero elements of the sequence a_0,a_1,…,a_n. Descartes' Rule of Signs states that the number of positive roots of f is either equal to k, or is less than k by an even number. Descartes' rule gives the exact number of positive roots in only two cases:
* f has no sign changes, in which case, f has no positive roots, or
* f has precisely one sign change, in which case, f has precisely one positive root.
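For example, f(x)=x^3-3x+1 has coefficient signs +,-,+ (ignoring the zero coefficient), hence k=2 sign changes, and since f(0)=1>0, f(1)=-1<0 and f(2)=3>0, it indeed has two positive roots; by contrast, f(x)=x^3-3x-1 has exactly one sign change and hence exactly one positive root.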
Conversely to (i), if every root of f has real part less than or equal to zero, then f has no sign changes. To see this, we need only observe that, if the roots of f are labeled -η_1,-η_2,…,-η_s,-α_1± iβ_1,-α_2± iβ_2,…,-α_m± iβ_m, where η_j,α_j,β_j≥0 and s+2m=n, then the polynomial
1/a_0f(x)=∏_j=1^s(x+η_j)∏_j=1^m( (x+α_j)^2+β_j^2 )
has nonnegative coefficients, and consequently, every nonzero coefficient of f has the same sign.
In general, the converse of (ii) is not true; however, in Section <ref>, we will use our extension of the Hermite-Biehler Theorem to prove that, if f has at most one root with positive real part, then the sequences a_0,a_2,a_4,… and a_1,a_3,a_5,… each feature at most one sign change.
Polynomials with one positive root (in particular, inverse rules of signs for such polynomials) are of interest in a number of areas, such as in polynomial real root isolation, i.e. the process of finding a collection of intervals of the real line such that each interval contains precisely one real root and each real root is contained in some interval. Modern real root isolation algorithms typically use a version of Vincent's Theorem <cit.>, the proof of which depends on some kind of inverse rule of signs for polynomials with one positive root. For example, the proof of Vincent's Theorem given by Alesina and Galuzzi <cit.> uses a special case of a theorem of Obreschkoff <cit.>, which we state below:
<cit.>
If a real polynomial f of degree n has a simple positive root r and all other roots lie in the wedge
S_√(3):={-α+iβ : α>0, |β|≤√(3)α},
then f has precisely one sign change.
Polynomials with one positive root also arise in problems that consider the sign patterns of matrices (in particular, companion and related matrices). One such problem is the Nonnegative Inverse Eigenvalue Problem, or NIEP. This is the (still open) problem of characterising those lists of complex numbers which are realisable as the spectrum of some (entrywise) nonnegative matrix. Polynomials with one positive root are of particular importance in the NIEP, and as such, the NIEP has already motivated several results on the coefficients of polynomials of this type. In this context, the polynomial f represents the characteristic polynomial of the realising matrix and its one positive root represents the Perron eigenvalue of the realising matrix.
One of the earliest results in the NIEP was given by Suleǐmanova <cit.> when she proved the following:
<cit.>
Let σ:=(ρ,λ_2,λ_3,…,λ_n), where ρ≥0 and λ_i≤0: i=2,3,…,n. Then σ is the spectrum of a nonnegative matrix if and only if
ρ+λ_2+λ_3+⋯+λ_n≥0.
Perhaps the most elegant proof of Suleǐmanova's result is due to Perfect <cit.>, who showed that, under the assumptions of the theorem, every coefficient of the polynomial
f(x)=(x-ρ)∏_i=2^n(x-λ_i),
apart from the leading coefficient, is nonpositive, and hence, the companion matrix of f is nonnegative (note that, since Suleǐmanova's hypotheses guarantee the coefficient of x^n-1 in f is negative, the same result follows immediately from Theorem <ref>).
Later, Laffey and Šmigoc <cit.> generalised Suleǐmanova's theorem to complex lists with one positive element and n-1 elements with real part less than or equal to zero:
<cit.>
Let ρ≥0 and let λ_2,λ_3,…,λ_n be complex numbers such that
Re λ_i≤0 for all i=2,3,…,n. Then the list
σ:=(ρ,λ_2,λ_3,…,λ_n) is the spectrum of a nonnegative
matrix if and only if the following conditions hold:
σ is self-conjugate;
ρ+λ_2+λ_3+⋯+λ_n≥0;
(ρ+λ_2+λ_3+⋯+λ_n)^2≤ n(ρ^2+λ_2^2+λ_3^2+⋯+λ_n^2).
Furthermore, when the above conditions are satisified, σ may be realised by a matrix of the form
C+α I_n, where C is a nonnegative companion matrix with trace zero and α is a
nonnegative scalar.
The crucial ingredient in Laffey and Šmigoc's result was the following lemma (also proved by the authors):
Let (λ_2,λ_3,…,λ_n) be a self-conjugate list of complex numbers with nonpositive real parts, let ρ≥0 and let
f(x):=(x-ρ)∏_i=2^n(x-λ_i)=x^n+a_1x^n-1+a_2x^n-2+⋯+a_n.
If a_1,a_2≤0, then a_i≤0: i=3,4,…,n.
Although Lemma <ref> was motivated by matrix theory, it is, fundamentally, a result on the coefficients of real polynomials. We generalise this result in Section <ref>.
§ THE CAUCHY INDEX OF A RATIONAL FUNCTION
Let f(x) be a real rational function and let θ,ϕ∈ℝ∪{-∞,∞}, with θ<ϕ. The Cauchy index of f(x) between the limits θ and ϕ—written I_θ^ϕ f(x)—is defined as the number of times f(x) jumps from -∞ to ∞, minus the number of times f(x) jumps from ∞ to -∞, as x moves from θ to ϕ.
If
f(x)=1/((x+1)(x-1)),
then I_-∞^0f(x)=-1, I_0^∞ f(x)=1 and I_-∞^∞ f(x)=0.
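Indeed, x^2-1 changes sign from positive to negative as x increases through -1, so f jumps from ∞ to -∞ there (contributing -1 to the index), while at x=1 the sign changes from negative to positive, so f jumps from -∞ to ∞ (contributing +1).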
We introduce some additional notation: if f(x) is a complex-valued function and C is a contour in the complex plane, let Δ_Cf(x) denote the total increase in arg f(x) as x traverses the contour C. If C is the line segment from θ to ϕ, then we write Δ_θ^ϕ f(x).
The following result (and its proof) essentially appears in <cit.>. The proof is included for completeness.
Let f(x):=P(x)+iQ(x), where
P(x):=x^n+a_1x^n-1+a_2x^n-2+⋯+a_n
and
Q(x):=b_1x^n-1+b_2x^n-2+⋯+b_n
are real polynomials. Suppose f has n_+ roots with positive imaginary part, n_- roots with negative imaginary part and n_0 real roots (n_++n_-+n_0=n). Then
I_-∞^∞Q(x)/P(x)=n_- - n_+.
We first consider the case when n_0=0. Define the closed contour C=C_1+C_2 (shown in Figure <ref>), where C_1 is the line segment from -R to R and C_2 is the semicircle
x(t)=Re^it : 0≤ t≤π.
Assume R is large enough so that all of the roots of f with positive imaginary part lie within the region enclosed by C.
[Figure: the contour C = C_1 + C_2.]
Denote the roots of f by x_1,x_2,…,x_n. For each j=1,2,…,n, if Im(x_j)>0, then Δ_C(x-x_j)=2π. Otherwise, Δ_C(x-x_j)=0. Therefore
Δ_Cf(x)=Δ_C(∏_j=1^n(x-x_j))=∑_j=1^nΔ_C(x-x_j)=2n_+π.
Similarly,
lim_R→∞Δ_C_2f(x)=nπ.
Hence
Δ_-∞^∞ f(x)=(2n_+-n)π;
however, since
arg f(x)=tan^-1Q(x)/P(x)
and
lim_x→±∞Q(x)/P(x)=0,
it follows that
1/πΔ_-∞^∞ f(x)=-I_-∞^∞Q(x)/P(x).
Combining (<ref>) and (<ref>) gives
I_-∞^∞Q(x)/P(x)=n-2n_+=n_- - n_+,
as required.
Now consider the case when n_0>0. Let us label the real roots of f as η_1,η_2,…, η_n_0.
Writing
f(x) =( ∏_j=1^n_0(x-η_j) )f̃(x),
f̃(x) =P̃(x)+iQ̃(x),
we note that the polynomial f̃ has n_+ roots with positive imaginary part, n_- roots with negative imaginary part and no real roots. Hence, from the above,
I_-∞^∞Q̃(x)/P̃(x)=n_- - n_+.
We note, however, that
P(x) =( ∏_j=1^n_0(x-η_j) )P̃(x),
Q(x) =( ∏_j=1^n_0(x-η_j) )Q̃(x)
and for all j=1,2,…,n_0,
lim_x→η_jQ(x)/P(x)=lim_x→η_jQ̃(x)/P̃(x).
Therefore
I_-∞^∞Q(x)/P(x)=I_-∞^∞Q̃(x)/P̃(x).
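As a small check of the lemma, take f(x)=x^2+ix=x(x+i), so that P(x)=x^2, Q(x)=x, n_+=0, n_-=1 (the root -i) and n_0=1 (the root 0). Here Q(x)/P(x)=1/x jumps from -∞ to ∞ at x=0, giving I_-∞^∞ Q(x)/P(x)=1=n_- - n_+, as predicted.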
§ AN EXTENSION OF THE HERMITE-BIEHLER THEOREM
In this section, we consider an arbitrary real polynomial f(x)=p(x^2)+xq(x^2), with n_- roots in the left half-plane and n_+ roots in the right half-plane. We extend the Hermite-Biehler Theorem by giving a lower bound on the number of real roots of p(-x^2) and q(-x^2) in terms of |n_–n_+| and showing that these real roots interlace.
Let 𝒳 and 𝒵 be sequences of real numbers. We say 𝒳 and 𝒵 interlace if the following two conditions hold:
* if x_i and x_j are two distinct elements of 𝒳 with x_i<x_j, then there exists an element z_k of 𝒵 such that x_i≤ z_k≤ x_j (and vice versa);
* if x_i appears in 𝒳 with multiplicity m, then x_i appears in 𝒵 with multiplicity at least m-1 (and vice versa).
We say 𝒳 and 𝒵 strictly interlace if every element of 𝒳 and 𝒵 occurs with multiplicity 1, 𝒳 and 𝒵 have no element in common and whenever x_i and x_j are two distinct elements of 𝒳 with x_i<x_j, there exists an element z_k of 𝒵 such that x_i<z_k<x_j (and vice versa).
Before considering the real polynomial f(x)=p(x^2)+xq(x^2), it is easier (and more general) to first consider the complex polynomial f(x):=P(x)+iQ(x).
Consider the polynomial
f(x):=P(x)+iQ(x),
where
P(x) :=x^n+a_1x^n-1+a_2x^n-2+⋯+a_n,
Q(x) :=b_1x^n-1+b_2x^n-2+⋯+b_n
and the a_i and b_i are real. Suppose f has n_+ roots with positive imaginary part, n_- roots with negative imaginary part and n_0<n real roots (n_++n_-+n_0=n). If d:=n-2min{n_+,n_-}, then (counting multiplicities) there exist at least d real roots of P (say μ_1,μ_2,…,μ_d) and at least d-1 real roots of Q (say ν_1,ν_2,…,ν_d-1) such that
μ_1≤ν_1≤μ_2≤ν_2≤⋯≤ν_d-1≤μ_d.
If n_0=0, then the inequalities in (<ref>) may be assumed to be strict.
As in the proof of Theorem <ref>, we first consider the case when n_0=0. In this case, P and Q can have no real root in common, since if x_0 were a real root of both P and Q, then x_0 would also be a real root of f. Suppose also that n_->n_+.
Let p_1<p_2<⋯<p_s be the points on the real line at which Q(x)/P(x) jumps from -∞ to ∞ and let q_1<q_2<⋯<q_s' be the points on the real line at which Q(x)/P(x) jumps from ∞ to -∞. Clearly, the p_i and q_i are roots of P. Suppose they are arranged as follows:
⋯<p_k_j< p_k_j+1<⋯< p_k_j+1-1
<q_l_j< q_l_j+1<⋯< q_l_j+1-1
<p_k_j+1< p_k_j+1+1<⋯< p_k_j+2-1<⋯.
Now consider the interval R:=(p_k_j+r-1,p_k_j+r), where 1≤ r≤ k_j+1-k_j-1. By definition of the p_i,
lim_x→ p_r-1+k_j^+Q(x)/P(x)=∞, lim_x→ p_r+k_j^-Q(x)/P(x)=-∞.
Furthermore, although Q(x)/P(x) may have discontinuities in R (at points where P has a root of even multiplicity), Q(x)/P(x) does not change sign at these discontinuities. Hence Q(x)/P(x) has a root, say w_jr, in R. Obviously, w_jr is also a root of Q.
Let us now consider the sequence
𝒯:=( …,p_k_j,w_j1, p_k_j+1,w_j2,…,p_k_j+1-1,
p_k_j+1,w_j+1,1, p_k_j+1+1,w_j+1,2,…,p_k_j+2-1,… ).
This sequence consists of strictly interlacing roots of P and Q, apart from certain pairs of adjacent roots of P of the form (p_k_j+1-1,p_k_j+1). Hence, we form a new sequence 𝒯' from 𝒯 by deleting either p_k_j+1-1 or p_k_j+1 for each j. Since 𝒯' is a strictly interlacing sequence of real roots of P and Q, whose first and last entries are roots of P, it is sufficient to check that 𝒯' is sufficiently long.
Let h be the number of subsequences (q_l_j< q_l_j+1<⋯< q_l_j+1-1) which lie between p_1 and p_s. We note that 𝒯 has length 2s-h-1. Since 𝒯' was formed by deleting h elements from 𝒯, it follows that 𝒯' has length
2(s-h)-1≥2(s-s')-1=2I_-∞^∞Q(x)/P(x)-1.
By Theorem <ref>, it follows that 𝒯' has at least
2(n_- - n_+)-1=2(n-2n_+)-1=2d-1
elements, as required.
We have yet to consider n_+≥ n_- or n_0>0. If n_0=0 and n_+=n_-, then the statement says nothing; hence we may ignore this case. If n_0=0 and n_+>n_-, then the proof is analogous to the above.
Finally, suppose n_0>0. Let us label the real roots of f as η_1,η_2,…, η_n_0. Writing
f(x)=( ∏_j=1^n_0(x-η_j) )( P̃(x)+iQ̃(x) ),
we note that the polynomial P̃(x)+iQ̃(x) has n_+ roots with positive imaginary part, n_- roots with negative imaginary part and no real roots. Hence, from the above, there exist d-n_0 real roots of P̃ (say μ_1,μ_2,…,μ_d-n_0) and d-n_0-1 real roots of Q̃ (say ν_1,ν_2, …,ν_d-n_0-1) such that
μ_1<ν_1<μ_2<ν_2<⋯<ν_d-n_0-1<μ_d-n_0.
All that remains is to note that the sequences
(μ_1,μ_2,…,μ_d-n_0,η_1,η_2,…,η_n_0)
and
(ν_1,ν_2,…,ν_d-n_0-1,η_1,η_2,…,η_n_0)
interlace (though not strictly).
As a consequence of Theorem <ref>, we obtain the following extension of the Hermite-Biehler theorem:
Consider the real polynomial
f(x):=x^n+a_1x^n-1+a_2x^n-2+⋯+a_n.
Suppose f has n_+ roots with positive real part, n_- roots with negative real part and n_0<n purely imaginary roots (n_++n_-+n_0=n). Let
P(x) :=x^n-a_2x^n-2+a_4x^n-4-⋯,
Q(x) :=a_1x^n-1-a_3x^n-3+a_5x^n-5-⋯
and d:=n-2min(n_+,n_-). Then (counting multiplicities) there exist at least d real roots of P (say μ_1,μ_2,…,μ_d) and at least d-1 real roots of Q (say ν_1,ν_2,…,ν_d-1) such that
μ_1≤ν_1≤μ_2≤ν_2≤⋯≤ν_d-1≤μ_d.
If n_0=0, then the inequalities in (<ref>) may be assumed to be strict.
The real parts of the roots of f correspond to the imaginary parts of the roots of the polynomial
g(x):=i^nf(-ix)=x^n+ia_1x^n-1-a_2x^n-2-ia_3x^n-3+⋯.
The result follows from Theorem <ref>.
Note that the bounds given for the number of real roots of P and Q in Corollary <ref> may or may not be achieved, as illustrated by the following two examples:
The polynomial
f(x):=x^5-x^4+3 x^3-4 x+1
has n_+=4 roots with positive real part and n_-=1 root with negative real part, so that, in the notation of Corollary <ref>, d=3. The polynomial P(x):=x^5-3 x^3-4 x has roots -2,0,2,i,-i and the polynomial Q(x)=-x^4+1 has roots -1,1,i,-i. Hence, in this example, the bounds given in Corollary <ref> on the numbers of real roots of P and Q are achieved.
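Examples of this kind are straightforward to verify by machine; the following is a minimal sketch using the sympy library (assuming its standard Poly.nroots and solve interfaces), included only as an illustration:

import sympy as sp

x = sp.symbols('x')
f = x**5 - x**4 + 3*x**3 - 4*x + 1

# Count the roots of f by the sign of their real part.
roots = sp.Poly(f, x).nroots()
n_plus = sum(1 for r in roots if sp.re(r) > 0)   # expect 4
n_minus = sum(1 for r in roots if sp.re(r) < 0)  # expect 1

# P and Q as in the corollary above (alternating-sign even/odd parts of f).
P = x**5 - 3*x**3 - 4*x
Q = -x**4 + 1
print(n_plus, n_minus)   # 4 1
print(sp.solve(P, x))    # real roots -2, 0, 2 (plus the pair +-i)
print(sp.solve(Q, x))    # real roots -1, 1 (plus the pair +-i)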
Consider the polynomial
f(x):=x^4+2 x^3+23 x^2+94 x+130,
with roots 1±5i,-2± i. In the notation of Corollary <ref>, n_+=n_-=2 and d=0. Hence, the corollary does not guarantee the existence of any real roots of the polynomials
P(x):=x^4-23 x^2+130
or
Q(x):=2 x^3-94 x;
however, P has roots -√(13),-√(10),√(10),√(13) and Q has roots -√(47),0,√(47).
It turns out that, under certain circumstances, we can infer the existence of an additional two real roots of the polynomial Q given in (<ref>). We will use these additional roots in the next section.
Assume the hypotheses and conclusion of Theorem <ref> (alternatively Corollary <ref>).
If n_->n_+ and lim_x→-∞(P(x)/Q(x))=∞, or alternatively if n_-<n_+ and lim_x→-∞(P(x)/Q(x))=-∞, then there exists an additional real root ν_0 of Q such that ν_0≤μ_1.
If n_->n_+ and lim_x→∞(P(x)/Q(x))=-∞, or alternatively if n_-<n_+ and lim_x→∞(P(x)/Q(x))=∞, then there exists an additional real root ν_d of Q such that ν_d≥μ_d.
If n_0=0, then ν_0<μ_1 and ν_d>μ_d.
Assume the hypotheses and conclusion of Theorem <ref> (those of Corollary <ref> are equivalent). First suppose n_->n_+ and
lim_x→-∞P(x)/Q(x)=∞.
In the proof of Theorem <ref>, the first element p_1 of 𝒯' was chosen such that
lim_x→ p_1^-Q(x)/P(x)=-∞.
Hence, in this case, (<ref>) implies the existence of an additional real root w_0 of Q(x)/P(x) such that w_0<p_1. It follows that there exists an additional real root ν_0 of Q such that ν_0≤μ_1.
The remaining cases are dealt with similarly.
§ POLYNOMIALS WITH ONE POSITIVE ROOT
Using our extension of the Hermite-Biehler theorem, it will now be possible give an inverse rule of signs for real polynomials with one positive root. Later (in Theorem <ref>), we will show how this rule can be somewhat simplified, under some minor additional assumptions.
Consider the real polynomial
f(x):=x^n+a_1x^n-1+a_2x^n-2+⋯+a_n.
Suppose f has roots r,x_2,x_3,…,x_n, where r is real and Re(x_j)≤0: j=2,3,…,n. Then the sequence a_1,a_2,…,a_n satisfies the following conditions:
Let t be the largest integer such that a_2t≠0. Then either a_2j>0 for all j=1,2,…,t, or there exists s∈{1,2,…,t} such that
a_2j >0 : j=1,2,…,s-1,
a_2s ≤0,
a_2j <0 : j=s+1,s+2,…,t.
Let t' be the largest integer such that a_2t'-1≠0. Then either a_2j-1>0 for all j=1,2,…,t', or there exists s'∈{1,2,…,t'} such that
a_2j-1 >0 : j=1,2,…,s'-1,
a_2s'-1 ≤0,
a_2j-1 <0 : j=s'+1,s'+2,…,t'.
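Before turning to the proof, here is a concrete instance: f(x)=(x-3)(x+1)(x^2+4)=x^4-2x^3+x^2-8x-12 has one positive root and all other roots with nonpositive real part; its even-indexed coefficients 1, a_2=1, a_4=-12 change sign exactly once (here t=2 and s=2), while its odd-indexed coefficients a_1=-2, a_3=-8 are all negative (here t'=2 and s'=1).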
First suppose n is even and write n=2m. The polynomial
f(x)=x^2m+a_1x^2m-1+a_2x^2m-2+⋯+a_2m
has at most one root with positive real part. Therefore, by Corollary <ref>, the polynomial
x^2m-a_2x^2m-2+a_4x^2m-4-⋯+(-1)^ma_2m
has at least 2m-2 real roots. It follows that the polynomial
y^m-a_2y^m-1+a_4y^m-2-⋯+(-1)^ma_2m
has at least m-1 nonnegative roots. Let t be the largest integer such that a_2t≠0. Then the polynomial
y^t-a_2y^t-1+a_4y^t-2-⋯+(-1)^ta_2t
has at least t-1 positive roots. Therefore, by Descartes' rule of signs, the number of sign changes which occur between consecutive nonzero terms of the sequence
𝒯:=(1,-a_2,a_4,-a_6,…,(-1)^ta_2t)
is at least t-1. In particular, since 𝒯 contains t+1 elements, this implies at most one of the elements in 𝒯 is zero. There are now three cases to consider:
Case 1: If every element in 𝒯 is nonzero and 𝒯 has t sign changes, then a_2j>0 for each j=1,2,…,t.
Case 2: If every element in 𝒯 is nonzero and 𝒯 has t-1 sign changes, then the sequence
(1,a_2,a_4,…,a_2t)
has precisely one sign change.
Case 3: Suppose there exists s∈{1,2,…,t} such that a_2s=0. Then, removing a_2s from 𝒯, we obtain a sequence
𝒯_0:=(1,-a_2,a_4,…,(-1)^s-1a_2s-2,(-1)^s-1a_2s+2,…,(-1)^ta_2t)
with t elements (each nonzero) and t-1 sign changes. It follows that
a_2j >0 : j=1,2,…,s-1,
a_2j <0 : j=s+1,s+2,…,t.
We have now shown that the sequence a_2,a_4,… satisfies condition (i).
Similarly, by Corollary <ref>, the polynomial
a_1x^2m-1-a_3x^2m-3+a_5x^2m-5-⋯+(-1)^m-1a_2m-1x
has at least 2m-3 real roots, one of which is zero. It follows that the polynomial
a_1y^m-1-a_3y^m-2+a_5y^m-3-⋯+(-1)^m-1a_2m-1
has at least m-2 nonnegative roots. Let t' be the largest integer such that a_2t'-1≠0. Then the polynomial
a_1y^t'-1-a_3y^t'-2+a_5y^t'-3-⋯+(-1)^t'-1a_2t'-1
has at least t'-2 positive roots. Therefore, by Descartes' rule of signs, the number of sign changes which occur between consecutive nonzero terms of the sequence
𝒯':=(a_1,-a_3,a_5,…,(-1)^t'-1a_2t'-1)
is at least t'-2. As above, this implies at most one of the elements in 𝒯' is zero.
If a_1>0, then the sequences 𝒯 and 𝒯' have the same properties. In this case, it follows from the above argument that the sequence a_1,a_3,… satisfies condition (ii).
If a_1<0, then for P(x) and Q(x) defined as in (<ref>), we see that
lim_x→-∞(P(x)/Q(x))=∞ and lim_x→∞(P(x)/Q(x))=-∞.
Hence, by Observation <ref>, every root of (<ref>) is real. It follows that (<ref>) has t'-1 positive roots and 𝒯' has t'-1 sign changes. Therefore a_2j-1<0 for all j=1,2,…,t'.
Finally, if a_1=0, then consider the polynomial
f_ϵ(x) :=(x-r-ϵ)∏_j=2^n(x-x_j)
=x^n-ϵ x^n-1+b_2x^n-2+b_3x^n-3+⋯+b_n,
where ϵ>0. From the above, we see that b_2j-1≤0: j=2,3,…, ⌈ n/2 ⌉. Furthermore, since each b_j depends continuously on ϵ and
lim_ϵ→0f_ϵ(x)=f(x),
it follows that a_2j-1≤0: j=2,3,…,⌈ n/2 ⌉. Since at most one of the elements in 𝒯' is zero, we conclude that a_2j-1<0 for all j=2,3,…,t'. We have now shown that the sequence a_1,a_3,… satisfies condition (ii).
The proof for odd n is similar.
With Corollary <ref> established, the proof of Theorem <ref> is quite elementary. Furthermore, the proof generalises to polynomials which have more than one root with positive real part: by combining Corollary <ref> with Descartes' Rule of Signs, bounds can be given on the number of sign changes which occur in the even/odd coefficients.
The statement of Theorem <ref> is somewhat complicated by the fact that the multiplicity of zero as a root of
x^n-a_2x^n-2+a_4x^n-4-⋯
may be different from the multiplicity of zero as a root of
a_1x^n-1-a_3x^n-3+a_5x^n-5-⋯.
The following example illustrates this:
Let
f(x):=(x-r)g(x)=x^2m+2+a_1x^2m+1+a_2x^2m+⋯+a_2m+2,
where r>0 and
g(x):=(x+μ)∏_j=1^m(x^2+β_j^2) : μ,β_1,…,β_m>0.
The constant term in f is given by
a_2m+2=-rμβ_1^2β_2^2⋯β_m^2<0.
Hence, by Theorem <ref>, the sequence 𝒯_e:=(1,a_2,a_4,…,a_2m+2) of even coefficients features precisely one sign change and at most one element of 𝒯_e vanishes.
It is not difficult to verify that the odd coefficients of f are given by
a_2k+1=(μ-r)e_k(β_1^2,β_2^2,…,β_m^2) : k=0,1,…,m,
where e_k denotes the k-th elementary symmetric function. Therefore, the sign of every odd coefficient is determined by the sign of r-μ. In particular, if r=μ, then every odd coefficient vanishes.
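For m=1 this is a direct computation: f(x)=(x-r)(x+μ)(x^2+β^2)=x^4+(μ-r)x^3+(β^2-rμ)x^2+(μ-r)β^2x-rμβ^2, so that a_1=(μ-r)e_0 and a_3=(μ-r)e_1(β^2), in agreement with the formula above.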
It turns out that Example <ref> is essentially unique, in that, if f is not of this form and f(0)≠0, then a_k≤0 implies a_k+2,a_k+4,…<0. To establish this fact, we will require some inequalities from <cit.>, which are closely related to Newton's Inequalities:
<cit.>
Let
g(x):=∏_j=1^n(x-x_j)=x^n+b_1x^n-1+b_2x^n-2+⋯+b_n
be a real polynomial, where x_1,x_2,…,x_n are complex numbers with nonpositive real parts. If k and l have different parity, 1≤ k<l≤ n-1, then
b_kb_l≥ b_k-1b_l+1.
The case of equality in (<ref>) is not explicitly considered in <cit.>; however, by examining the proof, it is possible to characterise the equality case:
Assume the hypotheses of Theorem <ref>. If k is even and l is odd, then equality occurs in (<ref>) if and only if one of the following conditions holds:
zero is a root of g of multiplicity at least n-l+1;
Re (x_j)=0 for all j.
If k is odd and l is even, then equality occurs in (<ref>) if and only if (i) or (ii) holds, or g is of the form (<ref>).
We are now able to give a slightly more compact formulation of Theorem <ref>:
Let
f(x):=(x-r)∏_j=2^n(x-x_j)=x^n+a_1x^n-1+a_2x^n-2+⋯+a_n
be a real polynomial, where r>0 and x_2,x_3,…,x_n are nonzero complex numbers such that Re (x_j)≤0 for all j∈{2,3,…,n} and Re (x_j)<0 for some j∈{2,3,…,n}. Then, assuming ∏_j=2^n(x-x_j) is not of the form (<ref>), for each k∈{1,2,…,n-2}, a_k≤0 implies a_k+2<0.
Let us write f(x)=(x-r)g(x), where
g(x):=∏_j=2^n(x-x_j)=x^n-1+b_1x^n-2+b_2x^n-3+⋯+b_n-1
and let us define b_0:=1. Since a_n=-rb_n-1<0, we need only consider k≤ n-3.
Suppose (to the contrary) that there exists k∈{1,2,…,n-3} such that
a_k=b_k-rb_k-1≤0
and
a_k+2=b_k+2-rb_k+1≥0.
Combining (<ref>) and (<ref>) gives
b_kb_k+1≤ b_k-1b_k+2,
and so, by Theorem <ref>,
b_kb_k+1=b_k-1b_k+2,
which, by Observation <ref>, contradicts the hypotheses of the theorem.
We will illustrate Theorems <ref> and <ref> with an example:
Consider the polynomial
f(x):=(x-r)( (x+1)^2+β^2 )^m=x^2m+1+a_1x^2m+a_2x^2m-1+⋯+a_2m+1,
where r,β>0. We note that a_2m+1=-r(1+β^2)^m<0 and so f must have an odd number of sign changes. If β≤√(3), then by Theorem <ref>, f must have precisely one sign change. For larger values of β, we will see that f may have many sign changes, but by Theorem <ref>, the sequences
𝒯_e:=(1,a_2,a_4,…,a_2m)
and
𝒯_o:=(a_1,a_3,…,a_2m+1)
must each exhibit at most one sign change.
If β=√(2m+1) and r=1+1/m, it is not difficult to calculate that a_2m-1=a_2m=0, and in this case, Theorem <ref> implies a_k>0: k=1,2,…,2m-2, i.e. f has precisely one sign change. Keeping this value of β fixed, we may vary the location of the sign change by increasing r. In particular, with r=2m, we have a_1=a_2=0. We note that, with this value of β, the complex roots of f lie outside of the wedge (<ref>), illustrating the well-known fact that location in this wedge is sufficient, but not necessary, for the coefficients of the polynomial to exhibit one sign change. The fact that two adjacent coefficients of f can vanish simultaneously as r is varied indicates that this value of β is, in a sense, “critical”: if β were increased slightly beyond √(2m+1), it would be possible to find a value of r such that f has three sign changes.
Finally, let us consider an extreme case: if β=2m and
2m<r<2m+1/(2m),
it is not difficult to check that a_1<0 and a_2m>0 (and hence f has the maximal possible number of sign changes).
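The claims in this last case are easy to verify symbolically; the following minimal sympy sketch treats m=2, where the particular choice r=33/8 (which satisfies 4<r<4.25) is ours, made only for illustration:

import sympy as sp

x = sp.symbols('x')
m, beta = 2, 4                       # beta = 2m
r = sp.Rational(33, 8)               # 2m < r < 2m + 1/(2m)
f = sp.expand((x - r) * ((x + 1)**2 + beta**2)**m)
coeffs = sp.Poly(f, x).all_coeffs()  # [1, a_1, ..., a_{2m+1}]
print(coeffs[1], coeffs[2 * m])      # a_1 = -1/8 < 0 and a_4 = 17/2 > 0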
|
http://arxiv.org/abs/1701.07695v1 | 20170126133557 | Exponential Source/Channel Duality | [
"Sergey Tridenski",
"Ram Zamir"
] | cs.IT | [
"cs.IT",
"math.IT"
] | |
http://arxiv.org/abs/1701.07472v2 | 20170125202134 | The maximum number of cliques in graphs without long cycles | [
"Ruth Luo"
] | math.CO | [
"math.CO"
] |
The maximum number of cliques in graphs without long cycles
Ruth Luo (University of Illinois at Urbana–Champaign, Urbana, IL 61801, USA; e-mail: ruthluo2@illinois.edu). Research of this author is supported in part by NSF grant DMS-1600592.
December 30, 2023
=====================================================================================================================================================================================
The Erdős–Gallai Theorem states that for k≥ 3 every graph on n vertices with more than 1/2(k-1)(n-1) edges contains a cycle of length at least k. Kopylov proved a strengthening of this result for 2-connected graphs with extremal examples H_n,k,t and H_n,k,2. In this note, we generalize the result of Kopylov to bound the number of s-cliques in a graph with circumference less than k. Furthermore, we show that the same extremal examples that maximize the number of edges also maximize the number of cliques of any fixed size. Finally, we obtain the extremal number of s-cliques in a graph with no path on k-vertices.
Mathematics Subject Classification: 05C35, 05C38.
Keywords: Turán problem, cycles, paths.
§ INTRODUCTION
In <cit.>, Erdős and Gallai determined ex(n, P_k), the maximum number of edges in an n-vertex graph that does not contain a copy of the path on k vertices, P_k. This result was a corollary of the following theorem:
Let G be an n-vertex graph with more than 1/2(k-1)(n-1) edges, k ≥ 3.
Then G contains a cycle of length at least k.
To obtain the result for paths, suppose G is an n-vertex graph with no copy of P_k. Add a new vertex v adjacent to all vertices in G, and let this new graph be G'. Then G' is an (n+1)-vertex graph with no cycle of length k+1 or longer, and so Theorem <ref> gives e(G)+ n = e(G') ≤ 1/2kn, i.e., e(G) ≤ 1/2(k-2)n.
Let G be an n-vertex graph with more than 1/2(k-2)n edges, k≥ 2.
Then G contains a copy of P_k.
Both results are sharp with the following extremal examples: for Theorem <ref>, when k-2 divides n-1, take any connected n-vertex graph whose blocks (maximal connected subgraphs with no cut vertices) are cliques of order k-1. For Corollary <ref>, when k -1 divides n-1, take the n-vertex graph whose connected components are cliques of order k - 1.
There have been several alternate proofs and sharpenings of the Erdős-Gallai theorem
including results by Woodall <cit.>, Lewin <cit.>, Faudree and Schelp<cit.>, and Kopylov <cit.> – see <cit.> for further details.
The strongest version was that of Kopylov who improved the Erdős–Gallai bound for 2-connected graphs. To state the theorem, we first introduce the family of extremal graphs.
Fix k≥ 4, n ≥ k, k/2 > a≥ 1. Define the n-vertex graph H_n,k,a as follows.
The vertex set of H_n,k,a is partitioned into three sets A,B,C such that |A| = a, |B| = n - k + a and |C| = k - 2a
and the edge set of H_n,k,a consists of all edges between A and B together with all edges in A ∪ C.
Note that when a ≥ 2, H_n,k,a is 2-connected, has no cycle of length k or longer, and e(H_n,k,a) = \binom{k-a}{2} + (n-k+a)a.
Definition. Let f_s(n,k, a):= \binom{k-a}{s} + (n-k+a)\binom{a}{s-1}, so that f_2(n,k,a) = e(H_n,k,a).
By considering the second derivative, one can check that f_s(n,k, a) is convex in a in the domain [1, ⌊ (k-1)/2 ⌋], thus it attains its maximum at one of the endpoints a = 1 or a = ⌊ (k-1)/2 ⌋.
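For instance, when s=2, f_2(n,k,a)=\frac{(k-a)(k-a-1)}{2}+(n-k+a)a has second derivative ∂^2f_2/∂ a^2=1+2=3>0, so f_2 is strictly convex in a.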
Let n ≥ k ≥ 5 and let t = ⌊(k-1)/2⌋. If G is a 2-connected n-vertex graph with
e(G) ≥max{f_2(n, k, 2), f_2(n, k, t)},
then either G has a cycle of length at least k, or G = H_n,k, 2, or G = H_n,k, t.
It is straightforward to check that any 2-connected graph that is not a triangle has a cycle of length 4 or greater, and so the theorem covers all nontrivial choices of k. This theorem also implies Theorem <ref> by applying induction to each block of the graph.
We consider a generalized Turán-type problem. Fix graphs T and H, and define the function ex(n,T,H) to be the maximum number of (unlabeled) copies of T in an H-free graph on n vertices. When T = K_2, we have the usual extremal number ex(n,T,H) = ex(n, H).
There are many notable papers studying the ex(n,T,H) function for different combinations of T and H. Erdős <cit.> proved that for s ≤ r, among all n-vertex graphs that forbid K_r+1, the Turán graph (i.e., the balanced complete r-partite graph) maximizes the number of copies of K_s. Hatami, Hladký, Král', Norine, and Razborov <cit.> and independently Grzesik <cit.> proved ex(n, C_5, K_3) = (n/5)^5 whenever n is divisible by 5 using the method of flag algebras. On the other hand, Bollobás and Győri <cit.> proved (1+o(1))\frac{1}{3\sqrt{3}}n^{3/2} ≤ ex(n, K_3, C_5) ≤ (1+o(1))\frac{5}{4}n^{3/2}, and later Győri and Li <cit.> proved an upper bound for ex(n, K_3, C_2k+1) in terms of ex(n, C_2k). This bound was improved by Füredi and Özkahya <cit.> and then later improved again by Alon and Shikhelman <cit.>. In the same paper, Alon and Shikhelman proved ex(n, K_s, K_{r,t}) = Θ(n^{s-\binom{s}{2}/r}) for certain values of r, s, and t, among other results.
Furthermore, such generalized Turán-type results for graphs can be instrumental for proving related extremal results in hypergraphs. For example, Füredi and Özkahya <cit.> used their upper bounds for the number of triangles in graphs without cycles of fixed lengths to give an upper bound for the number of hyperedges in 3-uniform hypergraphs without Berge-cycles of a fixed length.
In this note, we give an upper bound for the number of s-cliques in a graph without cycles of length k or greater (i.e., circumference less than k). We also obtain ex(n,K_s, P_k).
Definition. For s ≥ 2, let N_s(G) denote the number of unlabeled copies of K_s in G, e.g., N_2(G) = e(G).
Our main result is a generalization of Kopylov's result, Theorem <ref>. In particular, we show that the same extremal examples that maximize the number of edges among n-vertex 2-connected graphs with circumference less than k also maximize the number of cliques of any size. Our main results are the following:
Let n ≥ k ≥ 5 and let t = ⌊(k-1)/2⌋. If G is a 2-connected n-vertex graph with circumference less than k, then
N_s(G) ≤max{f_s(n, k, 2), f_s(n, k, t)}.
Again, this theorem is sharp with the same extremal examples H_n,k, 2 and H_n,k,t.
This theorem implies the cliques version of Theorem <ref>:
Let n ≥ k ≥ 4. If G is an n-vertex graph with circumference less than k, then
N_s(G) ≤ \frac{n-1}{k-2}\binom{k-1}{s}.
Unlike the edges case, Theorem <ref> unfortunately does not easily imply ex(n, K_s, P_k). However, a Kopylov-style argument very similar to the proof of Theorem <ref> gives the result for paths.
Let n ≥ k≥ 4 and let G be an n-vertex connected graph with no path on k vertices. Let t = ⌊ (k - 2)/2 ⌋. Then N_s(G) ≤max{f_s(n, k-1, 1), f_s(n, k-1, t)}.
We have sharpness examples H_n, k-1, 1 and H_n,k-1, t. Finally, using induction on the number of components gives the following result:
ex(n, K_s, P_k) = \frac{n}{k-1}\binom{k-1}{s}.
And the same extremal examples as for Corollary <ref> apply.
The proofs for Corollary <ref>, Theorem <ref>, and Theorem <ref> are given in Section 3 of this paper. We first prove Theorem <ref>.
§ PROOF OF THEOREM <REF>
Let G be an edge-maximal counterexample. Then G is k-closed, i.e., adding any additional edge to G creates a cycle of length at least k. In particular, for any nonadjacent vertices x and y of G, there exists a path of at least k-1 edges between x and y. We will use the following lemma:
Let G be a 2-connected n-vertex graph with a path P of m edges with endpoints x and y. For v ∈ V(G), let d_P(v) = |N(v) ∩ V(P)|. Then G contains a cycle of length at least min{m+1, d_P(x) + d_P(y)}.
Our first goal is to show that G contains a large “core”, i.e., a subgraph with large minimum degree.
For this, we use the notion of disintegration.
Definition: For a natural number α and a graph G, the α-disintegration of
a graph G is the process of iteratively removing from G the vertices with degree
at most α until the resulting graph has minimum degree at least α + 1 or is empty.
This resulting subgraph H = H(G, α) will be called the (α+1)-core of G. It is well known
that H(G, α) is unique and does not depend on the order of vertex deletion (for instance, see <cit.>).
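For illustration only, here is a minimal sketch of α-disintegration (equivalent, assuming networkx's standard semantics, to its built-in k_core with parameter α+1):

import networkx as nx

def core(G, alpha):
    # Return H(G, alpha): iteratively delete vertices of degree <= alpha.
    H = G.copy()
    while True:
        low = [v for v in H if H.degree(v) <= alpha]
        if not low:
            return H  # empty, or of minimum degree >= alpha + 1
        H.remove_nodes_from(low)

# e.g. core(nx.path_graph(5), 1) is empty, while core(nx.complete_graph(4), 2)
# keeps all four vertices.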
Let H(G, t) denote the (t+1)-core of G, i.e., the resulting graph of applying t-disintegration to G. We claim that H(G,t) is nonempty.
Suppose H(G,t) is empty. In the disintegration process, every time a vertex of degree at most t is removed, we delete at most \binom{t}{s-1} copies of K_s. For the last ℓ≤ t vertices, we remove at most \binom{ℓ-1}{s-1} copies of K_s with each deletion. Thus
N_s(G) ≤ (n-t)\binom{t}{s-1} + \binom{t-1}{s-1} + \binom{t-2}{s-1} + … + \binom{0}{s-1}
= (n-t)\binom{t}{s-1} + \binom{t}{s}
= (n-(t+1))\binom{t}{s-1} + \binom{t+1}{s}
≤ f_s(n,k, t),
a contradiction (the first equality uses the hockey-stick identity and the second uses Pascal's rule).
Therefore H(G,t) is nonempty. Next we show that H(G,t) is a complete graph.
If there exists a nonedge of H(G,t), then in G, since G is k-closed, there is a path with at least k-1 edges with these vertices as its endpoints. Among all nonadjacent pairs of vertices in H(G,t), choose x,y such that there is a longest path P in G with endpoints x and y. By maximality of P, all neighbors of x in H(G,t) lie in P: if x has a neighbor x' ∈ H(G,t) - P, then either x'y ∈ E(G), and x'P together with the edge x'y is a cycle of length at least k, or x'y ∉ E(G), and so x'P is a longer path. Similarly for y. Hence, by Lemma <ref>, G has a cycle of length at least min{k, d_P(x) + d_P(y)} = min{k, 2(t+1)} = k, a contradiction.
Now let r = |V(H(G,t))|. Each vertex in H(G,t) has degree at least t+1, so r ≥ t+2. Also, if r ≥ k -1, as G is 2-connected and H(G,t) is a clique, we can extend a path on r vertices of H(G,t) to a cycle of length at least r+1 ≥ k, a contradiction. Therefore t+2 ≤ r ≤ k - 2. In particular, 2 ≤ k - r ≤ t. Apply (k - r)-disintegration to G, and let H(G, k - r) be the resulting graph. Then H(G, t) ⊆ H(G, k - r).
If H(G, t) = H(G, k - r), then
N_s(G) ≤ \binom{r}{s} + (n-r)\binom{k-r}{s-1} = f_s(n, k, k - r) ≤ max{f_s(n,k, 2),f_s(n,k,t)}
by the convexity of f_s.
Therefore, H(G, t) is a proper subgraph of H(G, k - r), and there must be a nonedge between a vertex in H(G,t) and a vertex in H(G, k - r). Among all such pairs, choose x ∈ H(G,t) and y ∈ H(G, k - r) to have a longest path P between them. As before, P contains at least k-1 edges, and each neighbor of x in H(G,t) and each neighbor of y in H(G, k-r) lie in P. Then G contains a cycle of length at least min{k, (r-1) + (k - r + 1)} = k, a contradiction.
§ PROOF OF COROLLARY <REF>, THEOREM <REF>, AND COROLLARY <REF>
Proof of Corollary <ref>.
Define g_s(n, k) = \frac{n-1}{k-2}\binom{k-1}{s} and t = ⌊(k-1)/2⌋. One can check that when n ≥ k,
g_s(n, k) ≥max{f_s(n,k, t), f_s(n, k, 2)}.
Fix a graph G on n vertices with circumference less than k. If G is disconnected, simply apply induction to each component of G to obtain the desired result. Therefore we may assume G is connected. We induct on the number of blocks of G. First suppose k ≥ 5. If G is a block, i.e., 2-connected, then either n ≤ k - 1, and so N_s(G) ≤ \binom{|V(G)|}{s} ≤ g_s(n, k), or n ≥ k, and so by Theorem <ref>, N_s(G) ≤ max{f_s(n,k, t), f_s(n, k, 2)} ≤ g_s(n,k).
Otherwise, consider the block-cut tree of G—the tree whose vertices correspond to blocks of G such that two vertices in the tree are adjacent if and only if the corresponding blocks in G share a vertex. Let B_1 be a block in G corresponding to a leaf-vertex in the block-cut tree such that B_1 and its complement are connected by the cut vertex v. Set B_2 = G - B_1 + {v}. Apply the induction hypothesis to B_1 and B_2 to obtain
N_s(G) = N_s(B_1) + N_s(B_2) ≤ g_s(|B_1|, k) + g_s(n-|B_1|+1, k)
= \frac{|B_1|-1}{k-2}\binom{k-1}{s} + \frac{(n-|B_1|+1) - 1}{k-2}\binom{k-1}{s}
= g_s(n, k).
If k=4, then either G is a forest or G has circumference 3. In the second case, each block of G is either a triangle or an edge. Thus N_s(G) ≤ g_s(n,k) in both cases.
The proof of Theorem <ref> follows the same steps as the proof of Theorem <ref>. As some details here will be omitted to prevent repetition, the reader is advised to first read the proof of Theorem <ref>.
Proof of Theorem <ref>. Suppose for contradiction that N_s(G) > max{f_s(n, k-1, 1),f_s(n, k-1, t)} where t = ⌊ (k-2)/2⌋. Let G_0 be the graph obtained by adding a dominating vertex v_0 adjacent to all of V(G). Then G_0 is 2-connected, has n+1 vertices, and contains no cycle of length k + 1 or greater. Let G' be the k +1-closure of G_0 (i.e., add edges to G_0 until any additional edge creates a cycle of length at least k +1). Denote by N'_s(G') the number of K_s's in G' that do not contain v_0. Thus N'_s(G') ≥ N'_s(G_0) = N_s(G). Apply (t+1)-disintegration to G', where if necessary, we delete v_0 last. Let H(G', t+1) be the resulting graph of the disintegration. If H(G', t+1) is empty, then at the time of deletion each vertex has at most t neighbors that are not v_0. Hence
N'_s(G') ≤ (n-(t+1))\binom{t}{s-1} + \binom{t+1}{s} ≤ f_s(n, k - 1, t),
a contradiction.
The same argument as in the proof of Theorem <ref> also shows that H(G', t+1) is a complete graph, otherwise there would be a cycle of length at least 2(t+2) ≥ (k -1)+2 in G'. Note that v_0 must be contained in H(G', t+1) as it is adjacent to all vertices in G'. Set |V(H(G', t+1))| = r where t+3 ≤ r ≤ k - 1 (and so k - r ≥ 1). In particular, (k + 1) - r ≤ t+1. Apply (k + 1 - r)-disintegration to G'. If H(G', t+1) ≠ H(G', k + 1 - r), then again we can find a cycle of length at least (r-1) + k + 2 - r = k +1. Otherwise, suppose H(G', t+1) = H(G', k + 1 - r). In H(G', t+1), the number of s-cliques that do not include v_0 is r-1 s, and in V(G) - V(H(G', k + 1 - r)), every vertex had at most k - r neighbors that were not v_0 at the time of its deletion. We have
N'_s(G') ≤ \binom{r-1}{s} + (n+1 - r)\binom{k - r}{s-1}
= f_s(n, k - 1, k - r)
≤ max{f_s(n, k - 1, 1),f_s(n,k-1,t)},
a contradiction.
Proof of Corollary <ref>.
Define h_s(n, k) = \frac{n}{k-1}\binom{k-1}{s}, and note that when n ≥ k,
h_s(n, k) ≥max{f_s(n, k-1, t), f_s(n, k-1, 1)}.
We induct on the number of components in G. First suppose k ≥ 4. If G is connected, then either n ≤ k - 1, in which case N_s(G) ≤ \binom{|V(G)|}{s} ≤ h_s(n,k), or n ≥ k and N_s(G) ≤ max{ f_s(n, k-1, 1),f_s(n, k-1, t)} ≤ h_s(n, k). Otherwise, if G is not connected, let C_1 be a component of G. Then N_s(G) = N_s(C_1) + N_s(G - C_1) ≤ h_s(|C_1|, k) + h_s(n-|C_1|, k) = h_s(n,k).
If k=3 (the cases k ≤ 2 are not interesting), then the longest path in G has two vertices. It follows that G is the union of a matching and isolated vertices. Therefore N_s(G) ≤ h_s(n,k).
Acknowledgment. The author would like to thank Alexandr Kostochka, Zoltán Füredi, and Jacques Verstraëte for their guidance and for sharing their knowledge on this topic.
99
alonsh
N. Alon and C. Shikhelman,
Many T copies in H-free graphs,
J. Combin. Theory Ser. B. 121 (2016), 146–172.
bol B. Bollobás and E. Győri, Pentagons vs. triangles, Discrete Math. 308 (2008), 4332–4336
ErdP. Erdős, On the number of complete subgraphs contained in certain graphs, Magyar Tud. Akad.
Mat. Kut. Int. Közl. 7 (1962), 459–474.
ErdGal59
P. Erdős and T. Gallai,
On maximal paths and circuits of graphs,
Acta Math. Acad. Sci. Hungar. 10 (1959), 337–356.
FaudScheB
R. J. Faudree and R. H. Schelp, Ramsey type results,
Infinite and Finite Sets, Colloq. Math. J. Bolyai 10, (ed. A. Hajnal et al.), North-Holland,
Amsterdam, 1975, pp. 657–665.
FaudSche75
R. J. Faudree and R. H. Schelp,
Path Ramsey numbers in multicolorings,
J. Combin. Theory Ser. B. 19 (1975), 150–160.
furedi Z. Füredi and L. Özkahya, On 3-uniform hypergraphs without a cycle of a given length, Discrete Applied Mathematics
216, Part 3 (2017), 582–588.
FS224
Z. Füredi and M. Simonovits,
The history of degenerate (bipartite) extremal graph problems,
Bolyai Math. Studies 25 pp. 169–264,
Erdős Centennial (L. Lovász, I. Ruzsa, and V. T. Sós, Eds.) Springer, 2013.
Also see: arXiv:1306.5167.
grz
A. Grzesik, On the maximum number of five-cycles in a triangle-free graph, J. Combin. Theory Ser. B. 102.5 (2012), 1061-1066.
GL E. Győri and H. Li, The maximum number of triangles in C_2k+1-free graphs, Combinatorics,
Probability and Computing 21(1-2) (2012), 187–191.
raz
H. Hatami, J. Hladký, D. Král', S. Norine, and A. Razborov, On the number of pentagons in triangle-free graphs, J. Combin. Theory Ser. A. 120 (2013) no. 3, 722–732.
Kopy
G. N. Kopylov,
Maximal paths and cycles in a graph,
Dokl. Akad. Nauk SSSR 234 (1977), 19–21.
(English translation: Soviet Math. Dokl. 18 (1977), no. 3, 593–596.)
Lewin
M. Lewin,
On maximal circuits in directed graphs,
J. Combin. Theory Ser. B. 18 (1975), 175–179.
core B. Pittel, J. Spencer, and N. Wormald, Sudden emergence of a giant k-core in a random graph, J. Combin. Theory Ser. B. 67 (1996), 111–151.
Woodall
D. R. Woodall,
Maximal circuits of graphs I,
Acta Math. Acad. Sci. Hungar. 28 (1976), 77–80.
|
http://arxiv.org/abs/1701.07468v1 | 20170125200538 | Symplectic resolutions for Higgs moduli spaces | [
"Andrea Tirelli"
] | math.AG | [
"math.AG",
"math.SG"
] |
In this paper, we study the algebraic symplectic geometry of the singular moduli spaces of Higgs bundles of degree 0 and rank n on a compact Riemann surface X of genus g. In particular, we prove that such moduli spaces are symplectic singularities, in the sense of Beauville <cit.>, and admit a projective symplectic resolution if and only if g=1 or (g, n)=(2,2). These results are an application of a recent paper by Bellamy and Schedler <cit.> via the so-called Isosingularity Theorem.
[
Andrea Tirelli
December 30, 2023
=====================
§ INTRODUCTION
In this paper we show how a recent result of Bellamy and Schedler <cit.> on symplectic resolutions of quiver and character varieties can be used to derive information on symplectic resolutions for the moduli space M_H(X, n) of semistable Higgs bundles of degree 0 and rank n on a compact Riemann surface X of genus g. In particular, we prove that for g > 1 and (g, n)≠ (2,2) the aforementioned moduli space does not admit such a resolution. On the other hand, we show that, in the case of elliptic curves, _H(X, n) does admit a symplectic resolution (note that in the case (g, n)=(2,2) such a resolution was constructed in <cit.>). For the proof of these results, a central tool is the so-called Isosingularity Theorem, proved by Simpson in the seminal paper <cit.>.
Higgs bundles and symplectic resolutions have become ubiquitous throughout algebraic and differential geometry, representation theory and mathematical physics. For instance, Higgs bundles, which first emerged thirty years ago in Nigel Hitchin’s study of the self-duality
equations on a Riemann surface <cit.> and in Carlos Simpson’s work on nonabelian Hodge theory, <cit.>, play a role in many different areas of mathematics, including gauge theory, Kähler and hyperkähler geometry, surface group representations, integrable systems, nonabelian Hodge theory, the
Deligne–Simpson problem on products of matrices, and (most recently) mirror symmetry and Langlands duality. On the other hand, the theory of (conical) symplectic resolutions has been widely studied not only in mathematics, but also in physics, and has applications and connections to representation theory, symplectic geometry, quantum cohomology, mirror symmetry, and equivariant cohomology. For a survey on some of these connections, see, e.g., <cit.> and <cit.>.
Symplectic resolutions for Higgs bundles have been considered by Kiem and Yoo in <cit.>: in their work they prove that _H(X, 2) admits a symplectic desingularization if and only if g=2.
As a special case of our main theorem, we obtain a new short proof of the aforementioned result from <cit.> that there is no symplectic resolution of M_H(X, n) when n=2 and g≥ 3.
The paper is organized as follows: in Section <ref> we recall some well known results on Higgs bundles and their moduli spaces and give the relevant definitions of symplectic singularities and symplectic resolutions; then, in Section <ref>, we recall Simpson's Isosingularity Theorem, which is the central result needed for our proof; in Section <ref>, we state the theorem of <cit.> on the non-existence of symplectic resolutions for a certain class of character varieties and explain how one can formulate and prove an analogous statement for the moduli space of Higgs bundles. Moreover, in addition to the (g, n)=(2,2) case, for which we already know from <cit.> the existence of a symplectic resolution, we prove that, in the elliptic curve case, one has a symplectic resolution.
[Theorems <ref>, <ref> and <ref> below] The following holds true:
(A) The moduli spaces M_H(X, n) are symplectic singularities.
(B) They admit projective symplectic resolutions exactly in the cases g=1 and (g, n)=(2,2).
Part (A) of the above result is proved by using Namikawa's criterion <cit.>, Simpson's Isosingularity Theorem (Theorem <ref>) and the hyperkähler structure on the moduli space of stable Higgs bundles. Furthermore, part (B) follows from a combination of the aforementioned Isosingularity Theorem and (a formal analogue of) one of the main results in <cit.> (Theorem <ref>). Finally, to settle the elliptic curve case, a result of Franco <cit.> gives a clear geometric description of the moduli space M_H(X, n) when X is an elliptic curve, which allows us to see that M_H(X, n) does admit a symplectic resolution (Theorem <ref>).
§.§ Acknowledgements
The author wishes to thank his PhD supervisor Dr Travis Schedler for the support and guidance given in studying the subject of the present article, Laura Schaposnik for useful comments on a first draft of this paper and Emilio Franco, Indranil Biswas, Marina Logares, Tamas Hausel, Ben Davison and Richard Wentworth for useful conversations on this project. This work was supported by the Engineering and Physical Sciences Research Council [EP/L015234/1],
The EPSRC Centre for Doctoral Training in Geometry and Number Theory (The London School of Geometry and Number Theory), Imperial College London and University College London.
§ CHARACTER VARIETIES, HIGGS BUNDLES AND SYMPLECTIC RESOLUTIONS
In this section we will give the main definitions and results to fix the set up of the paper. For the sake of brevity, we will not give the proofs of any of the statements mentioned, for which we shall refer the reader to the relevant references.
§.§ Character varieties
Let X be a smooth complex projective curve of genus g, and let G be a reductive algebraic group over C. Consider the space Y_G=Hom(π_1(X), G) of homomorphisms from the fundamental group of X to the group G. Using the presentation by generators and relations of π_1(X), we can give Y_G the structure of affine variety: indeed, we know that
π_1(X)≅⟨ a_1, …, a_g, b_1,…, b_g⟩/R,
where R is the relation R=∏_i=1^g[a_i, b_i] and [a, b] denotes the commutator [a, b]:=aba^-1b^-1. Then, we can embed Y_G into G^2g as follows:
Y_G→ G^2g, ρ↦ (ρ(a_1), …, ρ(b_g)),
which is equivalent to considering Y_G as the subvariety of G^2g cut out by the equation
∏_i=1^g[A_i, B_i]=1,
for A_i, B_i ∈ G, i=1, …, g.
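To make the defining equation concrete, the following small numerical sketch (ours, written in Python with numpy; the function name is illustrative) checks whether a tuple of invertible matrices satisfies the relation cutting out Y_G inside G^2g for G=GL(n, C):

import numpy as np

def satisfies_surface_relation(A, B):
    # Check prod_i [A_i, B_i] = 1 for invertible complex matrices A_i, B_i.
    n = A[0].shape[0]
    prod = np.eye(n, dtype=complex)
    for Ai, Bi in zip(A, B):
        prod = prod @ Ai @ Bi @ np.linalg.inv(Ai) @ np.linalg.inv(Bi)
    return np.allclose(prod, np.eye(n))

# genus 2, rank 2: commuting (diagonal) matrices always give a point of Y_G
A = [np.diag([1.0 + 0j, 2.0]), np.diag([3.0 + 0j, 4.0])]
B = [np.diag([5.0 + 0j, 6.0]), np.diag([7.0 + 0j, 8.0])]
print(satisfies_surface_relation(A, B))  # True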
The group G acts by conjugation on Y_G, and one may define the G-character variety of X as the categorical quotient <cit.>
X(g, G)=Y_G // G.
Algebraically, this is just
X(g, G)=Spec(C[Y_G]^G),
the spectrum of the ring of G-invariant functions on Y_G.
In the notation X(g, G) we omitted X since the character variety depends only on the topology of X, i.e. on its genus, and not on the complex structure.
Despite their simple definition, character varieties have a very rich geometry and have been the subject of a large body of literature. For the purposes of this paper, we will be interested in the case where G=GL(n, C) and we will use the notation X(g, n) for the character variety X(g, GL(n, C)), where X is a compact Riemann surface of genus g.
§.§ Higgs bundles
In what follows we shall give a brief overview of Higgs bundles and their moduli spaces. Our main references are <cit.> and the seminal papers of Hitchin <cit.> and Simpson <cit.>.
A Higgs bundle on X is a pair (E, Φ), where E is a holomorphic vector bundle on X and Φ, the Higgs field, is an End(E)-valued 1-form on X, i.e. Φ∈ H^0(End(E)⊗Ω_X^1).
In order to have a moduli space with a meaningful geometric structure, one has to consider bundles of fixed rank and degree which satisfy the following stability condition.
A Higgs bundle (E, Φ) on X is semistable if for any subbundle F of E such that Φ(F)⊂ F⊗Ω_X^1, one has
μ(F)≤μ(E),
where μ(E):=deg(E)/rank(E) is the slope of a vector bundle. The Higgs bundle (E, Φ) is said to be stable if the strict inequality holds. Moreover, (E, Φ) is said to be polystable if it is either stable or a direct sum of stable Higgs bundles with the same slope.
Explaining the origin of the notion of Higgs bundles and the use of such terminology, first introduced by Hitchin, is beyond of the scope of this paper, and thus we would like only to highlight that Higgs bundles were defined in the context of the study of certain self-duality equations on a Riemann surface. The interested reader may wish to refer to the original paper <cit.> and to the references mentioned in the introduction for further details.
As mentioned before, imposing the (semi)stability condition makes it possible to have a moduli space with sufficiently nice properties. The construction of such a moduli space was first carried out by Hitchin, in the rank 2 case <cit.>, and generalized to arbitrary rank by Nitsure <cit.>, via the use of Geometric Invariant Theory. We summarise this in the following theorem, which describes the structure of the moduli space in the case g(X)≥ 2.
<cit.> Let g≥ 2 be an integer and let M_H(X, n, d) be the set of S-equivalence classes of semistable Higgs bundles of rank n and degree d on a smooth projective curve X of genus g. Then, M_H(X, n, d) is a quasi-projective variety, which contains the moduli space of stable Higgs bundles M^s_H(X, n, d) as an open smooth subvariety.
For the sake of conciseness we omit the definition of S-equivalence between Higgs bundles, for which we refer the reader to <cit.>. What will be important for us is that, in an S-equivalence class of semistable Higgs bundles, there is only one (up to isomorphism) polystable Higgs bundle; see <cit.> for a proof in the degree 0 case. Thus, at least for d=0, we may think of M_H(X, n, d) as the moduli space of isomorphism classes of polystable Higgs bundles of rank n and degree d.
In what follows we will use the notation M_H(X, n) for the moduli space M_H(X, n, 0). M_H(X, n) is precisely the object of study of this paper, and in Section <ref> we will characterize the singularities of such a quasi-projective variety and determine when it admits a symplectic resolution. For this, we shall first recall some basic facts about symplectic resolutions.
§.§ Symplectic resolutions
The theory of symplectic singularities and symplectic resolutions was first defined by Beauville in <cit.> and since then it has seen a tremendous development, see e.g. <cit.>.
Let Y be a normal algebraic variety over C. Then, we say that Y is a symplectic singularity if the smooth locus of Y, U=Y∖ Y^sing, carries a holomorphic symplectic 2-form ω_U such that, for every resolution of singularities ρ:Ỹ→ Y, the pull-back ρ^*ω_U extends to a holomorphic 2-form on Ỹ.
One could alternatively define symplectic singularities by requiring the existence of a resolution of singularities that satisfies the condition of the above definition. It turns out that this is equivalent to requiring the pull-back of the symplectic form to extend for every resolution of singularities.
Given a symplectic singularity (Y, ω_U), we say that a resolution ρ:Ỹ→ Y is symplectic if the extension of ρ^*ω_U is a holomorphic symplectic 2-form.
The reader should refer to <cit.>, for a list of examples of symplectic singularities, symplectic resolutions and an account on symplectic singularities which do not admit symplectic resolutions. In this context, Section <ref> provides a further example of a symplectic singularity which does not admit a symplectic resolution. For related examples, see, e.g., <cit.> on quotient singularities.
Note that, following Beauville <cit.>, we define symplectic singularities using holomorphic 2-forms. One can also define them requiring the symplectic form to be algebraic, and it appears that the symplectic structures defined on the moduli spaces under consideration are indeed algebraic.
§ THE ISOSINGULARITY THEOREM
In this section, we will state one of the two crucial results needed in the proof of our main result, known as the Isosingularity Theorem, and proved by Simpson in his seminal paper <cit.>, where the general version of the nonabelian Hodge correspondence is given. As suggested by the name, the nonabelian Hodge correspondence can be thought of as a nonabelian version of the well-known Hodge Theorem, which gives an isomorphism between the de Rham cohomology H^n_dR(X, C) and the Dolbeault cohomology ⊕_p+q=nH^q(X,Ω^p) of a compact Kähler manifold X. Putting this result together with the classical de Rham Theorem, one obtains isomorphisms:
H_B^n(X, C)≅ H^n_dR(X, C)≅⊕_p+q=nH^q(X, Ω^p).
In the nonabelian Hodge correspondence, C is substituted by a complex algebraic group G and the above cohomology spaces are replaced by the so-called Betti, de Rham and Higgs moduli spaces respectively, which all have a much richer geometric structure than their abelian counterparts. More explicitly, in the case of G=GL(n, C), one considers the following spaces.
* The Betti moduli space M_B(X, n): the space of representations π_1(X)→ GL(n, C) modulo the conjugation action of G, which is also known as the character variety X(g, n), recalled in Section <ref>;
* The de Rham moduli space M_dR(X, n): the moduli space of flat rank n vector bundles on X;
* The Higgs moduli space M_H(X, n): the moduli space of semistable Higgs bundles of degree 0 and rank n, which we recalled in Section <ref>.
From the work of Hitchin <cit.>, Donaldson <cit.>, Corlette <cit.> and Simpson <cit.>, we know that there are isomorphisms of sets of points
M_B(X, n)≅M_dR(X, n)≅M_H(X, n). ⋆
Moreover, the nonabelian Hodge correspondence states that much more is true: indeed, the following is a consequence of results in <cit.>.
Denote by ϕ: M_B(X, n)→M_dR(X, n) and ψ:M_dR(X, n)→M_H(X, n) the bijections (⋆). Then, one has that
(1) ϕ is an isomorphism of the associated complex analytic spaces;
(2) ψ is a homeomorphism of topological spaces.
The fact that the map in Theorem <ref>(1) is not just a set-theoretic bijection, but an isomorphism of complex analytic spaces, is a key ingredient in the proof of our main result, as it enables us to transfer the formal isomorphism given by the Isosingularity Theorem to another formal isomorphism between different spaces. This will become clear in Section <ref> below.
In what follows we shall state the Isosingularity theorem in a form that is slightly different from the original statement, but which is equivalent and more suitable for our purposes. The reader should refer to <cit.> for a proof of these results.
For any point x∈ M_dR(X, n) there is a canonical isomorphism between the formal completions of M_dR(X, n) at x and M_H(X, n) at ψ(x).
One can deduce an important corollary from the above theorem using a result of Artin (<cit.>): indeed, using this result one can prove that Theorem <ref> implies that M_H(X, n) and M_B(X, n) are locally étale isomorphic at corresponding points. We will make use of this consequence later in the paper when studying the singularities of M_H(X, n).
As a consequence, one can relate formal completions of the spaces M_B(X, n) and M_H(X, n) at corresponding points. To this end, one needs the following result, which relates formal and analytic completions of a variety at a point. The reader should refer to <cit.> for a proof of this result and a detailed treatment of the relations between the complex algebraic and the analytic points of view.
Let V be an algebraic variety and let V^an denote the space V considered as a complex analytic space. For x a point in V, there is an isomorphism of locally ringed spaces
V_x≅V_x^an.
With the above proposition at hand, one can prove the following theorem, which is the crucial tool to transfer the results about symplectic resolutions from the context of character varieties to that of Higgs bundle moduli spaces.
There is an isomorphism between the formal completions of the spaces M_B(X, n) and M_H(X, n) at corresponding points.
The result is a consequence of Theorem <ref>, Proposition <ref> and part (1) of Theorem <ref>. Indeed, for x a point in M_B(X, n) one has the following chain of isomorphisms
M_B(X, n)_x≅M_B(X, n)_x^an≅M_dR(X, n)_x'^an≅M_dR(X, n)_x'≅M_H(X, n)_x”,
where x'=ϕ(x) and x”=ψ(x') and the first and the third isomorphisms come from Proposition <ref>, the second is a consequence of Theorem <ref>, and the fourth is precisely the Isosingularity Theorem.
§ SYMPLECTIC RESOLUTIONS: FROM CHARACTER VARIETIES TO HIGGS BUNDLES
§.§ Proof of the main result
We first recall results of Bellamy and Schedler <cit.> about the nature of the singularities of X(g, n) and the existence of symplectic resolutions. Then, via Theorem <ref>, we prove that the analogous result holds for the variety M_H(X, n). We will use Simpson's notation M_B(X, n) for the character variety X(g, n).
<cit.> The Poisson variety M_B(X, n) is a symplectic singularity.
<cit.> Suppose g>1 and (g, n)≠ (2,2). Then, the symplectic singularity M_B(X, n) does not admit a symplectic resolution.
Although we will not prove the above results, one should note that two different strategies are proposed in <cit.> for their proofs:
(A) Since a symplectic resolution is a crepant resolution, to prove that the former cannot exist, it suffices to prove that the latter does not exist. To this end, it is well-known that if Y is a normal variety which is factorial and has terminal singularities, then Y does not admit a crepant resolution. In <cit.> it is shown that M_B(X, n) has these properties under the assumptions of Theorem <ref>;
(B) one may also prove Theorem <ref> by noting that if a symplectic resolution exists, then the same is true for the formal (or étale) neighbourhood at every point. This gives an alternative proof of the above result because in <cit.> it is pointed out that the formal neighbourhood of (0, …, 0) in the SL(n, C)-character variety is isomorphic to the formal neighbourhood of a certain quiver variety, which in turn, from the proof of <cit.>, does not admit a symplectic resolution when g>1 and (n, g)≠ (2, 2).
To prove Theorem <ref>, we will adopt strategy (B) via the use of Theorem <ref>, and then apply strategy (A) étale locally.
<cit.> Let Y be a complex algebraic variety. Then Y is a symplectic singularity if and only if Y has rational Gorenstein singularities and the regular locus U of Y admits an everywhere non-degenerate holomorphic closed 2-form.
We now recall an important property of the moduli space M_H(X, n), proved by Simpson in <cit.>.
<cit.> The moduli space M_H(X, n) is irreducible and of dimension 2n^2(g-1)+2.
From the above, one can prove a key fact about the singularities of the moduli space M_H(X, n). In order to do this, we will need the following result, which gives an estimate on the codimension of the strictly semistable locus of the moduli space M_H(X, n), i.e. the locus given by Higgs bundles which are semistable but not stable, M_H^stps(X, n)=M_H(X, n)∖ M_H^s(X, n). By the results mentioned in Remark <ref>, M_H^stps(X, n) coincides with the locus of strictly polystable Higgs bundles, hence the notation M_H^stps(X, n).
Assume g≥ 2. Then the following holds true:
codim(M_H^stps(X, n))≥ 2.
In this proof we will use the shortened notations M_n and M_n^stps for M_H(X, n) and M_H^stps(X, n), respectively. The first step is to give an explicit description of the locus M_n^stps: by Definition <ref>, a polystable Higgs bundle which is not stable is a direct sum of stable Higgs bundles of the same slope. Therefore, for any partition π=(n_1, …, n_k) of n there is a set-theoretic map
ν_π: M_π:=M^s_n_1×…× M^s_n_k→ M_n, ((E_1, ϕ_1), …, (E_k, ϕ_k))↦(⊕ E_i, ⊕ϕ_i).
Note that this map ν_π is algebraic: this follows from the fact that taking direct sums is functorial and well-defined on isomorphism classes of Higgs bundles. Up to the action of a permutation group on the image, the map ν_π is injective, so that
dim(M_π)=dim Im(ν_π).
Moreover, it is clear that
M_n^stps=⋃_π∈P(n), ℓ(π)≥ 2 Im(ν_π),
where P(n) denotes the set of partitions of n and ℓ(π) the number of parts of π. Therefore, the following holds true:
dim M_n^stps=max_π∈P(n), ℓ(π)≥ 2{dim M_π}.
Let π=(n_1, …, n_k)∈P(n) be a partition at which the maximum above is attained and let k=ℓ(π) be its length: note that 2≤ k≤ n. Then, the above equality can be written as
dim M_n^stps= ∑_i=1^k(2n_i^2(g-1)+2).
The desired estimate can be written as
dim M_n^stps≤ dim M_n-2,
which, using the calculation above, is
∑_i=1^k(2n_i^2(g-1)+2)=2(g-1)(∑_i=1^k n_i^2)+2k≤ 2(g-1)(∑_i=1^k n_i)^2.
Since each n_i≥ 1, we have (∑ n_i)^2-∑ n_i^2=2∑_i<j n_i n_j≥ k(k-1), so that 2(g-1)((∑ n_i)^2-∑ n_i^2)≥ 2k(k-1)≥ 2k, using g≥ 2 and k≥ 2. Thus the inequality holds and the proof is concluded.
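As a quick sanity check on this inequality, the following Python sketch (ours, purely illustrative and not part of the proof) verifies dim M_π ≤ dim M_n - 2 over all partitions π of n with at least two parts, for small values of g and n:

def partitions(n, max_part=None):
    # Yield all partitions of n as non-increasing tuples.
    max_part = max_part or n
    if n == 0:
        yield ()
        return
    for p in range(min(n, max_part), 0, -1):
        for rest in partitions(n - p, p):
            yield (p,) + rest

def check(g, n):
    # dim M_n = 2 n^2 (g-1) + 2; each stratum has dim sum_i (2 n_i^2 (g-1) + 2).
    dim_total = 2 * n * n * (g - 1) + 2
    for pi in partitions(n):
        if len(pi) < 2:
            continue  # the trivial partition corresponds to the stable locus
        dim_stratum = sum(2 * m * m * (g - 1) + 2 for m in pi)
        assert dim_stratum <= dim_total - 2, (g, n, pi)

for g in range(2, 6):
    for n in range(2, 10):
        check(g, n)
print("codimension >= 2 verified for 2 <= g <= 5, 2 <= n <= 9")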
Assume that g(X)≥ 2. Then, the moduli space M_H(X, n) is a symplectic singularity.
In order to prove the theorem one needs to verify that the hypotheses of Proposition <ref> are satisfied. To this end, one needs to prove that M_H(X, n) is normal, its singularities are rational Gorenstein, and the smooth locus of M_H(X, n) admits a holomorphic symplectic form. The first two properties are proved by noting that being normal and rational Gorenstein are étale local properties and, thus, by the Isosingularity Theorem, M_H(X, n) is normal and has rational Gorenstein singularities if and only if the same holds for M_B(X, n): but this is true from Theorem <ref>. For the third part, it is a well known result (<cit.>, see also <cit.>) that M_H^s(X, n) admits a hyperkähler structure and, thus, a holomorphic symplectic structure (this is true, more generally, for Higgs bundles of arbitrary degree d). Such a holomorphic symplectic structure can be extended to the smooth locus: indeed, by Lemma <ref> we know that the codimension of the complement of the stable locus inside the smooth locus is at least 2, and the stable locus is dense in the smooth locus since the latter is irreducible (by Proposition <ref>). Therefore, all the hypotheses of Proposition <ref> are satisfied and the theorem is proved.
We are now ready to prove the main result of the paper.
When g>1 and (g, n)≠ (2,2), the moduli space M_H(X, n) does not admit a symplectic resolution.
Suppose by contradiction that when g>1 and (g, n)≠ (2,2) such a resolution ρ: M̃→ M_H(X, n) exists and let x be the point in M_H(X, n) that corresponds to the trivial representation, call it Id, in the Betti moduli space M_B(X, n) via the homeomorphism given by Theorem <ref>(2). Then, by the Isosingularity Theorem, there is an étale neighbourhood V of x in M_H(X, n) isomorphic to an étale neighbourhood U of Id in M_B(X, n). Furthermore, by assumption, via this étale local isomorphism, from the resolution ρ, one can construct a symplectic resolution ρ̃ of U. But U is factorial and terminal: factoriality of U follows from the proof of <cit.>; on the other hand, the fact that U has terminal singularities follows from <cit.>. Since this is a contradiction, the theorem follows.
Given the above theorem and the resolution constructed in the case (g, n)=(2, 2) in <cit.>, the only case left to consider is that of elliptic curves. Recall that, from <cit.>, it is possible to show that, for X an elliptic curve, the character variety X(1, n) does admit a symplectic resolution (see <cit.>). Moreover, an analogous result can be obtained for the moduli space M_H(X, n) using a result shown in E. Franco's PhD Thesis <cit.> (see also <cit.>), which relies on techniques from <cit.>.
<cit.> Consider the moduli space M_H(X, n, d), where X is an elliptic curve, and let h=gcd(n, d). Then, there exists an isomorphism
α_n, d: Sym^h T^*X→ M_H(X, n, d).
With such an explicit geometric description of M_H(X, n, d), one can prove the following result.
Let X be an elliptic curve. Then, the moduli space M_H(X, n) is a symplectic singularity and it admits a symplectic resolution
Hilb^n T^*X⟶ M_H(X, n).
We know from Theorem <ref> that the moduli space M_H(X, n) is isomorphic to the n-th symmetric power of the cotangent bundle of the elliptic curve X. Via this isomorphism, we can induce a (generic) symplectic structure on M_H(X,n) so that, by definition, the isomorphism α_n, 0 is, generically, a symplectomorphism. Moreover, it is a well-known fact (<cit.>) that, given a smooth symplectic surface S, for any n≥ 1, the variety Sym^nS is a symplectic singularity and there exists a symplectic resolution
Hilb^nS→Sym^nS.
Therefore, setting S=T^*X the theorem follows.
E. Franco has pointed out to us that the map α_n, d is actually a symplectomorphism over the stable locus, using the hyperkähler structure on M_H(X, n), so that the generic symplectic structure constructed in Theorem <ref> is the usual one.
§.§ Future directions
A natural question that one may want to address is whether there are analogous results in the context of twisted character varieties and moduli spaces of semistable Higgs bundles of arbitrary degree. The former varieties are defined as follows: taking g, n and d to be non-negative integers, the twisted character variety X(g, n, d) is
X(g, n, d):={ (A_1, …, A_g, B_1, …, B_g) ∈ GL(n, C)^2g | ∏_i=1^g[A_i, B_i]=e^2π i d/nI} // GL(n, C).
In this case, much less is known: indeed, it is not even clear whether the formal non-abelian Hodge correspondence holds true. Moreover, it is not known whether the twist introduced in the previous definition affects the nature of the singularities. To prove the analogue of Theorem <ref>, due to the lack of an Isosingularity theorem in this case, it may be better to use strategy (A) outlined above. On the other hand, it is an interesting open question whether such an Isosingularity statement holds in this more general setting. A detailed study will appear in future work.
|
http://arxiv.org/abs/1701.07497v2 | 20170125214414 | A Type B Analogue to Ribbon Tableaux | [
"Ezgi Kantarcı Oğuz"
] | math.CO | [
"math.CO"
] |
Department of Mathematics
University of Southern California
Los Angeles, CA
kantarci@usc.edu
We introduce a shifted analogue of the ribbon tableaux defined by James and Kerber <cit.>. For any positive integer k, we give a bijection between the k-ribbon fillings of a shifted shape and regular fillings of a ⌊ k/2⌋-tuple of shapes called its k-quotient. We define the corresponding generating functions, and prove that they are symmetric, Schur positive and Schur Q-positive. Then we introduce a Schur Q-positive q-refinement.
§ INTRODUCTION
The study of ribbon tableaux on shifted shapes combines two existing areas of work: the theory of ribbon tableaux and Schur's Q-functions. Ribbon tableaux, introduced by James and Kerber <cit.>, have applications to the representations of the symmetric group over a field of finite characteristic. Their theory was extended to the LLT polynomials by Lascoux, Leclerc and Thibon, which arise in the Fock space representation of the universal enveloping algebra of quantum affine 𝔰𝔩_n <cit.>. An expansion of Macdonald polynomials in terms of the LLT polynomials is given in <cit.>, and many other important symmetric functions have natural expansions into LLT polynomials.
Schur's Q-functions come up as the symmetric functions that correspond to the shifted diagrams. They have a connection to the irreducible spin characters of the symmetric group, analogous to that of Schur functions and irreducible characters of linear representations <cit.>. Since their introduction in <cit.>, applications to diverse mathematical fields have been discovered, including the
cohomology of isotropic Grassmannians <cit.> and polynomial solutions to the BKP equation in hydrodynamics.
In this work, we are merging these two ideas to initiate a combinatorial theory of ribbon tableaux for shifted shapes. The k-quotients and k-cores for shifted shapes were previously studied by Morris and Yaseen in 1986 <cit.>. We expand upon their work, reformulating it in a more explicit way that is analogous to the ribbon tilings of unshifted shapes due to James and Kerber <cit.>. We also look at standard and semi-standard fillings of these shapes, and define shifted k-ribbon functions. We give a positive expansion in terms of Schur's Q-functions, analogous to the unshifted case.
The positivity result hints at the possibility of defining a type B analogue for the LLT polynomials, which could have far-reaching applications. We give such a definition using Type A LLT polynomials and prove its Schur Q-positivity. We further show that there is no natural expansion of the spin statistic that would allow a direct definition using shifted ribbon tableaux, and provide some counterexamples that should prove valuable for further research.
The layout of this paper is as follows: In Section 2, we recall the notions of Schur functions, Schur's Q-functions and ribbon tableaux. In Section 3, we give a graphical description of k-ribbons on a shifted diagram, which differs from the standard case in that we have some 'double ribbons', which are allowed to contain 2×2 boxes. We define the shifted k-ribbon tableaux and the corresponding P- and Q-functions, as well as state our main theorem giving an expansion of a shifted ribbon Q-function in terms of Schur's Q-functions. Sections 4 and 5 give the combinatorial constructions necessary to prove this, including a new type of object that comes up in shifted k-quotients which we call folded tableaux. We give bijections between ribbon fillings of a shifted diagram and its k-quotient, both in the standard and semi-standard case. In Section 6, we give a description of the shifted ribbon functions in terms of peak functions. Lastly, in Section 7, we define a q-refinement of the shifted ribbon function and prove its Schur Q-positivity.
§ PRELIMINARIES
§.§ Schur Functions
A partition of n is a list μ=(μ_1,μ_2,…,μ_k) of non-increasing positive integers adding up to n, called its parts. Here, n is called the size of the partition, denoted |μ|, and the number of its parts is called its height denoted ht(μ). With every partition, we associate a Young diagram, an array with μ_i boxes on row i.
A semi-standard Young tableau of shape μ is a filling of its boxes with positive integers such that each column will be increasing from bottom to top, and each row will be non-decreasing from left to right. A semi-standard tableau that contains each of the numbers from 1 to n exactly once is called standard. We will denote the set of semi-standard tableaux of shape μ by SSYT(μ), and the set of standard ones by SYT(μ).
For a partition μ we define its Schur function as follows:
s_μ(X)=∑_T∈ SSYT(μ)X^T
Here X^T denotes the monomial where the power of x_i is given by the number of times i occurs in T. The semi-standard filling in Figure <ref> corresponds to the monomial x_1x_2^3x_3x_4^2x_5x_7.
The reading word of a tableau is a reading of all its labels from left to right, top to bottom.
For example, the semi-standard tableau from Figure <ref> has the reading word 547231224, whereas the standard one has the reading word 748261359.
Note that the reading word of a standard tableau S gives a permutation of numbers from 1 to |n|, so we can talk about its descent, peak and spike sets. The descent set of a standard tableau T is defined as follows:
Des(T) = {i| i+1 is to the left of i in the reading word of T}⊂ [n-1]
In general, for any set D ⊂ [n], the peak and spike sets of D are given by:
Peak(D) = {i>1| i∈D and i-1 ∉D}
Spike(D) = {i>1| either i∈D and i-1 ∉D, or i∉D and i-1 ∈D}.
Throughout this work, we will mainly be interested in the case when D is the descent set of the reading word for a tableau. For a tableau T, we will use the notations Peak(T) and Spike(T) to denote Peak(Des(T)) and Spike(Des(T)) respectively. The standard tableau from Figure <ref> has descent, peak and spike sets {1,3,5,6}, {3,5} and {2,3,4,5,7} respectively.
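These sets are straightforward to compute from a reading word; the following Python sketch (ours; the function names are illustrative) reproduces the running example:

def descent_set(word):
    # Des = {i : i+1 appears to the left of i in the word}.
    pos = {v: i for i, v in enumerate(word)}
    return {i for i in range(1, len(word)) if pos[i + 1] < pos[i]}

def peak_set(D):
    # Peak(D) = {i > 1 : i in D and i-1 not in D}.
    return {i for i in D if i > 1 and i - 1 not in D}

def spike_set(D):
    # Spike(D) = {i > 1 : exactly one of i, i-1 lies in D}.
    top = max(D, default=1) + 1
    return {i for i in range(2, top + 1) if (i in D) != (i - 1 in D)}

word = [7, 4, 8, 2, 6, 1, 3, 5, 9]       # reading word of the standard tableau
D = descent_set(word)                    # {1, 3, 5, 6}
print(D, peak_set(D), spike_set(D))      # then {3, 5} and {2, 3, 4, 5, 7}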
s_μ(X)=∑_T∈ SYT(μ)F_Des(T)(X)
where F_D(X), D⊆[n-1], denotes Gessel's fundamental basis for quasisymmetric functions defined by:
F_D(X)=∑_i_1≤ i_2≤⋯≤ i_m, i_t< i_t+1 for t∈ D x_i_1x_i_2x_i_3⋯ x_i_m
This formula allows us to calculate the Schur function of a partition using only its standard fillings. For example, the Schur function of (3,2), whose standard fillings are given in Figure <ref>, is:
s_(3,2)(X)=F_2(X)+F_3(X)+F_1,3(X)+F_1,4(X)+F_2,4(X)
§.§ Ribbon Tableaux
Given two diagrams μ⊂ν, the skew diagram ν / μ is the diagram of ν minus the cells that correspond to μ. A k-ribbon on an unshifted diagram is a connected skew-diagram of size k that contains no 2×2 square. A k-ribbon R is removable from diagram μ if μ / ν=R for some ν⊂μ. A diagram with no removable k-ribbon is called a k-core.
On a given k-ribbon R, the rightmost lowest cell is called the head of R. A set of disjoint ribbons form a horizontal strip if their disjoint union is a (not necessarily connected) skew-shape and their heads lie on different columns. A semi-standard k-ribbon tableau of shape μ is a sequence of diagrams μ_0⊂μ_1⊂⋯⊂μ_n=μ where μ_0 is a k-core, and each μ_i/ μ_i-1 is a horizontal k-ribbon strip, the ribbons of which we label by i. A semi-standard k-ribbon tableau is called standard if all labels from 1 to n occur exactly once. The generating function for the k-ribbon tableaux of shape μ is given by:
GF_μ/μ_0^(k)(X)=∑_T ∈ SSRT_k(μ)X^T=∑_S ∈ SRT_k(μ)F_Des(S)X
where SSRT_k(μ) denotes the set of semi-standard k-ribbon tableaux of shape μ, and SRT_k(μ) denotes the set of standard ones.
James and Kerber <cit.> showed that there is a weight-preserving bijection between semi-standard ribbon tableaux of shape μ, and semi-standard fillings of a k-tuple of unshifted shapes (μ^0, μ^1,…,μ^k-1) called the k-quotient of μ. This shows that:
GF_μ/μ_0^(k)(X)=s_μ^0(X)s_μ^1(X)⋯ s_μ^k-1(X)
The spin of a ribbon R, defined by Lascoux, Leclerc and Thibon <cit.> is (|R|-ht(R)-1)/2, which is not necessarily an integer. For a semi-standard k-ribbon tableaux T of shape μ, we define the spin of T to be the sum of the spins of all ribbons on T. The cospin of T is given by spin(T*)-spin(T) where T* is the semi-standard k-ribbon tableaux of shape μ with the maximum spin. The cospin is an integer for every tiling T.
Multiplying each tableau by a variable q raised to its cospin gives us the LLT-polynomial:
GF^(k)_μ/μ_0(X;q)=∑_T ∈ SRT_k(μ) q^cospin (T) F_Des(T)(X)
The LLT-polynomials can be written as a sum of Schur polynomials with coefficients from ℤ^+[q] <cit.>.
§.§ Shifted Tableaux
A partition λ=(λ_1, λ_2,…,λ_k) is called strict if all its parts are distinct. With every strict partition, we associate a shifted diagram, which is an array with λ_i boxes on row i, where row i is shifted k-i steps to the right, forming a staircase shape. For any cell C on a shifted diagram, we define its diagonal value to be diag(C)=col(C)-row(C)+1. Note that the smallest diagonal value is 1 and is attained only at the leftmost diagonal which is denoted the main diagonal of λ.
A semi-standard shifted tableau of shape λ is a filling of its boxes with elements from the marked alphabet 1'<1<2'<2<3'<3<⋯ such that each row is non-decreasing from left to right with no repeated marked numbers, and each column is non-decreasing from bottom to top with no repeated unmarked numbers. If a semi-standard shifted tableau of shape λ contains each of the numbers 1,2,…,|λ| exactly once (possibly marked), it is called marked standard; if moreover all entries are unmarked, it is called standard. We will denote the set of semi-standard shifted tableaux of shape λ by SSShT(λ), the set of marked standard ones by SShT±(λ) and the set of the standard ones by SShT(λ). Figure <ref> includes the shifted diagram for λ=(4,3,1), as well as examples of semi-standard, marked standard and standard shifted tableaux. The tableaux given here are related by a standardization algorithm, which will be introduced in Section <ref>.
Schur's Q- and P-functions for a strict partition λ are defined as follows:
Q_λ(X) = ∑_S∈ SSShT(λ)X^|S|
P_λ(X) = 2^-ht(λ)∑_S∈ SSShT(λ)X^|S| = ∑_S∈ SSShT^*(λ)X^|S|
where S∈ SSShT^*(λ) denotes the set of semi-standard tableaux of shape λ with no marked entries on the main diagonal, and X^|S| is the monomial where the power of x_i is equal to the number of times i or i' occurs in S. The semistandard filling in Figure <ref>, for example, corresponds to the monomial x_1^2x_2^3x_3^2x_4.
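For small shapes these functions can be computed by brute force. The following Python sketch (ours, illustrative and exponential in the number of cells) enumerates semi-standard fillings in two variables; for λ=(3,1) it recovers Q_(3,1)(x_1,x_2)=4x_1^3x_2+8x_1^2x_2^2+4x_1x_2^3, the polynomial that also appears in the ribbon example of Section <ref>:

from itertools import product
from collections import Counter

PRIMED, UNPRIMED = 0, 1
ALPHABET = [(v, m) for v in (1, 2) for m in (PRIMED, UNPRIMED)]  # 1' < 1 < 2' < 2

def cells(shape):
    # Cells of the shifted diagram: row r (0-indexed) occupies columns r..r+shape[r]-1.
    return [(r, c) for r, part in enumerate(shape) for c in range(r, r + part)]

def is_semistandard(T):
    for (r, c), val in T.items():
        right = T.get((r, c + 1))
        if right is not None and (right < val or (right == val and val[1] == PRIMED)):
            return False   # rows weakly increase, no repeated primed entry in a row
        up = T.get((r + 1, c))
        if up is not None and (up < val or (up == val and val[1] == UNPRIMED)):
            return False   # columns weakly increase, no repeated unprimed entry
    return True

def Q_two_variables(shape):
    # Brute-force Q_shape(x1, x2): returns {(a, b): coefficient of x1^a x2^b}.
    cs = cells(shape)
    poly = Counter()
    for filling in product(ALPHABET, repeat=len(cs)):
        if is_semistandard(dict(zip(cs, filling))):
            weight = Counter(v for v, _ in filling)
            poly[(weight[1], weight[2])] += 1
    return dict(poly)

print(Q_two_variables((3, 1)))  # coefficients 4, 8, 4 at x1^3 x2, x1^2 x2^2, x1 x2^3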
The reading word of a shifted tableau is, like in the unshifted case, a reading of all its labels from left to right, top to bottom. These definitions
can be extended to the reading words of marked standard tableaux by
first moving all marked coordinates to the beginning and then
reversing their order and working with the corresponding word, so that Des(74'6'8123'5)=Des(36478125)={2,5}.
If i ∈ Des(T), then i∈ Des(T') if and only if i is unmarked in T'. If i ∉ Des(T), then i ∈ Des(T') if and only if i+1 is marked in T'.
This follows directly from the definition of descents on marked tableaux.
Like in the case of Schur functions, Schur's Q-functions can be expanded in terms of the fundemental quasisymmetric functions:
Q_λ(X)=∑_T'∈ SShT±(λ)F_Des(T')(X)
For this expansion, we only look at the marked standard tableaux of shape λ. An expansion that also eliminates the markings and only considers the standard fillings was given by Stembridge <cit.>:
Q_λ(X)=∑_T∈ SShT(λ)2^|Peak(T)|+1G_Peak(T)(X)
Here, the functions G_P, where P is a subset of {2,3,…,n-1} with no consecutive entries, are the peak functions defined in <cit.> by:
G_P(X)=∑_D⊆ [n-1], Spike(D) ⊃ P F_D(X)
Given two strict partitions μ and λ with μ⊂λ, the skew-shifted diagram λ\μ is the diagram for λ with the cells corresponding to the diagram of μ deleted.
We can apply the above definitions to the skew-shifted diagrams to get skew-shifted tableaux. More precisely, the set of semi-standard shifted tableaux of shape λ\μ, denoted SSShT(λ\μ), is given by all the fillings of λ\μ from the marked alphabet with non-decreasing columns and rows such that we have no unmarked numbers repeated along columns and no marked numbers repeated along rows. We will denote the marked standard fillings of λ\μ (where we use each number from 1 to n once, possibly marked) by SShT±(λ\μ) and its standard fillings (where we use each number from 1 to n once, unmarked) by SShT(λ\μ). This gives rise to a skew analogue for Schur's Q-function:
Q_λ\μ(X)=∑_S ∈ SSShT(λ\μ) X^|S|=∑_T' ∈ SShT±(λ\μ)F_Des(T')(X)=∑_T ∈ SShT(λ\μ)G_Peak(T)(X)
It was shown by Stembridge that the skew-shifted Q-functions expand positively into Schur's Q-functions:
(Stembridge <cit.>) There exist coefficients f^λ_μ,ν∈ℕ satisfying:
Q_λ\μ(X)=∑_νf^λ_μ,νQ_ν(X) Q_μ(X)Q_ν(X)=∑_λf^λ_μ,νQ_λ(X)
where f^λ_μ,ν=0 unless |μ|+|ν|=|λ|.
§ SHIFTED RIBBON TABLEAUX
§.§ Ribbons on Shifted Diagrams
On a shifted diagram, we call the columns strictly to the left of the last row its shifted region, and the rest its unshifted region. Note that the unshifted region uniquely determines the diagram [We deviate from convention in defining the shifted region, to define hook lengths more easily.].
The definition of the hook of a cell on a shifted diagram depends on whether the cell falls into the shifted region.
For any cell C in the unshifted region, the hook of C is the union of C, with the cells above it in its column, and the cells to its right in the row. For a cell in the shifted region, its hook additionally includes the row of cells directly above the highest cell in the column of C. The number of cells in its hook is called the hook length of C.
We define a single-ribbon on λ to be a connected skew-shifted diagram with each cell on a different diagonal (Equivalently, not containing any 2×2 square).
A single ribbon R is removable if λ -R is also a shifted diagram.
Two important notions about ribbons that we will use throughout the paper are their heads and tails. The cell with the highest diagonal value will be called the head of R, denoted by H(R), and the one with the lowest will be called the tail of R, denoted T(R). We will also use |R| for the size of a ribbon (the number of cells it contains) and ht(R) for the height of a ribbon (the number of rows of λ that it intersects).
Note that for a single-ribbon R, |R|=diag(H_R) -diag(T_R) +1
A double-ribbon of size k is a union of two disjoint single-ribbons R and S of sizes r≥ s with r+s=k such that the tail of R is on the main diagonal of λ, and the tail of S is on the main diagonal of λ - R, and their union forms a skew-shifted shape. A double ribbon Q is removable if λ -Q is also a shifted diagram.
The head of Q=(R,S) is the head of R, and its tail is the tail of S.
A k-ribbon is a single or double ribbon of size k.
For any removable k-ribbon R on λ, there is no cell on R strictly to the right or strictly below H(R).
As λ\ R is also a shifted diagram, if R includes a cell strictly to the right of H(R), it must also contain the cell immediately to the right of H(R) in the same row. Similarly, if R has a cell below H(R), it must also include the cell immediately below H(R) in the same column. In both cases, R has a cell with a higher diagonal value than H(R), giving us a contradiction.
A shifted diagram λ has a removable single k-ribbon with diag(H_R)=m+k if and only if it has a part of size m+k and no part of size m. If it exists, it is the unique ribbon R where λ-R has the part m+k replaced with m. Furthermore, λ has a removable double k-ribbon with diag(H_R)=a if and only if it has two parts of sizes a and k-a with k-a<a, the ribbon being the unique R where λ-R is λ with the two parts removed.
If λ has a part of size m+k and no part of size m, then removing the outermost cells from diagonals m+k to m+1 gives us a ribbon of size k, with diag(H_R)=m+k. As the ribbon is removable, λ\ R is itself a shifted diagram, which means there is no cell of λ\ R above T_R or to the left of H_R, and R contains the outermost cell of each diagonal from m+k to m+1.
As H_R is at the end of a row, and has diagonal value m+k, λ contains a part of size m+k. Similarly, T_R is on a row with at least m+1 cells, and as the row above has no cell above T_R it has less than m cells. The case for the double ribbon follows as the double ribbon is a union of two single ribbons, and a ribbon of size a with diag(H_R)=a will have its tail on the main diagonal.
A corollary of this proposition is that no shape can have a double ribbon of size (t,t).
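Proposition <ref> makes removability a purely arithmetic condition on the parts, which translates directly into code; a minimal Python sketch (ours) listing the shapes obtained from a strict partition by removing one k-ribbon:

def removable_ribbons(shape, k):
    # Singles: replace a part m+k by m, provided m is not already a part;
    # doubles: delete two parts a and k-a with k-a < a.
    parts = set(shape)
    results = []
    for p in shape:
        m = p - k
        if m == 0 or (m > 0 and m not in parts):
            new = [q for q in shape if q != p] + ([m] if m > 0 else [])
            results.append(tuple(sorted(new, reverse=True)))
    for a in parts:
        b = k - a
        if 0 < b < a and b in parts:
            results.append(tuple(q for q in shape if q not in (a, b)))
    return results

print(removable_ribbons((4, 2, 1), 3))  # [(4,)]: only the double ribbon (2, 1)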
A shifted diagram λ admits no removable k-ribbon iff it has no cells with hook length equal to k.
In this case, we call λ a k-core.
We claim that there is a bijection between removable k-ribbons and cells with hook length equal to k, where cells in the unshifted part correspond to single ribbons and cells in the shifted part correspond to double ribbons. Under this bijection it is clear that if a diagram admits no k ribbons, it can not have a cell with hook length k and vice versa.
Let C have hook length equal to k. First let us look at the case when C is in the unshifted part. Let R be the single k-ribbon consisting of the outermost cell on each diagonal that the hook of C passes. As the head and tail of R are the endpoints of the hook of C, R is removable. Conversely, if R is a single k-ribbon, the cell on the row of H(R) and the column of T(R) has hook length k and is on the unshifted part.
Now let us assume C is a cell in the shifted part, so that its hook includes the row above the highest cell in the column of C. Assume this row has length t. This means C is on a row of size k-t, and by Proposition <ref> the shape has a unique removable double ribbon with parts of sizes t and k-t. Conversely, if R is a removable double ribbon with parts of sizes t and k-t, the diagram has rows i and j with sizes t and k-t. The cell on row j and column right below row i falls on the shifted part and has hook length k.
§.§ The k-Abacus Correspondence
In this section, we will show that the ribbons on shifted tableaux can be expressed using the k-abacus notation of James and Kerber <cit.>.
A k-abacus consists of runners labeled by 1, 2, 3, …,k, and numbers placed on these runners as follows:
1 2 3 ⋯ k
1 2 3 ⋯ k
k+1 k+2 k+3 ⋯ 2k
2k+1 2k+2 2k+3 ⋯ 3k
⋯ ⋯ ⋯ ⋯ ⋯
Two runners i and j are called k-conjugate if i+j=k. To any shifted diagram λ =(λ_1,λ_2,…,λ_n) we will associate the k-abacus with beads on positions λ_1,λ_2,…,λ_n.
For example, for the diagram λ=(16, 11, 10, 9, 8, 7, 4, 3, 1) we get the 5-abacus:
a_1 a_2 a_3 a_4 a_5
∙ ∘ ∙ ∙ ∘
∘ ∙ ∙ ∙ ∙
∙ ∘ ∘ ∘ ∘
∙ ∘ ∘ ∘ ∘
∘ ∘ ∘ ∘ ∘
⋯ ⋯ ⋯ ⋯ ⋯
Given a strict partition λ, we identify each runner a_i in its abacus with a shifted shape α^(i), by treating the runners as the abacus of a shifted 1-core. More precisely, α^(i) will be the shifted shape (α^(i)_1,α^(i)_2,…,α^(i)_t) where k(α^(i)_1-1)+i,…,k(α^(i)_t-1)+i are the parts of λ that are equal to i modulo k. We will call the k-tuple (α^(1),α^(2),…,α^(n)) the abacus representation λ. The 5-abacus representation of the example (16, 11, 10, 9, 8, 7, 4, 3, 1) with the abacus above is (431,2,21,21,2).
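In code, the runner and row of a position p are determined by reduction modulo k; the following Python sketch (ours) computes the abacus representation and reproduces the example above:

def abacus(shape, k):
    # Position p sits on runner i = ((p-1) mod k) + 1, in row (p-i)/k + 1;
    # runner a_i is recorded as the strict partition of its bead rows.
    runners = [[] for _ in range(k)]
    for p in shape:
        i = (p - 1) % k + 1
        runners[i - 1].append((p - i) // k + 1)
    return [tuple(sorted(r, reverse=True)) for r in runners]

print(abacus((16, 11, 10, 9, 8, 7, 4, 3, 1), 5))
# [(4, 3, 1), (2,), (2, 1), (2, 1), (2,)]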
There are two types of moves allowed on the k-abacus of λ <cit.>:
* Type I Move: Sliding one bead one position higher in its runner if that position is unoccupied, or removing a bead from the top row of column k
* Type II Move: Removing two beads from the first row, if they are on two conjugate runners.
After a move on the k-abacus, we get a new shifted diagram λ* ⊂λ. A Type I move corresponds to replacing a part of size m+k with one of size m, whereas a Type II move corresponds to removing two parts of sizes adding up to k. By Proposition <ref>, we have the following correspondence:
Making a move on the k-abacus of λ is equivalent to removing a k-ribbon from λ. In particular,
* Single-Ribbon Correspondence: Making the Type I move from position m+k to position m is equivalent to removing a single-ribbon A with diag(H_A)=m+k and diag(T_A)=m+1 (where the case m=0 is removing a bead from the top row of column k) .
* Double-Ribbon Correspondence: Making the Type II move removing the top beads t and k-t from conjugate runners a_t and a_k-t is equivalent to removing a double-ribbon of size (t,k-t).
This theorem implies that the k-core of a shifted diagram corresponds to the k-core of the abacus. As the final state of the abacus is independent of the order of the moves <cit.>, we have the following corollary.
The k-core of a shifted diagram is unique.
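Since the final core does not depend on the order of removals, it can be computed greedily; a short sketch (ours), reusing the function removable_ribbons defined after Proposition <ref>:

def k_core(shape, k):
    # Remove k-ribbons until none remain; by the corollary above the
    # result is independent of the choices made along the way.
    while True:
        options = removable_ribbons(shape, k)
        if not options:
            return shape
        shape = options[0]

print(k_core((16, 11, 10, 9, 8, 7, 4, 3, 1), 5))  # (3, 1)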
§.§ Shifted Ribbon Tableaux
A standard k-ribbon tableau of shape λ is a sequence of shifted diagrams λ^(0)⊂λ^(1)⊂⋯⊂λ^(n)=λ, where each A_i=λ^(i)\λ^(i-1), i=1,2..n is a k-ribbon, and λ^(0) is a k-core. For each i, we label ribbon A_i with i.
A skew-shifted diagram S on a shifted diagram λ is called a horizontal k-ribbon strip (resp. vertical k-ribbon strip) if there exists a sequence of shifted diagrams λ^(0)⊂λ^(1)⊂⋯⊂λ^(t)=λ, where:
* Each R_i:=λ^(i)∖λ^(i-1) is a k-ribbon.
* H(R_i) is strictly to the right of (resp. strictly above) H(R_i-1) for each i.
* S= ⋃_i=1^n R_i = λ∖λ^(0).
A semi-standard shifted k-ribbon tableau is given by a sequence λ^(0)⊂λ^(1')⊂λ^(1)⊂λ^(2')⊂λ^(2)⊂⋯⊂λ^(n)=λ, where:
* λ^(0) is the k-core of λ.
* Each λ^(i)∖λ^(i') is a (possibly empty) horizontal k-strip.
* Each λ^(i')∖λ^(i-1) is a (possibly empty) vertical k-strip.
We number the ribbons on the strip λ^(i)∖λ^(i') by i and the ribbons forming the strip λ^(i')∖λ^(i-1) by i' for each i=1,2,…,n.
For a shifted shape λ, we define its k-ribbon Q and P-functions as follows:
RQ^(k)_λ(X)=∑_S∈ SSShT^(k)(λ)X^|S|,
RP^(k)_λ(X)=∑_S∈ SSShT^*(k)(λ)X^|S|
where SSShT^(k)(λ) is the set of semi-standard shifted k-ribbon tableaux of shape λ, and SSShT^*(k)(λ) is its subset consisting of tableaux with no marked entries on the ribbons that have boxes on the main diagonal.
The example illustrated in Figure <ref> gives the following ribbon Q- and P-functions restricted to two variables:
RQ^(3)_(5,4,2,1)(x_1,x_2) = 4x_1^3x_2+8x_1^2x_2^2+4x_1x_2^3=Q_3,1(x_1,x_2)
RP^(3)_(5,4,2,1)(x_1,x_2) = x_1^3x_2+2x_1^2x_2^2+x_1x_2^3=P_3,1(x_1,x_2)
Note that in this example, the ribbon Q and P functions are Schur Q and P positive respectively. One of our main results in this paper will be to show that the positivity holds true in general, and give a formula for the ribbon Q-functions in terms of Schur's Q-functions.
All the shifted k-ribbon tableaux of shape λ have the same number of ribbons that have a box on the main diagonal, called the k-length of λ, denoted ℓ^(k)(λ). Consequently, the Q and P k-ribbon functions for λ are related by a scalar:
RQ^(k)_λ(X)=2^ℓ^(k)(λ) RP^(k)_λ(X)
By Theorem <ref>, the ribbons that have a box on the main diagonal correspond to the moves on the abacus where beads are removed. The total number is independent of the order of the moves. In fact, if we denote the number of beads on runner i by |a_i| then:
ℓ^(k)(λ)=|a_k|+∑_i<(k/2)min{|a_i|,|a_k-i|}
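Using the abacus function from the previous section (ours), the k-length is immediate to compute:

def k_length(shape, k):
    # One main-diagonal ribbon per bead removed from runner k and per
    # Type II removal on a conjugate pair of runners.
    runs = abacus(shape, k)
    pairs = sum(min(len(runs[i - 1]), len(runs[k - i - 1]))
                for i in range(1, (k + 1) // 2))
    return len(runs[k - 1]) + pairs

print(k_length((5, 4, 2, 1), 3))  # 2, matching RQ = 2^2 RP in the example above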
§ FOLDED TABLEAUX
In this section, we will introduce an operation combining two shifted shapes to get an unshifted shape with a specialized diagonal, which we will call a folded diagram. We will later use the folded diagrams, along with their corresponding tableaux, in describing the k-quotient of a shifted shape. We will use the notation δ_n to denote the staircase partition (n,n-1,…,1).
A folded diagram Γ=(γ,𝔡) is an unshifted diagram γ called the underlying shape of Γ along with a specialized main diagonal 𝔡 which does not necessarily intersect γ.
Let us say we have a pair of shifted shapes (or equivalently strict partitions) α and β. Denote n=min{ht(α),ht(β)}. Their combination, which will be denoted by α♢β will be the folded diagram we obtain by:
* Step I: If one of the shapes has height m larger than n, delete its m-n leftmost columns, so that both shapes have the same number of boxes in their main diagonals.
* Step II: Transpose α.
* Step III: Paste the two diagrams together along their main diagonals, and label this diagonal 𝔡.
For example, (4,3,1)♢(2,1)=((3,3,2),𝔡_1), as in the 5-quotient computed in Section <ref>.
Any given folded diagram (γ, 𝔡) can be uniquely described as a combination of two shifted shapes.
Let us denote the difference λ\δ_height(λ) by λ'. If 𝔡 is on the shape, going through t boxes, then (γ,𝔡)=α♢β where α = δ_k+t +({γ_t+1,γ_t+2,…,γ_n})' and β=δ_t+{γ_k+t+1',γ_k+t+2',…,γ_n'}.
If 𝔡 is k≥ 0 units to the right of the bottom left corner and outside the shape (the case k<0 is symmetrical) we have (γ,𝔡)=α♢β where α = δ_k +(γ)' and β is empty.
A standard folded tableau of shape Γ=(γ,𝔡) is a filling of the cells of γ using numbers 1,2,…,n each exactly once, with numbers increasing left to right and bottom to top.
A semi-standard folded tableau of shape Γ=(γ,𝔡) is a semi-standard filling of the skew-shifted diagram γ with the rules inverted weakly above the specialized diagonal (above 𝔡 we can have no repeated unmarked numbers on the same row, and no two repeated marked numbers on the same column).
We define the folded P- and Q-functions as follows.
Q^f_(γ,𝔡)(X)=∑_S∈SSFT(γ,𝔡)X^|S|
P^f_(γ,𝔡)(X)=2^-size(𝔡)Q^f_(γ,𝔡)(X)
where SSFT(γ,𝔡) denotes the set of all semi-standard folded tableaux of shape Γ=(γ,𝔡).
The folded shape ((2,2),𝔡) illustrated in Figures <ref> and <ref> has the following folded P- and Q-functions:
P^f_(22,𝔡)(X)=m_31(X)+2m_22(X)+4m_211(X)+8m_1111(X)= s_31(X)+s_22(X)+s_211(X)=P_(31)(X)
Q^f_(22,𝔡)(X)=2^size(𝔡)P^f_(22,𝔡)(X)=4(s_31(X)+s_22(X)+s_211(X))=Q_(31)(X)
The function Q^f_(γ,𝔡)(X) is independent of the choice of the main diagonal 𝔡.
We will prove this by giving a weight-preserving bijection between SSFT(γ,𝔡) and semi-standard fillings of the skew-shifted shape γ.
Let S∈SSFT(γ,𝔡). We start by inverting the markings of all cells weakly above the main diagonal. Each connected piece weakly above the main diagonal containing only the entries i and i' forms a ribbon, with at most one i' on each row (on the rightmost cell), and at most one i on each column (on the lowest cell). The next step is to go from bottom to top in the ribbon, flipping i' with the leftmost i of each row and flipping the i with the highest i' on each column. As the algorithm corrects the ordering when the process is repeated for all i, we end up with a semi-standard skew-shifted filling of γ.
The process can be inverted by inverting the markings weakly above the main diagonal again, and this time working our way from top to bottom on each ribbon, flipping any i' with the lowest i on columns and flipping any i with the rightmost i' in rows.
The folded Q-functions are Schur Q-positive. In fact
Q^f_(γ,𝔡)(X)=Q_λ/δ_n(X)=∑_ϵf^λ_ϵ,δ_nQ_ϵ(X)
where λ is a shifted shape with λ/δ_n=γ and f^λ_ϵ,δ_n are the non-negative integers defined by:
P_ϵP_δ_n=∑_λf^λ_ϵ,δ_nP_λ
As the folded P function depends on the size of the main diagonal, it is not independent of 𝔡. Instead, it tells us that Q^f_γ(X) is divisible by 2^d, where d is the size of the longest diagonal on γ.
An unshifted shape and its conjugate have the same folded Q-function. In particular, for two shifted shapes α and β, Q^f_α♢β(X)=Q^f_β♢α(X)
For an unshifted shape γ, the conjugation operation gives a bijection of folded tableaux (γ,𝔡) with 𝔡 above the shape and folded tableaux (γ^T,𝔡') with 𝔡' below the shape. As the folded Q-function is independent of the placement of the specialized diagonal, we have: Q^f_γ(X)=Q^f_γ^T(X)
§ QUOTIENTS OF RIBBON TABLEAUX
In this section, we will introduce the k-quotient for a shifted diagram, and we will give a bijection between the k-ribbon tableaux and the fillings of its k-quotient. Our definition of the k-quotient extends the one given by Morris and Yaseen <cit.> by specialized diagonals, which we will use for a direct bijection between semi-standard k-ribbon tableaux and semi-standard fillings of its k-quotient.
The k-quotient of a shifted shape λ with k-abacus representation (α^(1),α^(2),…,α^(k)) will consist of ⌊ k/2⌋ folded shapes and one shifted shape, defined as follows:
Φ^k(λ)=
(α^(1)♢α^(k-1),α^(2)♢α^(k-2),…,α^((k-1)/2)♢α^((k+1)/2),α^(k)) k odd
(α^(1)♢α^(k-1),α^(2)♢α^(k-2),…,α^(k/2-1)♢α^(k/2+1),α^(k/2)♢∅ ,α^(k)) k even
The strict partition λ={16, 11, 10, 9, 8, 7, 4, 3, 1} has the 5-quotient:
Φ^5(λ)={(4,3,1)♢(2,1),(2)♢(2,1),(2)}={((3,3,2),𝔡_1),((3),𝔡_2),(2)}
where 𝔡_1 and 𝔡_2 are the specialized diagonals given by the combination operation.
We call a simultaneous semi-standard filling of the ⌊ k/2⌋ folded shapes and the shifted shape a semi-standard filling of the k-quotient. If this filling uses each number from 1 to n exactly once, unmarked, it will be called a standard filling.
There is a bijection Φ^k_λ between standard k-ribbon tableaux of shape λ and standard fillings of its k-quotient preserving diagonal values (two ribbons that have the same diagonal value will be mapped to the same shape and diagonal in the quotient).
Consider a k-ribbon tableau T of shape λ with abacus representation (α^(1),α^(2),…,α^(k)). As each ribbon corresponds to a move in the abacus representation of λ, T uniquely corresponds to a sequence of moves from (α^(1),α^(2),…,α^(k)) to the k-core of λ. As we can move independently on each runner pair (a_i,a_k-i) and on a_k, it will suffice to match the moves on a_k to shifted tableaux of shape α^(k), and the moves on runner pairs (a_i,a_k-i) to fillings of α^(i)♢α^(k-i) for each i.
There is a bijection between sequences of moves from a_k to the empty runner and standard shifted tableaux of shape α^(k), where a move from row d to d-1 on the abacus corresponds to a box on diagonal d.
A bead on the ith row of runner a_k will make a total of i moves, i-1 to one row higher, and one last move to be removed. We map these moves to a row of i boxes so that the move from position j to j-1 will correspond to a box on diagonal j, and the removal move will correspond a box on the main diagonal. This means if a_k has beads on positions i_1> i_2>⋯ >i_t we will map the moves to the shifted diagram α^(k)=(i_1,i_2,…,i_t). Let us number the moves in decreasing order with numbers from 1 to |α^(k)|. This will give us a filling of α^(k), with the conditions that beads can only move to unoccupied positions (meaning rows need to increase left to right), and a bead can only move higher (meaning columns increase bottom to top). Note that these conditions exactly correspond to the tableaux conditions.
Now, we can turn our attention to runner pairs a_i, a_k-i.
There is a bijection between sequences of moves from the pair of runners (a_i,a_k-i) to the abacus core and standard folded tableaux of shape α^(i)♢α^(k-i), where moves removing beads are mapped to the specialized diagonal of α^(i)♢α^(k-i), and moves on runner a_i (resp. a_k-i) from row r to r-1 are mapped to the diagonal r-1 units to the left (resp. right).
The move sequences on each runner can be matched to a shifted tableau of corresponding shape as in Claim 1, except now we have an additional constraint: to remove a bead from the first row of one runner, we must simultaneously remove a bead from the first row of the second runner. This implies that the main diagonals of the two shapes must contain the same numbers, and they will be on the main diagonal. Furthermore, if one runner has q more beads than the other one, these can not be removed, and the moves which depend on the removal of these beads can not be made, which means the subdiagram of shape δ_q inside the larger shape will be left empty.
When k is even, we can move beads up on runner a_k/2 but not remove them, as if it had a conjugate runner with no beads, so any remark we made on α^(i)♢α^(k-i) above automatically applies to α^(k/2)♢∅.
The correspondence of diagonals gives us a way of labeling the diagonals of the quotient to match the values of the original shape. This way, the shifted shape α^(k) will have diagonals k,2k,3k,… and the folded shapes α^(i)♢α^(k-i) will have diagonals {…, i+2k, i+k, k-i, 2k-i, 3k-i, …}, where the specialized diagonal 𝔡_i has the diagonal value k-i.
Note that diagonal values less than or equal to k/2 do not appear. The reason for this is our convention of setting the head of a double ribbon to be the head of its larger piece.
A k-ribbon R has a box on the main diagonal of λ if and only if diag(R)≤ k.
Consider a semi-standard k-ribbon tableau T of shape λ, with |λ|=n. The standardization of T, denoted St(T), is the standard k-ribbon tableau of shape λ that we obtain by the following numbering:
* We number the cells in the order 1'<1<2'<2<⋯
* If there is more than one cell of label i, we order them so that their diagonal values will be increasing.
* If there is more than one cell of label i', we order them so that their diagonal values will be decreasing.
St(T) is well defined.
As the ribbons labeled i form a horizontal strip, they can be added in the order their diagonals are increasing. Similarly, ribbons labeled i' form a vertical strip and can be added in the order their diagonals are decreasing.
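The ordering rules amount to a single sort; a Python sketch (ours, with an ad hoc encoding of each ribbon as a (value, marked, diagonal) triple):

def standardize(entries):
    # Order: i' < i; entries labeled i' by decreasing diagonal,
    # entries labeled i by increasing diagonal.
    def key(t):
        v, marked, d = entries[t]
        return (v, 0 if marked else 1, -d if marked else d)
    labels = [0] * len(entries)
    for rank, t in enumerate(sorted(range(len(entries)), key=key), start=1):
        labels[t] = rank
    return labels

# three ribbons labeled 1', 1, 1 on diagonals 5, 2, 4 receive labels 1, 2, 3:
print(standardize([(1, True, 5), (1, False, 2), (1, False, 4)]))  # [1, 2, 3]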
Using the labeling of the diagonals from Remark <ref>, we can also do the same standardization operation on the semi-standard fillings of the k-quotient.
We can extend Φ^k_λ to a weight-preserving bijection between semi-standard k-ribbon tableaux of shape λ, and semi-standard fillings of its k-quotient.
Let T be a semi-standard k-ribbon tableau of shape λ, given by the sequence λ_0 ⊂λ_1'⊂λ_1⊂λ_2'⊂λ_2 ⊂⋯⊂λ_t=λ of shifted diagrams. As our definition of standardization respects the inclusion order, St(T) restricted to any λ_i gives a standardization of λ_i. The same is true for the λ_i'.
Let us apply the Φ^k_λ to the standardization of T. This gives us a bijection ϕ between ribbons of T and the boxes on the k-quotient. This also can be restricted to the subdiagrams λ_i and λ_i', giving a sequence Φ^k(St(λ_0)) ⊂Φ^k(St(λ_1'))⊂Φ^k(St(λ_1)) ⊂⋯⊂Φ^k(λ_t)=Φ^k(λ). Here, the subset relation is defined pointwise in the (k+1)-tuples of quotient diagrams.
The filling of the k-quotient obtained by this is a semi-standard filling, and is equal to Φ^k_λ(T) if the filling T is standard.
The second part of the claim follows from the definition of Φ^k_λ(T). For the first part, we need to show that the filling of each α^(i)♢α^(k-i) gives a semi-standard folded shape and the filling of α^(k) gives a semi-standard shifted shape. Let us look first at the case of α^(k).
To obtain a contradiction, let us assume there are two boxes B_1 and B_2 on α^(k) that are marked j and are on the same column (so that they do not form a horizontal 1-strip). Without loss of generality, we can take B_2 to be the higher one. This means, if we name the corresponding ribbons on λ respectively R_1 and R_2, we have diag(R_2) < diag(R_1). Also, as they both are labeled j, they are on a horizontal k-strip, specifically H(R_2) lies strictly to the right of H(R_1). These together imply that H(R_2) must also be strictly above H(R_1). Remember that the cells labeled j form a skew shape, so the cell C that is on the same row as H(R_1) and the same column as H(R_2) must also be in λ_j with its diagonal value higher than those of H(R_1) and H(R_2). This implies it is not on R_1 or R_2. It must be on a different ribbon R_3 on the horizontal k-strip. As R_2 and R_3 have boxes in the same column with the box of R_2 above, we can not remove R_3 before R_2. This implies H(R_3) must be strictly to the right of H(R_2) and consequently to the right of C. This can not happen as C is on R_3, by <ref>. Symmetrically, no two cells marked j' can be on the same row, so we indeed have a semi-standard filling of α^(k). Now consider the boxes marked j on α^(i)♢α^(k-i) for some i. As they come from the difference Φ^k(St(λ_j)) \Φ^k(St(λ_j')), they form a skew shape. Also, the boxes that are labeled j to the right of the main diagonal form a horizontal strip by the same reasoning as in the case of λ. The boxes labeled j to the left of the main diagonal form a vertical strip, as we have the inverted version of the same rules. The j' case is again symmetrical.
Now let us define the inverse of this operation. Given a semi-standard filling T̅ of the k-quotient, as the k-quotient has the diagonal values induced by λ, we can apply the same standardization algorithm to the quotient to get a standard filling St(T̅) of the quotient. Applying (Φ^k_λ)^-1 to this filling gives a standard filling of λ. We can use this bijection between cells of the quotient and ribbons of λ to carry the labels in T̅ to the corresponding ribbons in λ. Note that this inverts the above operation by definition.
The inverse operation takes T̅ to a semi-standard filling of λ.
Let R and S be two ribbons marked j on λ. We will show that they form a horizontal strip; the case of j' is symmetrical. First note that we cannot have diag(R)=diag(S), as that would imply the corresponding cells in the quotient are both in the same α^(i)♢α^(k-i) (or both in α^(k)) on the same diagonal, which is not possible. Let us assume, without loss of generality, that diag(R) > diag(S). Then, in the standardization, the label of R will be higher than the label of S, meaning R can be removed before S: H(R) cannot be below H(S) in the same column. In this case H(R) is strictly to the right of H(S), implying they form a horizontal strip, as otherwise H(R) would be strictly below H(S) in a row strictly to the left, giving us no possible way to label the ribbon containing the cell C in the same row as H(R) and the same column as H(S).
This bijective relationship shows that the k-ribbon Q-function is equal to the product of the Q-functions of its quotient:
The k-ribbon Q-function of a shifted shape λ with k-abacus representation (α^(1),α^(2),… ,α^(k)) has the following expansion in terms of Schur's Q-functions:
RQ^(k)_λ(X)=Q_α^(k)(X)∏_i≤⌊ k/2⌋Q_μ^i(X)
where μ^i is the underlying skew-unshifted shape of α^(i)♢α^(k-i) if i<k/2, and of α^(k/2)♢∅ if i=k/2.
The k-ribbon Q-functions expand positively into Schur's Q-functions.
This follows from the last theorem and the Schur Q-positivity of the skew-shifted Schur Q-functions <ref>.
Note that Schur's Q-functions are themselves k-ribbon Q-functions for any k, as kλ=(kλ_1,kλ_2,…,kλ_n) satisfies RQ^(k)_kλ(X)=Q_λ(X).
§ PEAK FUNCTIONS OF RIBBON TABLEAUX
The reading word of a k-ribbon tableau is a reading of the labels on the heads of the ribbons, left to right, top to bottom.
A marked standard shifted k-ribbon tableau T' of shape λ is defined to be a standard shifted k-ribbon tableau T of shape λ together with a subset M of [n] determining the marked coordinates. On the Young diagram, for all i in M, we replace the label of R_i with i'. We denote the set of all marked versions of T by Mark(T).
The k-ribbon Q-function of a shifted shape λ can be written in terms of descent functions and peak functions as follows:
RQ^(k)_λ(X)=∑_T'∈ SShT±^(k)(λ)F_Des(T')(X)
RQ^(k)_λ(X)=∑_T∈ SShT^(k)(λ)2^|Peak(Des(T))|+1G_Peak(Des(T))(X)
where SShT±^(k)(λ) is the set of marked standard shifted k-ribbon tableaux of shape λ.
A run of a subset D of [n] is a maximal subset of consecutive numbers. We will denote by Run(D) the set of runs of D. Note that D is the disjoint union of its runs. For example, D={2,3,5,8,9,10}={2,3}∪{5}∪{8,9,10}.
We can calculate the number of peaks of a tableau T from its descent set as follows:
|Peak(Des(T))| =
|Run(Des(T))| if 1∉ Des(T)
|Run(Des(T))| - 1 if 1∈ Des(T)
This follows from the fact that the elements j of the peak set satisfy j ∈ D and j-1∉ D for all j >1, so the elements of the peak set are exactly the smallest elements of the runs, with the exception of a run starting at 1.
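As a small illustration, the runs and the peak count of this lemma can be computed directly; the following Python sketch checks them on the example given above.

def runs(D):
    """Split a set D of positive integers into maximal runs of consecutive numbers."""
    out, cur = [], []
    for x in sorted(D):
        if cur and x == cur[-1] + 1:
            cur.append(x)
        else:
            if cur:
                out.append(cur)
            cur = [x]
    if cur:
        out.append(cur)
    return out

def peaks(D):
    """Peak set {j > 1 : j in D, j-1 not in D}: the smallest element of
    each run, except for a run starting at 1."""
    return {r[0] for r in runs(D) if r[0] > 1}

D = {2, 3, 5, 8, 9, 10}
assert runs(D) == [[2, 3], [5], [8, 9, 10]]
assert len(peaks(D)) == len(runs(D)) - (1 if 1 in D else 0)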
For any T' ∈ Mark(T), the descent set of T' is independent of whether a given i≤ n is marked if and only if:
* i>1 with i-1 ∈ Des(T) and i ∉ Des(T), or
* i=1 and 1 ∉ Des(T).
The number of such i is given by |Peak(Des(T))|+1.
The first part comes from Lemma <ref>. Also, a number i satisfies these conditions iff either i=1 with 1∉ Des(T), or i-1 is the largest element of a run of Des(T). As Des(T)⊆[n-1], every run contributes exactly one such i, so the count is |Run(Des(T))|+1 if 1∉ Des(T) and |Run(Des(T))| if 1∈ Des(T); by Lemma <ref>, this equals |Peak(Des(T))|+1 in both cases.
For any T' ∈ Mark(T), we have :
Spike(T') ⊃ Peak(T)
Note that if i∈ Peak(Des(T)), we have i ∈ Des(T) and i-1 ∉ Des(T). For any given T', if i is unmarked on T', then i ∈ Des(T') and i-1 ∉ Des(T'), so i ∈ Spike(T'). Otherwise i is marked, so that we have i ∉ Des(T') and i-1 ∈ Des(T'), implying again that i ∈ Spike(T').
The proposition above shows that the descent map takes the elements of Mark(T) to subsets D of [n-1] with Spike(D) ⊃ Peak(T). Next, we will show that this map is surjective. In fact, we will prove the stronger statement that the preimage of every element is of the same size.
Assume D is a subset of [n-1] satisfying Spike(D) ⊃ Peak(T). Then, there is a marked version T' of T such that Des(T')=D.
Let us generate a marked version T' of T as follows: starting with i=1, at Step i we mark i if i∈ Des(T) and i ∉ D, and we mark i+1 if i ∉ Des(T) and i ∈ D (marking the same number a second time has no effect). Then we move on to the next number, until we have gone through all i≤ n-1.
Let us verify that the descent set of T' is indeed equal to D. For a fixed i, assume i∈ Des(T). Then, by Lemma <ref>, i is a descent of T' iff i is unmarked. Therefore, it is sufficient to show that i is unmarked iff i∈ D. If i ∉ D, then we marked i at Step i, so i ∉ Des(T'), as desired. Otherwise i ∈ D, and we can only have marked i at Step i-1. This would imply i-1 ∉ Des(T) and i-1 ∈ D, which contradicts our assumption Spike(D) ⊃ Peak(T), as i would then be a peak of T but not a spike of D. The case i∉ Des(T) is similar.
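The construction in this proof can be checked computationally. Below is a minimal Python sketch; the rule for Des(T') as a function of the marking, and the reading of Spike(D) as the set of positions where membership in D changes, are inferred from the proofs in this section, and the data are a toy example only.

def des_marked(DesT, M, n):
    """Descent set of a marking M of a tableau with Des(T) = DesT."""
    return {i for i in range(1, n)
            if (i in DesT and i not in M) or (i not in DesT and i + 1 in M)}

def spike(D, n):
    """Positions where membership in D changes (0 is never in D)."""
    return {i for i in range(1, n) if (i in D) != (i - 1 in D)}

def mark_to_descents(DesT, D, n):
    """The marking constructed in the proof, so that Des(T') = D."""
    M = set()
    for i in range(1, n):
        if i in DesT and i not in D:
            M.add(i)
        if i not in DesT and i in D:
            M.add(i + 1)
    return M

n, DesT = 4, {2}
Peak = {j for j in DesT if j > 1 and j - 1 not in DesT}   # = {2}
for D in [{2}, {2, 3}]:                                    # Spike(D) contains Peak
    assert Peak <= spike(D, n)
    assert des_marked(DesT, mark_to_descents(DesT, D, n), n) == D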
The descent map taking elements of Mark(T) to subsets D of [n-1] with Spike(D) ⊃ Peak(Des(T)) is a 2^m-to-one cover, where m=|Peak(Des(T))|+1.
The number of subsets D of [n-1] with Spike(D) ⊃ Peak(Des(T)) is given by 2^(n-1-|Peak(Des(T))|). By Lemma <ref>, we know that the descent map is surjective. By Lemma <ref>, the preimage of each element under the descent map contains at least 2^m elements. As 2^m × 2^(n-1-|Peak(Des(T))|)=2^n, which is the cardinality of Mark(T), the preimage of each element must contain exactly 2^m elements.
Now we are ready to prove Theorem <ref> from the beginning of the section.
Let S be a semi-standard k-ribbon tableau of shape λ. We have already defined the standardization of S (Definition <ref>). Let us denote by St'(S) the marked standardization of S, which is simply standardization while keeping the marked cells marked.
We will show that there is a bijection ϕ_T' between the semi-standard k-ribbon tableaux S that standardize to T' and the combinations refining Des(T'), satisfying X^|S|=X^ϕ_T'(S).
This will imply:
∑_T'∈ SShT±^(k)(λ)F_Des(T')(X)=∑_T'∑_C∈ Ref(Des(T'))X^C=∑_T'∑_C X^|ϕ_T'^-1(C)|=∑_S∈ SSShT^(k)(λ)X^|S|=RQ^(k)_λ(X)
where for a combination C=(c_1,c_2,…,c_t) we use X^C to denote x_1^c_1x_2^c_2⋯ x_t^c_t.
Assume S satisfies St'(S)=T'. We define ϕ_T'(S)=(i_1,i_2,…), where i_m stands for the total number of ribbons labeled m or m' in S. As S has n ribbons, ϕ_T'(S) will be a combination of n that satisfies X^|S|=X^ϕ_T'(S).
ϕ_T'(S) refines Des(T').
Let S be a semi-standard filling with St'(S)=T'. Consider the pre-image of the ribbon R_i (the unique ribbon labeled i or i' on T') under St'. We will denote the label of this ribbon in S by St'^-1(i). To prove that ϕ_T'(S) refines Des(T'), it is sufficient to show that if i is a descent of T', then St'^-1(i) and St'^-1(i+1) are not both elements of {m,m'} for any m (note that, by the standardization algorithm, we will have St'^-1(i)≤ St'^-1(i+1) in any case).
Let i be a descent of T'. By Lemma <ref>, there are two possibilities:
* Case 1, i ∈ Des(T) and i is not marked in T': Then St'^-1(i) is an unmarked number m. As St'^-1(i+1) ≥ m, it cannot be m'. Assume it is also m. Then we have two ribbons labeled m with diag(R_i) > diag(R_i+1) (as i is a descent), contradicting the definition of standardization (Definition <ref>), which orders equal unmarked labels by increasing diagonals.
* Case 2, i ∉ Des(T) and i+1 is marked in T': This means St'^-1(i+1) is a marked number m', and St'^-1(i) ≤ m', so it cannot be m. It cannot be m' either, because then, as in Case 1, we would get two ribbons labeled m' with diag(R_i) < diag(R_i+1), which again contradicts the definition of standardization.
For any combination C refining Des(T'), there is a unique S that standardizes to T' with ϕ_T'(S)=C.
Let C=(c_1,c_2,…,c_t) be a combination of n that refines Des(T'). We will define S by labeling the ribbons R_1 to R_c_1 with 1, ribbons R_c_1+1 to R_c_1+c_2 with 2, R_c_1+c_2+1 to R_c_1+c_2+c_3 with 3, and so on, and then marking the image of R_i iff i is marked in T'. We need to show that this S is semi-standard, and that it standardizes to T'. Uniqueness then comes from the fact that the placement of the markings is preserved.
Assume ribbons R_i and R_i+1 have the same unmarked label m. Then i∉ Des(T'), and i and i+1 are both unmarked in T', so we must have i ∉ Des(T) by Lemma <ref>. That means diag(R_i) < diag(R_i+1), so the unmarked numbers are ordered so that their diagonals increase in S. Similarly, if R_i and R_i+1 both have the same marked label m', then i∉ Des(T'), so i∈ Des(T) and diag(R_i) > diag(R_i+1). These mean that we have St'(S)=T'.
Additionally, we can remove ribbon R_i+1 before R_i. This implies that if both ribbons are labeled m, H(R_i+1) is strictly to the right of H(R_i), as diag(R_i) < diag(R_i+1). If they are both labeled m', H(R_i+1) is strictly above H(R_i), as diag(R_i) > diag(R_i+1).
This proves the expansion of the k-ribbon function in terms of descent functions. The peak function expansion follows by Proposition <ref>:
RQ^(k)_λ(X)=∑_T'∈ SShT±^(k)(λ)F_Des(T')(X)=∑_T∑_T'∈ Mark(T) F_Des(T')(X)=∑_T2^|Peak(Des(T))|+1G_Peak(Des(T))(X)
§ TYPE B LLT POLYNOMIALS
In <cit.>, Lascoux, Leclerc and Thibon give a q-analogue of the k-ribbon functions that is Schur positive in the unshifted case. For this, they use the spin statistic on ribbon tableaux, which depends on the total height of the ribbons. In this section, we show that there is no direct way of extending the concept of height to double ribbons that will give positive structure coefficients in the shifted case. Nevertheless, we are able to give a non-trivial q-analogue for the shifted ribbon functions and prove its Schur Q-positivity.
There is no “intrinsic” definition for the height of a double ribbon which, along with the usual definition of heights for single ribbons, gives a Schur Q-positive or even a Schur positive function. Here, by intrinsic, we mean a definition that only comes from the shape of the double ribbon and is independent of its placement and of the other ribbons in the shape.
If we consider the example in Fig. <ref>, we can see that the only difference between the two fillings is the placement of ribbons 2 and 3. For any intrinsic definition, the heights of the double ribbons 4 and 1 would match in the two fillings. The total height of 2 and 3 being higher on the shape on the right, we get a function 4q^cG_2+4q^dG_3 with c≠ d, which is not Schur P-positive. In fact, G_2 and G_3 by themselves are not even Schur positive or symmetric functions.
Another example where all the tableaux need to have the same cospin to obtain Schur Q-positivity is given in Figure <ref>. A common point of these two examples with trivial cospin is that both have only one piece in their 3-quotient, which motivates a slightly technical q-analogue defined through the quotient.
For a shifted shape λ with k-abacus representation (α^(1),α^(2),…,α^(k)), we define the q-analogue of the shifted k-ribbon Q-function as follows:
RQ^(k)_λ(X;q):= Q_α^(k)(X) ∑_T ∈ SRT_⌊ k/2 ⌋(μ) q^cospin(T) 2^|Peak(T)|+1 G_Peak(T)(X)
where μ is the unshifted partition corresponding to the ⌊ k/2 ⌋-quotient (μ^1,μ^2,…,μ^⌊ k/2 ⌋), with μ^i=α^(i)♢α^(k-i) if i<k/2 and μ^k/2=α^(k/2)♢∅ when k is even.
Note that when q=1, we get the formulation of RQ^(k)_λ(X) given in Equation <ref>, so RQ^(k)_λ(X;1)=RQ^(k)_λ(X) as desired.
The function RQ^(k)_λ(X;q) has an expansion into Schur's Q-functions with coefficients from ℤ^+[q].
Let (α^(1),α^(2),…,α^(k)) be the k-abacus representation of λ, and let μ be the unshifted partition corresponding to the ⌊ k/2 ⌋-quotient (μ^1,μ^2,…,μ^⌊ k/2 ⌋), as defined in Theorem <ref>. The LLT polynomial for μ satisfies
GF^(⌊ k/2 ⌋)_μ/μ_0(X;q)=∑_T ∈ SRT_⌊ k/2 ⌋(μ) q^cospin(T) F_Des(T)(X)=∑_γ f_γ^μ(q) s_γ(X).
where the γ are unshifted shapes and the f_γ^μ(q) have positive integer coefficients. Any unshifted γ can be seen as a skew-shifted shape γ^+/δ_ℓ(γ). Multiplying by 2^|Peak(T)|+1 G_Peak(T)(X) instead of F_Des(T)(X) for each tableau T of shape γ corresponds to calculating Q_γ^+/δ_ℓ(γ)(X) instead of s_γ(X). So we have:
RQ^(k)_λ(X;q)= Q_α^(k)(X)(∑_T ∈ SRT_⌊ k/2 ⌋(μ) q^cospin(T) 2^|Peak(T)|+1 G_Peak(T)(X) )= Q_α^(k)(X)(∑_γ f_γ^μ(q) Q_γ^+/δ_ℓ(γ)(X))
As multiplication and skewing operations on Schur's Q-functions give positive expansions into Schur's Q-functions, the result follows.
Let us finish by calculating an example.
Consider the shape (9,8,6,2) with the 5-ribbon quotient {(2,1),(2),∅}, whose standard fillings are given in Figure <ref>. Its 5-ribbon Q-function is:
RQ^(5)_(9,8,6,2)(X)= Q_(3,1)/(1)(X) Q_(3)/(1)(X)= 2Q_(5)(X)+4Q_(4,1)(X)+3Q_(3,2)(X)
Viewed as a 2-quotient, {(2,1),(2)} corresponds to unshifted shape (4,4,1,1) with the LLT polynomial:
GF^(2)_(4,4,1,1)(X;q) = ∑_T ∈ SRT_2(4,4,1,1) q^cospin(T) F_Des(T)(X)=q^2s_(2,2,1)(X)+qs_(3,1,1)(X)+qs_(3,2)(X)+s_(4,1)(X)
Viewing (2,2,1), (3,1,1), (3,2) and (4,1) as skew shifted shapes, we get the following:
RQ^(5)_(9,8,6,2)(X;q) = Q_∅(X)(q^2Q_(4,3,1)/(2,1)(X)+qQ_(5,2,1)/(2,1)(X)+ qQ_(4,2)/(1)(X)+Q_(5,1)/(1)(X) )
= (q+1)Q_(5)(X)+(q^2+2q+1)Q_(4,1)(X)+(q^2+2q)Q_(3,2)(X)
§ ACKNOWLEDGEMENTS
The author would like to thank Prof. Sami Assaf for valuable direction and encouragement throughout this project.
|
http://arxiv.org/abs/1701.07832v2 | 20170126190003 | Numerical study of anisotropy in a composite Fermi liquid | [
"Matteo Ippoliti",
"Scott D. Geraedts",
"R. N. Bhatt"
] | cond-mat.str-el | [
"cond-mat.str-el",
"cond-mat.mes-hall",
"quant-ph"
] |
Departments of Electrical Engineering and Physics, Princeton University, Princeton NJ 08544, USA
We perform density-matrix renormalization group studies of a two-dimensional electron gas in a high magnetic field and with an anisotropic band mass.
At half-filling in the lowest Landau level, such a system is a Fermi liquid of composite fermions.
By measuring the Fermi surface of these composite fermions, we determine a relationship between the anisotropy α_CF of the composite fermion dispersion and the original anisotropy α_F of the fermion dispersion at zero magnetic field. For systems where the electrons interact via a Coulomb interaction, we find α_CF=√(α_F) within our numerical accuracy. The same result has been found concurrently in recent experiments.
We also find that the relationship between the anisotropies is dependent on the form of the electron-electron interaction.
Numerical Study of Anisotropy in the Composite Fermi liquid
Matteo Ippoliti, Scott D. Geraedts, and R. N. Bhatt
===========================================================
Two dimensional electron systems at the interface of semiconductors in strong magnetic fields host a wide variety of exotic Abelian and non-Abelian gapped fractional quantum Hall phases <cit.>,
as well as broken symmetry phases such as the nematic, Wigner crystal and bubble phases <cit.>.
These systems have received a lot of interest in the condensed matter physics community, both because of the elegant mathematical structure emanating from Landau level physics, and also because of their ready accessibility under multiple experimental platforms.
Unique among the quantum Hall phases is the gapless phase at half filling in the lowest Landau level, identified as the `composite Fermi liquid' (CFL) <cit.>.
Recently, there has been a revival of interest <cit.> in the CFL, which has a Fermi surface, analogous to the Fermi liquid phase at zero field.
[Much of this interest has to do with the nature of particle-hole symmetry in the CFL, which is not the subject of this paper. To our knowledge the recent `Dirac composite fermion picture'<cit.> should not affect the response to anisotropy measured here.]
Much of the theoretical work on this subject has assumed rotational symmetry, though this symmetry is not central to the physics of quantum Hall systems.
Understanding the quantum Hall effect in the absence of rotational symmetry is an active area of research <cit.>.
Experimentalists have been able to break rotational symmetry in several ways, e.g. by applying parallel magnetic fields <cit.>,
straining their samples <cit.>,
or doing experiments on materials which have anisotropic Fermi surfaces in the absence of magnetic fields, such as many-valley semiconductors <cit.>.
It is therefore very interesting to ask what happens to the various well-understood states of matter when rotational symmetry is broken. In this work, we investigate the effect of breaking rotational symmetry on the gapless composite Fermi liquid state for a half-filled lowest Landau level.
We focus on a simple case of such rotational symmetry breaking (used in previous theoretical investigations of gapped phases <cit.>),
introducing a Fermi surface anisotropy so that the non-interacting part of the Hamiltonian reads:
H_0=1/(2m_x)Π_x^2 + 1/(2m_y)Π_y^2, Π_i=p_i+(e/c)A_i ,
where A_i are components of the vector potential corresponding to a uniform magnetic field B along the z-direction. The anisotropy of the Fermi contour at zero magnetic field, α_F, is determined by the ratio of the Fermi wavevectors in the perpendicular directions x and y:
α_F = k_F^y/k_F^x = √(m_y/m_x) .
The Fermi contour achieved in the current experiments <cit.> is more complicated than the elliptical one represented by the above Hamiltonian; nevertheless, we expect the above model to describe the substantial x-y anisotropy seen in experiment.
To the single particle Hamiltonian of Eq. (<ref>), we add isotropic, two-body electron-electron interactions, consistent with the appearance of fractional quantum Hall phases [An equivalent model may be obtained with isotropic dispersion and anisotropic interactions, as done in Ref. <cit.>].
The physics of the fractional quantum Hall effect can be described through composite fermions (CFs): bound states of flux quanta and electrons <cit.>.
An open question is how the anisotropy of Eq. (<ref>) is related to the corresponding anisotropy of the composite fermions, denoted α_CF.
Much previous work on systems described by Eq. (<ref>) has focussed on the Laughlin ν=1/3 state.
One can write down model states which have a variational parameter related to the anisotropy, where Laughlin's `model wavefunction' corresponds to the isotropic case <cit.>. Comparing these wavefunctions to numerical exact diagonalization data allowed a determination of the relationship between α_F and α_CF <cit.>, finding an α_CF less anisotropic than α_F.
This result is in agreement with subsequent analytical work by Murthy <cit.>.
Other numerical work has found a transition out of the ν=1/3 state for sufficiently large α_F <cit.>.
In this work, we study anisotropy at filling fraction ν=1/2, where the system realizes a composite Fermi liquid phase.
By computing the anisotropy of the Fermi surface of the composite fermions, we determine the relationship between α_F and α_CF.
The CF Fermi surface can also be detected experimentally <cit.>, so unlike in the case for gapped states we can directly compare our results to experimental measurements, in particular experiments by the Shayegan group <cit.> done concurrently with this work.
Since we obtain results for a realistic, microscopic model at ν=1/2, our work is the first study of quantum Hall systems with mass anisotropy that can be directly compared to experiment.
The numerical techniques used to compute α_CF for Laughlin states such as ν=1/3 do not apply at ν=1/2,
for a number of reasons.
The variational wavefunction for a CFL has additional variational parameters representing the shape of the Fermi surface <cit.>. On the finite-size systems accessible numerically, these variational parameters take discrete values, and cannot capture small changes in the anisotropy. Since the CFL is gapless, its energy spectrum is strongly dependent on size, which makes it more difficult to interpret.
In this work we employ a different numerical technique, infinite density matrix renormalization group (iDMRG), to study a system on a cylindrical geometry of finite radius but infinite length. This technique has been successful in the study of the isotropic CFL, where a circular Fermi surface was detected <cit.>.
A number of analytical results at ν=1/2 are available. Ref. <cit.> used Chern-Simons theory to argue that α_CF ∝ α_F, a conclusion supported by Ref. <cit.>, where a model wavefunction satisfying this relation was proposed. Ref. <cit.> replaced the realistic Coulomb interaction with a Gaussian interaction, allowing an analytical calculation of α_CF in terms of α_F; the CFL Fermi surface was found to be less anisotropic than that of the band at B = 0.
The DMRG techniques used in this work have been described elsewhere <cit.>, so we provide only a brief summary here. We work on an infinite cylinder, on a spin-polarized system projected to the lowest Landau level.
The direction along the cylinder is x, the one around the circumference is y.
DMRG is a variational technique within the ansatz of `matrix product states' (MPS), states with a limited amount of entanglement entropy.
The entanglement entropy is limited by the bond dimension, χ, of the MPS.
We project to the lowest Landau level by working in a basis containing only the lowest energy eigenstates of the Hamiltonian in Eq. (<ref>), which depend on the anisotropy α_F.
We increase the bond dimension until the convergence of our wavefunctions is no longer the limiting factor on the accuracy of our results. This is achieved in most cases for χ between 3000 and 6000.
After obtaining approximate ground states with the DMRG, we compute the guiding center structure factor:
S(q)≡⟨ρ(q)ρ(-q)⟩,
where ρ(q) is the electron density in momentum space, as a function of the wavevector q. The infinite cylinder geometry quantizes q_y in steps of 2π/L_y (L_y is the circumference), but allows q_x to take continuous values.
Since the density of states of a Fermi liquid has a singularity at the Fermi wavevector, S(q) will also be singular whenever q corresponds to a scattering process between different parts of the Fermi surface.
By computing S(q) and locating these singularities, we can determine the shape of the Fermi surface.
Since the states obtained by DMRG are approximations of the true ground states, these singularities will not be reproduced perfectly; however, for the numerically accessible bond dimensions χ, the singularities are sharp enough that the wave vectors can be identified with minimal uncertainty <cit.>.
Fig. <ref> shows an example of how S(q) allows us to map out the Fermi surface.
In Fig. <ref>(a) the ellipse represents the anisotropic Fermi surface we are trying to investigate. The horizontal gray lines represent the allowed values of momenta in our cylinder geometry.
If we fix q_y=0, the horizontal arrows represent the values of q_x where we expect a singularity in S(q_x,0).
We can see such singularities in the data of Fig. <ref>(b). As a check, we can find singularities at other values of q_y, corresponding to arrows with a vertical component.
This analysis allows us to find the set of intersection points of the gray lines in Fig. <ref>(a) and the edges of the Fermi surface for any given Fermi contour anisotropy α_F at B = 0.
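As a geometric illustration of this mapping, here is a minimal Python sketch with hypothetical values for the CF anisotropy and circumference (momenta in units of 1/ℓ_B, assuming an area-π elliptical contour); it lists the backscattering singularities 2k_F^x(k_y) expected in S(q_x, 0) for each allowed momentum wire.

import numpy as np

a_cf, L_y = 1.5, 20.0                        # hypothetical anisotropy and circumference
a, b = 1/np.sqrt(a_cf), np.sqrt(a_cf)        # ellipse semi-axes, area = pi
k_y = 2*np.pi/L_y * np.arange(int(b*L_y/(2*np.pi)) + 1)
k_y = k_y[k_y < b]                           # momentum wires crossing the Fermi contour
q_x = 2*a*np.sqrt(1 - (k_y/b)**2)            # 2 k_F^x(k_y) backscattering vectors
for ky, qx in zip(k_y, q_x):
    print(f"wire k_y = {ky:.3f}: singularity in S(q_x, 0) at q_x = {qx:.3f}")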
We consider values of the Fermi contour anisotropy α_F ranging from 0.16 to 6.25 (this corresponds to a mass anisotropy range of 0.025 to 40).
The dynamic range in α_F is limited by convergence of the DMRG algorithm. Very small values of α_F (≪ 1) increase the correlation length along the circumference of the cylinder, giving rise to increasing finite-size effects. Conversely, very large values of α_F (≫ 1) increase correlations along the axis of the cylinder, requiring larger values of the bond dimension χ for convergence. This causes a rapid increase in computational time, which provides the limiting factor in that situation. While the dynamical range of α_F is thus limited numerically, we emphasize that the range covered is significantly larger than that covered in experiment.
We consider first the experimentally relevant case of Coulomb (1/r) interactions between the electrons.
In order to avoid the effects of the electron-electron interaction wrapping around the cylinder for such a long range interaction, we impose a Gaussian cutoff by multiplying our interactions by e^-r^2/λ^2. We take λ=6ℓ_B (ℓ_B is the magnetic length), a sufficiently large value that this cutoff does not affect our results.
At each value of α_F, we perform the procedure described in the previous section at several values of L_y (3 to 5 distinct values in the range of 13 to 27 magnetic lengths, depending on α_F).
This provides a list of coordinates of points which are expected to fall near the 2D Fermi contour.
Examples representative of the cases α_F<1, α_F=1 and α_F>1 are shown in Fig. <ref>.
We then extract α_CF by fitting the resulting data to an ellipse.
Though in the thermodynamic limit we expect an elliptical Fermi contour, our finite-size data points will deviate from this contour due to Luttinger's theorem, which in our system fixes the sum of all q_x values of scattering processes with q_y=0 to ν L_y.
This will lead to an error in the anisotropy of our fitted ellipse because of the finite size of our samples.
We can estimate this error by considering the α_F = 1 case, where α_CF=1.
Our estimate based on the total data from three system sizes is α_CF = 1.002. However, randomly removing one system size from the dataset causes the estimate to fluctuate between 0.98 and 1.01.
On the basis of this result, we believe that at each value of α_F, the uncertainty on α_CF due to Luttinger's theorem and to the finite number of system sizes considered is of order 1-2%.
We fit the discrete set of values of (q_x,q_y) obtained this way to an ellipse of area π, from which we obtain our estimate of α_CF.
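For illustration, a minimal Python sketch of such a one-parameter fit (using synthetic points in place of the measured crossing coordinates): an ellipse of area π is fully specified by α_CF through α_CF q_x^2 + q_y^2/α_CF = 1.

import numpy as np
from scipy.optimize import minimize_scalar

def fit_alpha_cf(points):
    """Least-squares fit of contour points to the area-pi ellipse
    a*qx**2 + qy**2/a = 1; returns the anisotropy a = alpha_CF."""
    qx, qy = np.asarray(points).T
    cost = lambda a: np.sum((a*qx**2 + qy**2/a - 1.0)**2)
    return minimize_scalar(cost, bounds=(0.05, 20.0), method="bounded").x

# Synthetic points on an alpha_CF = 1.5 contour are recovered correctly:
t = np.linspace(0.0, np.pi/2, 7)
pts = np.c_[np.cos(t)/np.sqrt(1.5), np.sin(t)*np.sqrt(1.5)]
print(fit_alpha_cf(pts))   # ~1.5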
For an infinite size system in the planar limit, a π/2 rotation implies an exchange of the major and minor axes of the elliptical Fermi surface, i.e. a change of anisotropy from α_F to 1/α_F, with a corresponding change of α_CF to 1/α_CF. This implies that
α_CF(α_F) α_CF(1/α_F) = 1,
or equivalently that log(α_CF) is an odd function of log(α_F), which can be Taylor expanded [Assuming α_CF is analytic around the isotropic point α_F = 1.] around the isotropic point α_F = α_CF = 1, to yield:
log(α_CF) = γ log(α_F) + μ log^3(α_F) + ...
We find that the simplest such function, a power-law α_CF = α_F^γ, which corresponds to terminating the series at the first term in Eq. (<ref>), already fits the data well in the anisotropy range we explore (Fig. <ref>).
The value of γ we get is γ = 0.493 ± 0.008, close to a square-root dependence α_CF = √(α_F)
[ An empirical fit to our data of the power-law form with adjustable exponent and prefactor, namely α_CF = k α_F^γ, yields γ = 0.498 ± 0.010 and k = 0.983 ± 0.015.
This form breaks the symmetry under rotation by π/2; this could be expected due to finite size and the different boundary conditions (and treatments) in the x- and y-directions.
Nevertheless, the exponent is very close to the one obtained imposing the symmetry, and k is close to unity, as required for the infinite system. ]
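The exponent extraction itself reduces to a one-parameter log-log fit. The Python sketch below uses made-up (α_F, α_CF) pairs purely to illustrate the procedure; they are not our measured values.

import numpy as np

alpha_f  = np.array([0.16, 0.44, 1.0, 2.25, 6.25])
alpha_cf = np.array([0.40, 0.66, 1.0, 1.50, 2.50])   # hypothetical measurements
x, y = np.log(alpha_f), np.log(alpha_cf)
gamma = np.sum(x*y) / np.sum(x*x)   # least squares with no intercept (first term only)
print(gamma)                         # ~0.5 for square-root-law data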
Remarkably, experiments on holes in GaAs under application of in-plane strain show data in agreement with this result <cit.>.
In order to check whether this relation is a universal feature of power-law interactions, we replace the Coulomb interaction with a dipolar interaction V(r) ∝ r^-3 (also in Fig. <ref>), which could be realized in a cold atomic system <cit.>.
We find again that a single power law (the first term in Eq. (<ref>)) fits the data well, but we measure an exponent γ = 0.795 ± 0.005.
This is unambiguously different from a square-root dependence, and in particular implies that α_CF is much closer to α_F than in the case of the Coulomb interaction.
We can think of α_CF as resulting from a competition between the non-interacting part of the Hamiltonian, with anisotropy α_F, and the interacting part of the Hamiltonian, which is isotropic.
It appears reasonable that when the Coulomb interaction is replaced with the dipolar interaction, which is weaker at long distances, the anisotropy α_CF moves closer to α_F.
Finally, we benchmark our method against the only known exact result for α_CF:
Yang's prediction <cit.> that for a Gaussian electron-electron interaction V(r) = V_0 e^-r^2/(2s^2) one has
α_CF = √((ℓ_B^2 α_F + s^2)/(ℓ_B^2/α_F + s^2)) .
For this purpose, we pick two nearly reciprocal values of the electron anisotropy, α_F = 2.25 and α_F = 0.445, and compute α_CF with our method at different values of s of the order of a magnetic length ℓ_B.
The results displayed in Fig. <ref> show good agreement with the prediction of Eq. (<ref>).
Our method appears to slightly underestimate α_CF, by 1 to 2%.
This small bias, however, should not significantly affect our estimates of the exponent γ for the power-law interactions.
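As a quick numerical check (Python, with illustrative values of s, in units of ℓ_B), this expression satisfies the π/2-rotation constraint α_CF(α_F) α_CF(1/α_F) = 1 discussed above, and interpolates between α_F at s=0 and 1 at large s:

import numpy as np

def alpha_cf_gaussian(alpha_f, s, l_B=1.0):
    """CF anisotropy for a Gaussian interaction of range s."""
    return np.sqrt((l_B**2*alpha_f + s**2) / (l_B**2/alpha_f + s**2))

for a_f in (2.25, 0.445):
    for s in (0.5, 1.0, 2.0):
        # pi/2-rotation constraint: alpha_cf(a) * alpha_cf(1/a) = 1
        assert abs(alpha_cf_gaussian(a_f, s)*alpha_cf_gaussian(1/a_f, s) - 1) < 1e-12
print(alpha_cf_gaussian(2.25, 1.0))   # 1.5, between 1 and alpha_F = 2.25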
In summary,
we have numerically computed the composite Fermi surface of a half-filled lowest Landau level for a two-dimensional electron gas with varying mass anisotropy, with three different forms of the electron-electron interaction.
We find that the anisotropy of the Fermi surface of the composite Fermi liquid (α_CF) is less than that of the zero-field Fermi surface (α_F) for non-interacting electrons.
When the electrons in the system interact via a Coulomb interaction, our data follows the relation α_CF=√(α_F). This result is in agreement with recent experimental data <cit.>, but not with some earlier theoretical work <cit.>.
The relationship between α_F and α_CF does, however, depend on the form of the electron-electron interaction. For example, we find a larger composite fermion anisotropy for the 1/r^3 interaction, and for a Gaussian interaction we find results consistent with the exact calculation in Ref. <cit.>.
Though experiments <cit.> find a similar relation between α_F and α_CF as we do, there are some differences between the two systems studied
– experiments are conducted on quantum wells with finite width
[This would soften the Coulomb interaction at short distances;
preliminary calculations suggest that introducing a well width of ∼ 2ℓ_B would affect γ by a few percent.],
and the experimental Fermi surface has a more complicated form than the elliptical one considered here.
Performing simulations on a system closer to experiment is therefore an interesting extension of our research.
Little is known about the response of the quantum Hall fluid to generalizations of Eq. (<ref>) (e.g. if quartic terms are added), and studies of such systems could help spur theoretical progress.
More generally, the larger system sizes accessible in our DMRG calculations could also allow us to improve the results of previous exact diagonalization studies <cit.>.
Acknowledgements:
We acknowledge helpful conversations with Insun Jo, Mansour Shayegan and Akshay Krishna.
We thank R. Mong and M. Zaletel for creating and providing the DMRG libraries used in this work; S. D. G. also acknowledges previous collaborations with them.
This work was supported by Department of Energy BES Grant DE-SC0002140.
|
http://arxiv.org/abs/1701.07707v2 | 20170126140802 | Analogy and duality between random channel coding and lossy source coding | [
"Sergey Tridenski",
"Ram Zamir"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
http://arxiv.org/abs/1701.08065v2 | 20170127143849 | Control of superconductivity with a single ferromagnetic layer in niobium/erbium bilayers | [
"N. Satchell",
"J. D. S. Witt",
"M. G. Flokstra",
"S. L. Lee",
"J. F. K. Cooper",
"C. J. Kinane",
"S. Langridge",
"G. Burnell"
] | cond-mat.supr-con | [
"cond-mat.supr-con"
] |
School of Physics and Astronomy, University of Leeds, Leeds, LS2 9JT, United Kingdom
ISIS Neutron and Muon Source, STFC Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, OX11 0QX, United Kingdom
School of Physics and Astronomy, University of Leeds, Leeds, LS2 9JT, United Kingdom
School of Physics and Astronomy, SUPA, University of St Andrews, St Andrews KY16 9SS, United Kingdom
School of Physics and Astronomy, SUPA, University of St Andrews, St Andrews KY16 9SS, United Kingdom
ISIS Neutron and Muon Source, STFC Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, OX11 0QX, United Kingdom
ISIS Neutron and Muon Source, STFC Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, OX11 0QX, United Kingdom
ISIS Neutron and Muon Source, STFC Rutherford Appleton Laboratory, Chilton, Didcot, Oxon, OX11 0QX, United Kingdom
g.burnell@leeds.ac.uk
School of Physics and Astronomy, University of Leeds, Leeds, LS2 9JT, United Kingdom
Superconducting spintronics in hybrid superconductor–ferromagnet (S–F) heterostructures provides an exciting potential new class of device. The prototypical super-spintronic device is the superconducting spin-valve, where the critical temperature, T_c, of the S-layer can be controlled by the relative orientation of two (or more) F-layers. Here, we show that such control is also possible in a simple S/F bilayer. Using field history to set the remanent magnetic state of a thin Er layer, we demonstrate for a Nb/Er bilayer a high level of control of both T_c and the shape of the resistive transition, R(T), to zero resistance. We are able to model the origin of the remanent magnetization, treating it as an increase in the effective exchange field of the ferromagnet and link this, using conventional S–F theory, to the suppression of T_c. We observe stepped features in the R(T) which we argue is due to a fundamental interaction of superconductivity with inhomogeneous ferromagnetism, a phenomena currently lacking theoretical description.
Control of superconductivity with a single ferromagnetic layer in niobium/erbium bilayers
G. Burnell
=========================================================================================
§ INTRODUCTION
While traditionally considered competing phenomena, when artificially juxtaposed, there is a wealth of physics at the interface between superconductors (S) and ferromagnets (F). Taking advantage of the competition between order parameters has led to advances in the emerging field of super-spintronics <cit.>. By placing an inhomogeneous magnetic texture at the S–F interface, it is possible to create the so-called long ranged triplet component (LRTC) or finite spin Cooper pair. Unlike the singlet Cooper pair, the LRTC is not dephased by the exchange field and can therefore penetrate further into a proximitised F-layer. This opens the exciting possibility of performing spintronic logic operations on a dissipationless spin current <cit.>. Additionally, several breakthroughs in complex S–F heterostructures show promise as potential cryogenic memory elements. In such a scheme, information could be stored by the state of the system (superconducting or normal) <cit.> or the ground-state phase difference between two S-layers in an S/F/S Josephson junction <cit.>.
The prototypical super-spintronic device is the superconducting spin valve. In this device, control of the magnetic state of the two F-layers in an S/F/F or F/S/F heterostructure can be used to tune the generation of the LRTC <cit.>. The generation of the LRTC opens an additional conduction channel for Cooper pairs, resulting in the lowering of T_c <cit.>. In our previous work we found this suppression of T_c to be of the order 10–20 mK in a 3d ferromagnet/niobium device <cit.>, although other works have increased this effect; to 130–140 mK by carefully engineering both the superconducting layer and S–F interface <cit.>, and to over 1 K by both introducing a half-metal as the bottom F-layer and changing the applied field orientation from an in-plane rotation to an in-plane to out-of-plane rotation <cit.>. The manipulation of the F-layers in the superconducting spin valve requires careful engineering of the heterostructure and the rotation of the sample in an applied magnetic field. Under an in-plane field rotation it is possible to introduce experimental artefacts due to: vortex flow (if too high a current is applied, induced voltage from vortex flow will dominate the transport signal); non-uniformity of field (if the sample is not aligned correctly the out-of-plane field component will vary under rotation - modifying T_c); and temperature (a temperature gradient or local source of heating inside a cryostat is an important consideration when the sample is moving during measurement). Any of these can introduce a signal with the same periodicity as the signature of LRTC generation. A recent theoretical work considered that there exists a “half-select" problem in the multilayer spin valve approach, which may be negated in a simplified device <cit.>.
In this work we describe a simplified S–F hybrid system, where the superconductivity can be controlled by a single adjacent F-layer. The system only requires the ability to apply an external field in one direction (without the need for sample rotation) and we perform all our measurements in zero applied field, two distinct advantages over the superconducting spin valve. This is achieved by coupling a superconducting Nb layer to rare-earth ferromagnetic Er, which has a large number of metastable magnetic phases accessible with temperature or applied field. Previous work on holmium and dysprosium demonstrate the important role rare-earth ferromagnets will play in the implementation of superconducting spintronics in Josephson type devices <cit.> and devices based on the control of T_c <cit.>. For example, Gu et al. demonstrated that an antiferromagnetic to ferromagnetic transition in Ho resulted in modification of the T_c of an adjacent Nb layer of over 100 mK, however the exact mechanism involved in the T_c shift was not established <cit.>. This work was later expanded by producing trilayer samples of Ho/Nb/Ho and Dy/Nb/Dy in which a spin valve like effect of 400 mK was discovered <cit.>. These works established the ability to control T_c with the ferromagnetic texture in rare-earth ferromagnets, however lacked the theoretical description and the additional modification to the shape of the R(T) transition reported in this manuscript.
Er is a trivalent rare-earth metal (Z=68), with highly localised 4-f electrons and a hexagonal close packed (hcp) crystal structure.
Competition between the RKKY indirect exchange interaction and the crystalline anisotropy, creates a rich magnetic phase diagram making this material ideal for the exploration of S–F proximity effects <cit.>.
Below the high-temperature paramagnetic phase (≈85 K), Er first enters a sinusoidal, c-axis modulated (CAM) anti-ferromagnetic phase. As the temperature is lowered, the magnetic wave vector of the CAM expands until ≈52 K. Below this temperature Er enters an `intermediate' phase where the in-plane moments begin to order, creating what has been referred to by Cowley et al. as an anti-ferromagnetic “wobbling cycloid” <cit.>. The magnetic cycloid repeat distance increases with decreasing temperature, through a number of stable commensurate phases, to 8 atomic layers. These states exhibit a ferrimagnetic moment. Finally, below 18 K a conical c-axis ferromagnetic phase is formed.
§ METHODS
The films were deposited using DC sputtering in a system with substrate heaters mounted above each sample slot. At the highest temperature, the base pressure of the system is ≈ 10^-7 mbar. This pressure improves as the system temperature is lowered. The samples were grown on 0.65 mm thick c-plane Al_2O_3 substrates. The Nb was deposited at a nominal temperature of 700^∘C, after which the system was cooled to 500^∘C; a final thin Nb interface layer was deposited at this temperature, followed directly by the Er and then a 5 nm-thick Lu capping layer. Growth was performed at a typical Ar flow of 55 sccm resulting in an Ar pressure of 2-3 μbar, at a substrate–sample distance of 75 mm, and at a typical growth rate of 0.1 nm s^-1. Growth rates were calibrated by fitting to Kiessig fringes obtained on single layer samples by X-ray reflectometry. The Nb was grown first as it has been shown to be an effective buffer layer for the growth of rare-earth metals and stops the Er layer reacting with oxygen in the substrate <cit.>. The Nb/Er interface is known to be sharp due to the lack of alloying and intermixing between Nb and Er <cit.>. The Er grows epitaxially on the most densely packed Nb (110) plane, in the Nishiyama-Wassermann orientation. The in-plane axis of hcp Er [101̅0] is aligned with bcc Nb [1̅10], with 3:4 supercell commensuration in their nearest-neighbour distances along these axes <cit.>. Lu was chosen for the capping layer as it lattice matches well with Er (preventing additional strain being introduced), and, unlike some traditional capping metals such as Au, it can be deposited as a continuous layer at high temperatures <cit.>.
Magnetization loops and remanent magnetization, M_r, were measured using a 6 T Quantum Designs SQUID-VSM magnetometer at 10 K. Electrical transport measurements were performed on sheet films using a conventional four point probe measurement configuration and employing two continuous flow ^4He cryostats, with maximum fields of 3 T and 8 T. The field histories were only applied when the sample was in the normal state (to prevent flux trapping). The resistance as a function of temperature (R(T)) of the sample, from which T_c is obtained, was always measured at zero applied field. Temperature sweeps, both cooling and warming, were recorded to check for temperature hysteresis in the measurements. The temperature hysteresis (observable in FIG. <ref>) does not account for the observed T_c shift in FIG. <ref>.
§ RESULTS
§.§ Magnetic Characterization
The magnetization versus field data, along with minor loops, for the Nb(20 nm)/Er(25 nm) bilayer sample at 10 K are shown as the inset in FIG. <ref> (a). The red squares show the initial magnetization and full magnetic hysteresis behaviour for applied magnetic fields up to 60 kOe. The solid lines are a series of minor loops, from which information about the M_r of the sample can be obtained. The M_r as a function of initial field data are collated in FIG. <ref> (a).
It is evident from FIG. <ref> (a) that for low initial fields, there is little change to the remanent state of the Er. This indicates that, in this range, the stabilisation of the spiral magnetic structure in the Er—due the RKKY interaction—is robust against perturbation by the externally applied magnetic fields. The large increase in M_r for initial fields of about 10 kOe is evidence that, for initial field values greater than this, the Er does not re-enter the same magnetic phase upon relaxation of the field. This is consistent with previous characterization work which shows that at approximately 25 kOe there is a phase transition, for an in-plane applied field, into a distorted spiral phase, known as the `fan' or `canted-fan' state <cit.>. Possible origins of the increased remanence are shown schematically in FIG. <ref> (b) and discussed further in section IV A.
§.§ Electrical Transport
FIG. <ref> shows the resistance as a function of temperature for the Nb(20 nm)/Er(25 nm) bilayer sample, always measured in zero applied field. The data show the onset of superconductivity as the temperature is decreased for the virgin state (triangles) and after the application of an 80 kOe applied magnetic field (squares). In the inset of FIG. <ref> the evolution of R(T) as a function of the initial applied magnetic field can be seen. Resistance as a function of temperature was always measured in zero applied magnetic field and T_c was defined as 50 % of the normal state resistance. Δ T_c is calculated as the difference between T_c of the virgin state (triangles in FIG. <ref>) and T_c after the application and removal of a magnetic field. The Δ T_c data for all of the initial applied magnetic field values are collated in FIG. <ref>.
In FIG. <ref> it is immediately clear that there is a strong link between the M_r of the Er and the T_c of the superconductor. This correlation, between the properties of Er and Nb, show that both the T_c of the Nb and the magnetic state of the Er are strongly dependent upon the field history of the sample. It also shows that there is a strong coupling between the superconducting and magnetic layers. The largest change to Δ T_c comes between 20–30 kOe, which, as mentioned above, is also the field value where the Er state changes magnetic phase. After the application of the largest field possible in our system, 80 kOe, the T_c of the Nb was suppressed by approximately 140 mK, which is the largest value reported for such a system. The metastable magnetic state obtained by applying and removing and initial field, is robust against temperature changes in the measured range 5-10 K. The system can be effectively reset to the virgin state by warming through the Curie temperature.
One additional point to note is the step-like features that are present in FIG. <ref>, and the fact that these steps also change with field history. The height of the stepped feature in the transition is marked on FIG. <ref>. A step fraction is defined as the height of the step relative to the normal state resistance (at 10 K). The collated step fraction is plotted in the upper panel of FIG. <ref> and is discussed further in section <ref>.
§ MODELLING
§.§ Modelling of M_r: An Effective E_ex
Having established that the suppression observed in T_c is linked to the increased M_r, we now consider the local magnetic state of the Er film and propose two physical interpretations for the origin of M_r in Er: the first is a `bulk' modification of the spiral, and the second a localised spin alignment at the Er surface.
As we have shown in our previous work, even in the thin film, Er has a highly complicated phase diagram <cit.>. Through a combination of temperature and field, the Er can be placed in a number of metastable magnetic states. For an in-plane field, between 20 and 30 kOe Er undergoes a transition from the conical to a `fan' magnetic state, canted into the direction of the applied field. Subsequent removal of the applied field causes the Er to re-enter the conical state; however, we argue that the cone has now been modified and is canted in the direction of the applied field, increasing M_r. This canted conical state is shown schematically as the top mechanism in FIG. <ref> (b).
It is well known that finite-size effects play an important part for thin film rare-earths <cit.>. The long-range nature of the RKKY interaction (up to 6th nearest neighbour) means that the reduced atomic coordination at the surfaces makes the spiral ends less robust against external perturbation, which clearly becomes more of an influencing factor for thinner films with a lower volume to surface area ratio <cit.>. It is, therefore, possible that, under the influence of an externally applied magnetic field, the spiral unwinds more readily at the surfaces and, being unable to overcome the energy barrier to reform the spiral, remains locally aligned in the direction of the applied field. This is shown schematically as the bottom mechanism in FIG. <ref> (b). We calculate from the known thickness, saturation magnetization and expected moment per atom that 0.65 nm (or just over two atomic layers) remaining aligned at the interface would account for the observed M_r (see SI FIG 1).
The `bulk' canted magnetic phase which leads to a net magnetization could be described by an effective exchange field if the coherence length inside the Er is (much) longer than the magnetic repeat unit of the helix, which is about 4 nm. On the other hand, the contribution of the surface moments to the total proximity effect is only considerable if the coherence length is short, comparable to the 0.65 nm effective Er thickness corresponding to aligned surface moments. The two mechanisms thus correspond to very different length scales of the coherence length inside the Er layer.
Resistance measurements on Er/Nb bilayers with various Er layer thicknesses suggest an approximate coherence length of 10 nm (see SI FIG 2), and in a related Ho system the coherence length was estimated to be 30 nm <cit.>. This distance is far greater than 0.65 nm and is long enough to allow the Cooper pair to experience multiple helices. It is, therefore, most likely that the Er undergoes the `bulk' transition to a new canted magnetic phase, which is retained at zero field, and that this is the origin of the suppression in T_c.
§.§ Modelling of Δ T_c vs. M_r
To investigate the effect of an increased bulk remanent magnetisation, we model it as an effective exchange field (J_z) in the F layer, which we can then link directly to the suppression in T_c.
Using the quasiclassical theory for superconductivity in the dirty limit (electronic mean free path much shorter than the phase coherence length), we calculate the critical temperature of an Er/Nb bilayer as a function of an effective exchange field inside the Er. We take the x-axis normal to the metallic layers and assume translational invariance in the y,z plane. The Usadel equation for s-wave superconductivity then takes the form iħ D∂_x(ǧ∂_xǧ) = [Ȟ,ǧ], with ǧ the 4× 4 matrix Green function in the Nambu-spin space, ħ the reduced Planck constant and D the diffusion constant. For collinear exchange fields the Hamiltonian can be described by Ȟ = iħω_nτ_3⊗σ_0 + Δ̌ - J_zτ_0⊗σ_3 (see e.g. <cit.>), with J_z the exchange field directed along the z-axis and ω_n the Matsubara frequencies, defined by ħω_n = π k_B T(2n+1) with k_B the Boltzmann constant and n an integer, the maximum allowed frequency being given by the Debye frequency. The x-y-z axes are defined such that J_z points along the direction of the net moment of the Er. Furthermore, σ_i and τ_i are the Pauli matrices of the spin space and Nambu space, respectively.
ǧ = ([ G_↑ 0 0 F_↑; 0 G_↓ F_↓ 0; 0 F̃_↓ G̃_↓ 0; F̃_↑ 0 0 G̃_↑ ])    Δ̌ = ([ 0 0 0 -Δ; 0 0 Δ 0; 0 -Δ^* 0 0; Δ^* 0 0 0 ])
where the G and F are the quasiclassical normal and anomalous Green functions respectively, with the tilde denoting their conjugated counterparts; all are functions of (x,ω_n), and Δ(x) is the order parameter. The matrix Green function satisfies the normalization condition ǧ^2 = 1̌, and the order parameter must be solved self-consistently, satisfying the gap equation:
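As a structural illustration of these definitions, the following standalone numpy sketch (arbitrary example values; not the code used for the actual calculations) assembles Ȟ and checks that the pairing block is off-diagonal in Nambu space while the exchange term is diagonal:

import numpy as np

s0, s3 = np.eye(2), np.diag([1.0, -1.0])      # spin-space Pauli matrices sigma_0, sigma_3
t0, t3 = np.eye(2), np.diag([1.0, -1.0])      # Nambu-space Pauli matrices tau_0, tau_3
hbar, kB, T, n, Jz, Delta = 1.0, 1.0, 1.0, 0, 0.3, 0.5   # arbitrary units, Delta real
wn = np.pi*kB*T*(2*n + 1)/hbar                 # Matsubara frequency
Dchk = np.zeros((4, 4))                        # the matrix Delta-check given above
Dchk[0, 3], Dchk[1, 2], Dchk[2, 1], Dchk[3, 0] = -Delta, Delta, -Delta, Delta
H = 1j*hbar*wn*np.kron(t3, s0) + Dchk - Jz*np.kron(t0, s3)   # Hamiltonian H-check
P = np.kron(t3, s0)                            # particle-hole structure
assert np.allclose(Dchk @ P + P @ Dchk, 0)     # pairing anticommutes with tau_3 x sigma_0
assert np.allclose(np.kron(t0, s3) @ P - P @ np.kron(t0, s3), 0)  # exchange commutes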
iΔ(x) = [π k_B T/(ln(T/T_c0) + ∑_n 1/(2n+1))] ∑_ω_n F(x,ω_n)
with T_c0 the bulk critical temperature. We use the interface boundary conditions as formulated by Nazarov <cit.>, which for the interface between two materials, with labels l,r for the layers on the left and right side of the interface respectively, can be written as:
σ_lǧ_l∂_xǧ_l = σ_rǧ_r∂_xǧ_r and σ_lǧ_l∂_xǧ_l = (2/R_b) [ǧ_l,ǧ_r] / (4 + Γ(ǧ_lǧ_r+ǧ_rǧ_l-2)), with σ_i the conductivity of layer i, 0≤Γ≤1 the interface transparency and R_b the interface resistance times the interface area (Ω m^2).
The material parameters used for the Nb layer are ξ_s=√(ħ D_s/(2π k_BT_c0))=7.9 nm, T_c0=8.4 K and ρ_s=15.2 μΩ cm. Since the value of J_z is unknown, we explored various combinations of ξ_f=√(ħ D_f/J_z), J_z and R_b, chosen such that the T_c of the bilayer corresponds to the experimental value of 5.5 K. For all calculations Γ=1. For each material combination, T_c was calculated as a function of a shift in J_z (a shift of zero corresponding to the T_c of 5.5 K).
The results of the modelling are presented in FIG. <ref> along with the experimental data. When taken together with the thickness dependence (SI FIG 2), it is parameter sets 2 and 5 which show the closest agreement with the experimental data, although all parameter sets considered qualitatively reproduce the experimental data. These two parameter sets give the same value of interface resistance, but were considered with different initial values of J_z: set 2 corresponding to the lowest temperature (≈ 22 K) conical ferromagnetic transition, and set 5 to the transition from the antiferromagnetic to ferromagnetic “wobbling cycloid” intermediate state (≈ 55 K), both of which have been confirmed in our thin films <cit.>. From either starting point, the analysis shows that the observed 140 mK T_c shift corresponds to a 5-10 K shift in J_z (0.43-0.86 meV).
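For orientation, the quoted parameters translate into conventional units as in the following back-of-envelope Python sketch (values rounded; the diffusion constant is simply inferred from ξ_s and T_c0):

import numpy as np

hbar, kB, e = 1.0546e-34, 1.3807e-23, 1.602e-19   # SI units
xi_s, Tc0 = 7.9e-9, 8.4                            # m, K (Nb parameters above)
D_s = xi_s**2 * 2*np.pi*kB*Tc0/hbar                # diffusion constant from xi_s
print(f"D_s ~ {D_s*1e4:.1f} cm^2/s")               # ~4.3 cm^2/s
for dJ in (5.0, 10.0):                              # J_z shift in kelvin
    print(f"{dJ:.0f} K -> {kB*dJ/e*1e3:.2f} meV")   # 0.43 / 0.86 meV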
§ DISCUSSION
Rare-earth ferromagnets, such as Er, offer a plethora of magnetic configurations in which a Cooper pair (coherent with a neighbouring proximity-coupled superconductor) can experience magnetic disorder. As a conical ferromagnet, Er is a theoretically ideal system in which to generate and study proximity effects induced by an additional LRTC <cit.>. In this work, all R(T) measurements are expected to have been performed with the Er in a disordered magnetic state. It is therefore not possible to directly attribute the observed Δ T_c to LRTC generation. In comparison to the superconducting spin valve, which has a clean LRTC on/off mechanism (as magnetic inhomogeneity is carefully engineered from otherwise homogeneous magnetic layers), our proposed origin of M_r cannot provide such a switching mechanism. The canting of the magnetic state into the direction of the applied field is unlikely to significantly change the conversion efficiency of singlet Cooper pairs into the LRTC. In the second proposed mechanism, spins at the surface remain aligned with the applied field and could create a homogeneous interface layer. From the spin-valve experimental argument, we would expect this to result in a decrease in LRTC generation and therefore an increase in T_c. This does not agree with the experimental observation in this work.
Given the size of this T_c effect is generally larger than that reported for spin-valves (and the number of reported cases showing an effect opposite to the spin-valve effect, where the disordered magnetic state results in a higher measured T_c <cit.>), we urge caution for the interpretation of T_c measurements alone as evidence for the presence of the LRTC in S–F systems.
With the modelling we have shown that the reported change in T_c can be described within the conventional S–F proximity theory by considering the increasing remanence of the Er as a shift in the effective exchange field. This increase in exchange field modifies the proximity effect, suppressing the T_c of the bilayer. An effective shift in J_z of 5-10 K accounts for the observed changes in T_c.
In the transition curves, shown in FIG. <ref>, three step-like features can be seen. The first, present at ≈8 K, appears to be directly related to the T_c of the bare Nb film. This is evidence of local regions of the bilayer film where the Er has no direct influence on the Nb, that is, where the two materials are not coupled by the proximity effect. One possibility for this is at the Er grain boundaries or local regions around the wire-bond contacts, where the force of the contact may have disrupted the Nb/Er interface. This interpretation is supported by the observation that there is no significant field history dependence of this step.
Some common explanations for step-like features in the transition curves can be ruled out for our system. The sputtering technique employed in this work is unlikely to create a significant thickness gradient. To check the uniformity of the films, a 20×20 mm film was diced into several pieces and X-ray reflectivity was performed. A 5% variation in thickness was observed; this variation is only slightly greater than the error in the individually calculated thicknesses obtained by fitting to Kiessig fringes. By comparison, the sample size for transport measurements was only 3×3 mm, where uniformity in film thickness will be very high. Crystallographic inhomogeneity is a further possibility, but again unlikely. We examined a possible current (heating) dependence of the step-like features using currents ranging from 100 nA up to 1 mA, but found no such evidence, so current-induced local heating can be ruled out. There are no known Nb-Er alloys, and in our previous reflectivity work we observed no evidence for intermixing at the interface <cit.>, which, were it possible, could have altered the superconducting properties of the Nb. Poor interface transparency can cause anomalous features in resistivity around the superconducting transition, as current paths change to flow preferentially through the superconductor. The formation of an oxide barrier at the interface would cause such effects, but the calculated oxidation time in our vacuum of 15 minutes is far longer than the 20 seconds between the final Nb and Er layer depositions. The steps are never observed in either single layer Nb films (deposited under identical growth conditions), or films of Nb grown in proximity to a homogeneous ferromagnet such as Co (see for example <cit.>).
Step-like features have been observed previously in works coupling BCS superconductors to inhomogeneous ferromagnetic textures. For example Witt et al. in helical Ho/Nb bilayers <cit.>, Yi Zhu et al. in GdN/Nb/GdN spin valves <cit.>, and L.Y. Zhu et al. in striped domain (Co/Pt)_n/Nb multilayers where it appears that the step shape can be modified by defining a current path parallel (no inhomogeneity – no step) or perpendicular (inhomogeneity – step) to the stripe domains <cit.>.
While the exact origin of the step is unknown, it appears linked to the S–F proximity effect in all examples above. In this work we observe, in the upper panel of FIG. <ref>, that the height of the step as a fraction of the transition is field history dependent. This step height change occurs at a different field than the largest changes in M_r and Δ T_c. While the change in step height does not appear to be intrinsically linked to the change in T_c, it is still clearly linked to the magnetic state of the Er layer. This further supports that the origin of the step is a fundamental feature of the S–F proximity effect, requiring theoretical description.
§ CONCLUSIONS
In summary, the remanent state of Er, when proximity coupled to a Nb superconductor, can have a strong influence on T_c. The application of magnetic field is able to change the metastable magnetic state of the Er from a conical to fan state. This modification results in a fundamental change to the shape and temperature of the superconducting transition to zero resistivity.
We hope the observation of this unconventional effect proves fruitful for the refinement of S–F theory, particularly given the current lack of a theoretical description of the stepped transition observed in this and many similar systems.
A shift in T_c of 140 mK is much larger than previously observed for singlet domain wall effects and is comparable to the largest observed by the generation of the LRTC in the context of the superconducting spin valve with 3d ferromagnets. This system fulfils the requirements for cryogenic memory based upon the proposed architecture of Oh et al. <cit.>, and we offer this materials system as a candidate for future super-spintronic device application.
The authors would like to thank the UK EPSRC (grant numbers: EP/J010634/1, EP/J010650/1, EP/I031014/1 and EP/J01060X/1) for their financial support. NS acknowledges JEOL Europe and ISIS neutron and muon source for PhD funding.
The data associated with this paper are openly available from the University of Leeds data repository. https://doi.org/10.5518/142
|
http://arxiv.org/abs/1701.07424v3 | 20170125185203 | Gravitational waves 100 years after Einstein | [
"Luc Blanchet"
] | physics.pop-ph | [
"physics.pop-ph",
"gr-qc"
] |
|
http://arxiv.org/abs/1701.07886v1 | 20170126215103 | 3D Modelling of the climatic impact of outflow channel formation events on Early Mars | [
"M. Turbet",
"F. Forget",
"J. W. Head",
"R. Wordsworth"
] | astro-ph.EP | [
"astro-ph.EP"
] |
3D Modelling of the climatic impact of outflow channel formation events on Early Mars
M. Turbet, F. Forget, J. W. Head, R. Wordsworth
December 30, 2023
======================================================================================
Mars was characterized by cataclysmic groundwater-sourced surface flooding that formed large outflow channels and that
may have altered the climate for extensive periods during the Hesperian era.
In particular, it has been speculated that such events could have induced
significant rainfall and caused the formation of late-stage valley networks.
We present the results of 3-D Global Climate Model simulations reproducing the
short and long term climatic impact of a wide range of outflow channel formation events under cold ancient Mars conditions.
We find that the most intense of these events (volumes of water up to 10^7 km^3 released at temperatures up to 320 Kelvins) cannot
trigger long-term greenhouse global warming, regardless of how favorable the external conditions (e.g. obliquity and seasons) are.
Furthermore, the intensity of the climatic response to these events is significantly affected by the atmospheric pressure,
a parameter that is not well constrained for the Hesperian era.
Thin atmospheres (P < 80 mbar) can be heated efficiently because of their low volumetric heat capacity,
triggering the formation of a convective plume that is very efficient
in transporting water vapor and ice at the global scale. Thick atmospheres (P > 0.5 bar)
have difficulty in producing precipitation far from the water flow area, and are more efficient in generating snowmelt.
In any case, outflow channel formation events at any atmospheric pressure are unable
to produce rainfall or significant snowmelt at latitudes below 40^∘N.
As an example, for an outflow channel event (under a 0.2 bar atmospheric pressure and 45^∘ obliquity) releasing
10^6 km^3 of water heated at 300 Kelvins and at a discharge rate of 10^9 m^3 s^-1,
the flow of water reaches the lowest point of the northern lowlands (around ∼ 70^∘N, 30^∘W)
after ∼ 3 days and forms a 200m-deep lake of 4.2×10^6 km^2 after ∼ 20 days;
the lake becomes entirely covered by an ice layer after ∼ 500 days. Over the short term, such an event leaves 6.5×10^3 km^3 of
ice deposits by precipitation (0.65% of the initial outflow volume) and can be responsible for the melting of
∼ 80 km^3 (0.008% of the initial outflow volume; 1% of the deposited precipitation).
Furthermore, these quantities decrease drastically (faster than linearly) for lower volumes of released water.
Over the long term, we find that the presence of the ice-covered lake has a climatic impact similar to
a simple body of water ice located in the Northern Plains.
For an obliquity of ∼ 45^∘ and atmospheric pressures > 80 mbar, we find that the lake ice is transported
progressively southward through the mechanisms of sublimation and adiabatic cooling.
At the same time, and as long as the initial water reservoir is not entirely sublimated
(a lifetime of 10^5 martian years for the outflow channel event described above),
ice deposits remain in the West Echus Chasma Plateau region where hints of hydrological activity
contemporaneous with outflow channel formation events have been observed. However,
because the high albedo of ice drives Mars to even colder temperatures,
snowmelt produced by seasonal solar forcing is difficult to attain.
§ INTRODUCTION
During the Late Hesperian epoch of the history of Mars (about 3.1-3.6 Gyrs ago; <cit.>),
the large outflow channels observed in the Chryse Planitia area are
thought to have been carved by huge water floods caused by catastrophic and sudden
release of groundwater <cit.>.
It has been speculated that such events could have warmed the climate
and possibly explain the contemporaneous
formation of dendritic valley networks observed in the nearby Valles Marineris area
and on the flanks of Alba Patera, Hecates Tholus, and Ceraunius Tholus,
and that have been interpreted to be precipitation-induced <cit.>.
Although the Late Hesperian epoch is thought on the basis of geology and mineralogy
to have been cold <cit.>,
the characteristics of these dendritic valleys suggest that
the valleys were formed under persistent warm conditions (e.g., <cit.>).
First, their high degree of branching is interpreted to indicate formation by precipitation.
Second, their high drainage densities - evidence of their high level of maturity -
and the presence of inner channels favor the presence of
stable liquid water for geologically long periods of time <cit.>.
Third, sedimentary morphologies observed in the
region of Valles Marineris <cit.> suggest a fluvial and lacustrine environment.
Under this hypothesis, the warm liquid water floods that formed the outflow channels would inject water vapor into the atmosphere,
a powerful greenhouse gas that could trigger a significant warming period possibly leading to long lasting
pluvial activity (rainfall).
In this paper, we use a 3-Dimensional Global Climate Model (LMD GCM) to explore the global climatic impact
of outflow channel water discharge events on a Late Hesperian Mars over a range of temperatures and atmospheric pressures.
These bursts of warm liquid groundwater outflows onto the surface
can trigger strong evaporation, possibly leading to global climate change.
How warm and how wet was the atmosphere of Late Hesperian Mars after such major outflow channel events?
The climatic effect of relatively small and
cool groundwater discharges has been studied on a regional scale <cit.>
and localized precipitation is indicated.
In this contribution, we investigate the climatic impact
at a global scale of a wide range of possible outflow channel events,
including the case of the most intense
outflow events ever recorded on Mars <cit.>.
Our work focuses on both (1) the direct short-term climate change
due to the initial strong evaporation of water vapor
and (2) the long-term change of the water cycle due to the
presence of liquid water and ice at non-stable locations.
When a warm liquid water flow reaches the surface,
strong evaporation occurs and the total evaporation rate increases with
the temperature and the surface area of the flow.
In terms of energy budget, a 300 K warm liquid water flow can
potentially convert ∼ 5% of its total mass into water vapor before freezing starts.
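This ∼5% figure follows directly from the ratio of the sensible heat stored above the freezing point to the latent heat of vaporization; a minimal sketch of the estimate (in Python, with standard water constants; this is an illustration, not part of the GCM itself):

c_w = 4186.0    # specific heat of liquid water, J kg^-1 K^-1
L_v = 2.26e6    # latent heat of vaporization, J kg^-1
T_flow = 300.0  # initial flow temperature, K

# Sensible heat released while cooling to 273.15 K, spent as latent heat:
f_evap = c_w * (T_flow - 273.15) / L_v
print(f"evaporable fraction ~ {100 * f_evap:.1f} %")  # ~ 5.0 %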
The injected water vapor will have a major role on the radiative budget of the planet.
First, water vapor is a greenhouse gas that can absorb ground thermal infrared emission efficiently.
Second, water vapor can condense to form clouds. In the process, large amounts of latent heat can be released in the atmosphere.
The clouds can reflect the incoming solar flux as well as contribute to an additional greenhouse effect,
depending on their height and the opacity of the background atmosphere, which depends on the total atmospheric pressure.
To study the global climatic effect of localized outflow channel events,
3D Global Climate Models are particularly relevant because they not only model
the physical processes described above, but also the 3D dynamical processes that
play a major role in climatic evolution.
In particular, we show in this paper that 3D dynamical processes (notably horizontal advection)
are key to understanding the relaxation timescale of the Late Hesperian martian atmosphere
immediately following major outflow channel events.
§ BACKGROUND
§.§ Outflow channels
§.§.§ Description
Outflow channels are long (up to ∼ 2000 km) and wide (up to ∼ 100 km) valleys
that were sculpted by large-scale debris-laden water flows <cit.>.
The most prominent martian outflow channels are located in the circum-Chryse area and
debouch in the Chryse Planitia itself <cit.>.
Several processes have been suggested to have caused such outburst floods <cit.>.
It is likely that the water that was released during these
events come from subsurface aquifers <cit.>.
In this scenario, the temperature of the extracted
subsurface water is controlled by the geothermal gradient and thus
would depend on its initial depth of origin.
During the Late Hesperian, when outflow
channel events largely occurred, this gradient could have been locally higher <cit.>,
because the circum-Chryse area is close to the volcanically active Tharsis region <cit.>.
Therefore, the discharged water could have reached the surface at a
maximum temperature of tens of degrees above the freezing point <cit.>.
We note that the run-away decomposition of CO_2
clathrate hydrate <cit.>,
proposed as a possible mechanism for the origin of the outflow water,
cannot produce water temperatures greater than 10 K above the freezing point.
To a first approximation, and from a climatic point of view,
the only difference between these two processes of liquid water discharge
is the temperature of the water. Thus, we considered in this paper various cases ranging from 280 Kelvins to 320 Kelvins (see section <ref>).
Whatever the physical process operating,
large amounts of water released at very high rates are needed at the origin of the water flow
in order to explain the erosion of the circum-Chryse outflow channels.
The quantity of water estimated to erode all the
Chryse basin channels is ∼ 6×10^6km^3
assuming 40% by volume of sediment <cit.> but could
possibly be much more if one assumes lower sediment loads <cit.>, as is,
for example, the case on Earth (∼0.1% of sediment by volume).
The different estimates of outflow channel single-event volumes,
discharge rates and durations lead to a wide range of results, but
two endmember scenarios can be defined and explored.
On the one hand, some researchers estimated that only a limited number of very intense
(volume up to 3×10^6 km^3, discharge rates up to 10^9 m^3 s^-1)
outflow channel formation events
actually occurred <cit.>.
On the other hand, more recently, other researchers argued that outflow channels were formed
by numerous individual small events <cit.>.
This latter work implies water volumes from hundreds to thousands of km^3,
discharge rates of 10^6-10^7 m^3 s^-1 for individual
events and a minimum period between successive single events of ∼ 20 martian years.
These endmember estimates differ by several orders of magnitude, but in this paper, we explored the full range.
§.§.§ Fate of the outflow channel liquid water flow
In this section, we provide a description of the possible fate, and
calculations of the possible velocities, of the outflow channel water;
these will serve as input for the description of the liquid water runoff
under various conditions in the GCM simulations.
The ejected liquid water flows from the circum-Chryse
area all inevitably debouch into the basin of Chryse.
However, Chryse Planitia is not a closed basin and if the total
amount of water released in a single event is high enough,
the water will spill into the Northern Plains <cit.>,
flowing down on slopes inclined at ∼0.03^∘ for more than 2000km.
This is an important point because, as the wetted area of the flow increases,
the total rate of evaporation rises.
The fate of the outflow channel liquid water flow can be subdivided into two steps:
1. First, the ground-surface liquid water flows 'inside' the outflow channels. The Reynolds Number Re of such flows is
given by
Re = ρ U_c R_c /μ ,
with U_c the mean water flow velocity in the channel, R_c the hydraulic radius (see below) of the
channel, ρ the density and μ the viscosity of the flow. For most of the outflow channel events, this number must have
been orders of magnitude higher than 500 <cit.>, meaning that the released ground water flows were turbulent.
The most accurate way <cit.> to calculate the mean velocity of such flows is to use the Darcy-Weisbach equation:
U_c = (8 g R_csinα / f_c)^1/2,
with g=g_mars=3.71 m s^-2 the gravity on Mars, α
the slope angle of the channel and f_c a dimensionless friction factor which mostly depends on the bed roughness z_c and the
water depth h of the flow. This factor can be expressed as follows <cit.>:
(8/f_c)^1/2 = a log_10(R_c/z_c) + b,
with a and b two empirical coefficients, respectively equal to 5.657 and 6.6303
for a fixed bed roughness z_c (z_c = 10^-2 m here) <cit.>.
Substituting this expression into equation (<ref>) gives:
U_c = (g R_csinα)^1/2 (a log_10(R_c/z_c) + b).
The hydraulic radius R_c is defined as the cross-sectional area of the channel divided
by its wetted perimeter:
R_c ∼ (W_c h)/(W_c + 2h),
with W_c the channel width and h the flow depth.
Because outflow channels are wider than deep (W_c ∼ 10-100 km wide but h ≤ 1 km deep), the hydraulic radius R_c can be
replaced by the depth of the water flow h.
To estimate the velocity of the flow according to its discharge rate Q = U_c W_c h, we solve equation (<ref>) using the
Lambert special function W defined by x = W(x e^x). We obtain:
h = ( 3 ln10 Q / (2a W_c (g sinα)^1/2 W(K)) )^2/3
and
U_c = ( 2a (g sinα)^1/2 W(K) / (3 ln10 W_c^1/2) )^2/3 Q^1/3,
where K = 3 ln10 · 10^(3b/(2a)) Q / (2a z_c^3/2 W_c (g sinα)^1/2).
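Equivalently, the implicit relation Q = U_c W_c h can be solved numerically rather than through the Lambert-W closed form; the sketch below (Python/SciPy; the channel width and slope are illustrative values, not fits to a specific channel) returns the same depth and velocity:

import numpy as np
from scipy.optimize import brentq

g = 3.71              # Mars gravity, m s^-2
z_c = 1e-2            # bed roughness, m (value used in the text)
a, b = 5.657, 6.6303  # empirical Darcy-Weisbach coefficients

def mean_velocity(h, slope):
    # Mean velocity U_c for flow depth h (m) and slope angle (rad)
    return np.sqrt(g * h * np.sin(slope)) * (a * np.log10(h / z_c) + b)

def flow_depth(Q, W_c, slope):
    # Depth h solving Q = U_c(h) W_c h, in the wide-channel limit R_c ~ h
    return brentq(lambda h: mean_velocity(h, slope) * W_c * h - Q, 1e-3, 1e4)

# Illustrative channel: Q = 1e9 m^3 s^-1, width 100 km, slope 0.1 degrees
h = flow_depth(1e9, 100e3, np.radians(0.1))
print(h, mean_velocity(h, np.radians(0.1)))   # ~250 m, ~40 m s^-1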
The high concentrations of sediments
in the flows (up to 40% of the volume) can increase the volumetric
mass density ρ (initially of ρ_water ∼ 1000 kg m^-3) by a factor of 2
and the viscosity μ (initially of μ_water,300K ∼ 8×10^-4 Pa s) by a factor of 16 <cit.>, reducing
by almost 10 the corresponding Reynolds Number. Nonetheless, since both the
sediment load (from 0.1 to 40 %) and the dependence of the friction factor f_c on the Reynolds Number Re,
are poorly known <cit.>, their effects were not taken into account in the flow depth/velocity calculations.
2. As soon as the water flow leaves its channel and reaches Chryse Planitia, the width of the flow strongly increases (up to 2000 km)
and the slope angle decreases down to 0.03^∘. The mean flow velocity and height both decrease (Figure <ref>) whereas the wetted area
increases significantly, leading to even more evaporation. The water will eventually end up in the main topographic depression of Vastitas Borealis
(around -30^∘/70^∘ in longitude/latitude) building up with time.
If the volume of water released by the outflow channel event is higher than ∼ 2.6×10^6 km^3, the
water will spill from the North Polar basin to the Utopia Basin, filling it potentially up to 1.1×10^6 km^3 <cit.>.
If the volume of water exceeds 3.7×10^6 km^3, the two basins become connected.
They can be filled up to a few tens of millions of km^3.
Once the flow stops, some water will possibly remain in local topographic depressions such as impact craters or
tectonic basins, thereby contributing to extended evaporation.
If the volume of water or the temperature of the flow are too low, the liquid water flow can potentially freeze before
reaching the lowest points of the northern lowlands. This would likely occur only for the weakest outflow channel events
(low volumes/discharge rates/temperatures), and we do not discuss this possibility further in this work.
§.§ Late Hesperian Climate
Late Hesperian Mars was likely to have been cold and dry globally,
as suggested by the weak occurence of well-developed
valley networks <cit.>,
the absence of observed phyllosilicates within layered deposits
<cit.>,
and the low erosion rates inferred from impact crater morphologies <cit.>.
As suggested by the stability of liquid water, and as supported by using the size distribution of ancient craters
<cit.>, the atmosphere of Mars at the end of the Noachian epoch was likely to have been thicker than the
∼ 8 mbar present day atmosphere.
From the Noachian-Hesperian transition to the Late Hesperian era,
magmatism may have been responsible for the build up of up to 400 mbar of CO_2 in the atmosphere <cit.>.
In fact, it is during the period of formation of the outflow channels that
the release of gaseous CO_2 could have been at its maximum <cit.>:
1. Up to 100 mbars of CO_2 could have been released by the contemporaneous Tharsis volcanism;
2. up to 60 mbars of CO_2 per volume of 10^6 km^3 of outflow waters if produced by clathrate destabilization;
and 3. up to 20 mbars of CO_2 per volume of 10^6 km^3 of outflow waters if coming from
highly pressurized groundwater reservoirs saturated in CO_2.
However, most recent estimates of the several CO_2 loss processes (photochemical escape,
effect of solar wind, sputtering, impact erosion, loss to carbonates, etc.;
summarized in <cit.>) suggest that, in spite of the previously mentioned high estimates of
CO_2 outgassing amounts, it is very unlikely that the atmosphere of Late Hesperian Mars
was thicker than 1 bar. In other words, there are currently no known physical/chemical processes
that could accommodate the loss of an atmosphere at pressures of more than 1 bar.
To summarize, the Late Hesperian atmosphere was probably thicker than 8 mbar and thinner than 1 bar,
but the actual surface pressure is still a matter of debate.
In this paper, we find that the thickness of Late Hesperian Mars atmosphere
plays an important role in relation to the climatic impact of outflow channel formation events.
We chose to explore a wide possibility of atmospheric surface pressures, ranging from 40 mbar to 1 bar.
§ MODEL DESCRIPTION
§.§ The Late Hesperian Global Climate Model
In this paper we use the 3-Dimensions LMD Generic Global Climate Model,
specifically developed for the study of the climate of ancient Mars <cit.>,
and adapted here for the study of the influence of outflow channel events on Mars climate during the Late Hesperian.
This model is originally derived from the LMDz 3-dimensional
Earth Global Climate Model <cit.>,
which solves the basic equations of geophysical fluid dynamics using
a finite difference dynamical core on an Arakawa C grid.
The same model has been used to study many
different planetary atmospheres including Archean Earth <cit.>, a highly irradiated 'future' Earth <cit.>,
Pluto <cit.>, Saturn <cit.> and exoplanets <cit.>.
Most of the simulations presented in this paper were performed at
a spatial resolution of 96 × 48 (e.g. 3.75^∘ × 3.75^∘; at the equator,
this gives in average 220 km × 220 km) in longitude / latitude.
This corresponds approximately to twice the horizontal resolution used
and eight times the calculation time needed in the work done by <cit.>
and <cit.>. For this reason, a parallelized version of the
GCM was used to deal with the long computation times.
We explored the influence of the horizontal resolution (up to 1^∘ x 1^∘ / 360 × 180
grid in longitude / latitude) and did not find any significant discrepancy compared with the 96 × 48 lower-resolution simulations.
In the vertical direction, the model is composed of 15 distinct atmospheric layers,
generally covering altitudes from the surface to ∼ 50 km.
Hybrid σ coordinates (where σ is the ratio between
pressure and surface pressure) and fixed pressure levels were
used in the lower and the upper atmosphere, respectively.
The lowest atmospheric mid-layers are located
around [18, 40, 100, 230, ..] meters and the highest at about
[.., 20, 25, 35, 45] kilometers.
We used the present-day MOLA (Mars Orbiter Laser Altimeter) Mars surface topography <cit.>,
and we considered that most of the Tharsis volcanic load was largely in place by the end of the Hesperian epoch <cit.>.
We set the obliquity of Mars at 45^∘ to be consistent with both the most likely obliquity
(41.8^∘) for ancient Mars calculated by <cit.> and
one of the reference obliquities (45^∘) used in <cit.>.
The sensitivity of obliquity (and more generally of the seasonal effects) is discussed in section <ref>.
To account for the thermal conduction in the subsurface,
we use an 18-layer thermal diffusion soil model that originally derives from
and has been modified to take into account soil layers with various conductivities. The mid-layer depths
range from d_0 ∼ 0.1 mm to d_17 ∼ 18 m, following the power law
d_n = d_0 × 2^n with n being the corresponding soil level, chosen to take
into account both the diurnal and seasonal thermal waves.
We assumed the thermal inertia of the Late Hesperian martian regolith to be
constant over the entire planet and equal to 250 J m^-2 s^-1/2 K^-1.
This is slightly higher than the current Mars global mean thermal inertia in order to account
for the higher atmospheric pressure.
Subgrid-scale dynamical processes (turbulent mixing
and convection) were parameterized as in <cit.> and <cit.>.
The planetary boundary layer was accounted for by the <cit.> and <cit.>
time-dependent 2.5-level closure scheme, and
complemented by a ‘‘convective adjustment’’ which rapidly mixes
the atmosphere in the case of unstable temperature profiles (see section <ref> for more details).
In the simulations that include outflow channel events, the dynamical time step is ∼ 45 seconds
(respectively ∼ 184 s for the control simulations).
The radiative transfer and the physical parameterizations
are calculated every ∼ 15 minutes and ∼ 4 minutes
(respectively every ∼ 1 hour and ∼ 15 minutes for the control simulations).
§.§.§ Radiative Transfer in a CO_2/H_2O mixed atmosphere.
The GCM includes a generalized radiative
transfer for a variable gaseous atmospheric composition made of a mix of CO_2
and H_2O (HITRAN 2012 database, <cit.>)
using the 'correlated-k' method <cit.>)
suited for fast calculation.
For this, we decomposed the atmospheric Temperatures / Pressures / Water Vapor Mixing Ratio
into the following respective 7 x 8 x 8 grid:
Temperatures = {100,150, .. ,350,400} K;
Pressures = {10^-6,10^-5, .. ,1,10} bar;
H_2O Mixing Ratio = {10^-7,10^-6, .. ,10^-2,10^-1,1 }
mol of H_2O / mol of air (H_2O+CO_2 here).
CO_2 collision-induced absorptions <cit.> were included in our calculations as in <cit.>,
as well as the H_2O continuums. For this, we used the CKD model <cit.> with H_2O lines truncated at 25 cm^-1.
For the computation, we used 32 spectral bands in the thermal infrared and 35 in the visible domain.
16 non-regularly spaced grid points were used for the g-space integration, where g is the cumulative
distribution function of the absorption data for each band.
We used a two-stream scheme <cit.> to take into account the radiative effects
of aerosols (CO_2 ice and H_2O clouds) and the Rayleigh scattering (mostly by CO_2 molecules),
using the method of <cit.>.
In summary, compared to the radiative transfer calculation used in <cit.>,
we utilized here a more recent spectroscopic database (HITRAN2012 instead of HITRAN2008) and built
new correlated-k coefficients suited for wet atmospheres (water vapor VMR up to 100%). In practice, the maximum water
vapor Mass Mixing Ratio that was reached in our simulations (in the case of low surface pressure simulations) was ∼ 20%.
In addition, we chose a mean solar flux of 465 W.m^-2
(79% of the present-day value of Mars; 35% of Earth's present-day value;
and 105% of the flux used in the <cit.> work), corresponding to the reduced luminosity
from standard solar evolution models <cit.> 3.0 Byrs ago, during
the Late Hesperian era. During this epoch, the Sun was also 1.5 % cooler <cit.>;
we did not, however, include in our model the resulting shift in the solar spectrum.
It is worth noting, however, that absolute ages here are based on crater counting and are therefore not well constrained.
For instance, the valley networks observed in West Echus Chasma Plateau are 2.9 to 3.4 billion years old <cit.>.
§.§.§ CO_2 and Water cycles
Both CO_2 and H_2O cycles are included in the GCM used in this work.
1. Carbon Dioxide is here the dominant gaseous species. In our model,
CO_2 can condense to form CO_2 ice clouds and surface frost if the temperature
drops below the saturation temperature. Atmospheric CO_2 ice particles are sedimented
and thus can accumulate at the surface. The CO_2 ice layer
formed at the surface can sublimate and recycle the CO_2 in the atmosphere.
The CO_2 ice on the surface contributes to the surface albedo calculation:
if the CO_2 ice layer exceeds a threshold value of 1 mm thickness,
then the local surface albedo is set immediately to the albedo of CO_2 ice (0.5 in this work).
2. A self-consistent H_2O water cycle is also included in the GCM.
In the atmosphere, water vapor can condense into
liquid water droplets or water ice particles, depending
on the atmospheric temperature and pressure, forming clouds.
At the surface, because the range of surface pressures modeled in this work is well above the
6 mbar triple-point pressure, liquid water and water ice can coexist.
Their contributions are both taken into account in the albedo calculation as in <cit.>.
The stability of liquid water / ice / CO_2 ice at the surface is governed by the balance
between radiative and sensible heat fluxes (direct solar insolation, thermal
radiation from the surface and the atmosphere, turbulent fluxes) and thermal
conduction in the soil. Melting, freezing, condensation, evaporation, sublimation and
precipitation physical processes are all included in the model.
§.§.§ Convective Adjustment
Outflow channel events result in the emplacement of warm liquid water,
which leads to the sudden and intense warming of the atmosphere.
Global Climate Models (∼ 200 km grid size for our simulations) are not suited
to resolve the convection processes as is done
in the case of mesoscale models, which have a typical km-size resolution <cit.>.
Moist convection was taken into account following a moist convective adjustment that originally derives from the 'Manabe scheme' <cit.>.
In our scheme, relative humidity is left free and capped at 100%, since it is
inappropriate here to use an empirical value for relative humidity (versus altitude)
that comes from Earth observations, as proposed in the original scheme.
This scheme has been chosen instead of more refined ones because it is: 1. robust for a wide range of pressures;
2. energy-conservative; and 3. it is the most physically consistent scheme for exotic
(non Earth-like) situations such as the ones induced by outflow channel events.
In practice, when an atmospheric grid cell reaches 100% saturation and the corresponding atmospheric column has
an unstable temperature vertical profile, the moist convective adjustment scheme is performed to get a stable moist-adiabatic lapse rate.
In our simulations, after major outflow channel events, large amounts of water vapor
can be released into the atmosphere and the water vapor can easily become a dominant atmospheric species.
In fact we recorded up to 20% water vapor Mass Mixing Ratios
following intense outflow channel events (in the case of low surface pressure).
Thus, we used a generalized formulation of the moist-adiabat lapse rate developed by <cit.> (Supplementary Materials)
to account for the fact that water vapor can become a main species in our simulations.
In our model we also used the numerical scheme proposed by <cit.> to account for atmospheric
mass change after the condensation or the evaporation of gases (water vapor in our case);
this calculation is usually neglected in most of the well-known Global Climate Models.
More details on the scheme can be found in <cit.> (Supplementary Materials).
This scheme comes from previous work for the CO_2 cycle on present-day Mars <cit.>, where there is some observational validation.
§.§.§ Parameterization of the precipitation events
H_2O precipitation events were parameterized using a simple cloud water content threshold scheme <cit.> as
in <cit.>. If the cloud water content exceeds a threshold l_0 in a given atmospheric grid cell,
precipitation occurs. We chose l_0 to be constant and equal to 0.001 kg/kg as in <cit.>.
<cit.> examined the influence of l_0 and found it to be very low
(1K difference between l_0=0.001 and 0.01 kg/kg).
We note that the reevaporation of the precipitation is also taken into account in our numerical scheme.
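In code form, this parameterization simply removes, at each call of the physics, any condensate in excess of l_0 (a minimal sketch; the re-evaporation of the falling precipitation in the underlying layers is treated separately and is not shown here):

import numpy as np

L0 = 0.001  # cloud water content threshold, kg/kg

def precipitate(q_cond, dp, g=3.71):
    # q_cond : condensed water mixing ratio per layer (kg/kg)
    # dp     : pressure thickness of each layer (Pa)
    excess = np.maximum(q_cond - L0, 0.0)   # condensate above threshold
    precip = np.sum(excess * dp / g)        # column precipitation, kg m^-2
    return q_cond - excess, precip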
§.§ Control Simulations without outflow events
We performed control simulations in the conditions described above for 5 different surface pressures
(40 mbar, 80 mbar, 0.2 bar, 0.5 bar, 1 bar) and we obtained results which are consistent with <cit.> and <cit.>.
For these control runs, the three main differences between our work and <cit.> were: 1. the updated absorption coefficients
(now HITRAN 2012); 2. an increase of the solar luminosity (now 79% of Mars present-day value); and 3. the increase of the horizontal
model resolution (from 32 x 32 to 96 x 48 in longitude x latitude).
Figure <ref> shows the mean annual surface temperatures and the position of the stable ice deposits for the
reference case (0.2 bar) and the two surface pressure endmembers (40 mbar and 1 bar).
The mean annual surface temperatures are slightly lower than in Figure 3 of <cit.>,
which was obtained for a fixed 100% relative humidity. The difference may also be partly due to a slightly reduced CO_2 ice cloud
warming effect at high spatial resolution.
The stable surface ice deposit locations were calculated using the ice equilibration algorithm of <cit.>.
Starting from a random initial surface ice distribution, (1) we run the GCM for two martian years then
(2) we extrapolate the ice layer field h_ice evolution calculation using:
h_ice(t+n_years)=h_ice(t)+n_years × Δ h_ice,
with Δ h_ice the annual mean ice field change of the one-martian-year previous simulation and
n_years the number of years requested for the extrapolation.
Then, (3) we eliminate the seasonal ice deposits and (4) we normalize the extrapolated ice field by the initial ice
inventory to conserve the total ice mass.
Eventually, (5) we repeat the process.
This algorithm has been shown <cit.> to be insensitive to the proposed
initial ice field location at the beginning of the simulation, provided that the scheme
is repeated a sufficient number of times.
In total, for our control simulations, we performed the scheme 30 times, with n_years=100 for the first 5 loops and
n_years=10 for 20 more loops for a resolution of 32 x 32. Then, we ran the algorithm 5 more times at the increased resolution
of 96 x 48 to obtain a stable initial state necessary for the implementation of outflow channel events.
We note that 3D climate modeling under conditions similar to those described above <cit.>
have not yet been able to produce liquid water or at least significant precipitation by climatic processes
anywhere on the planet, even when maximizing the greenhouse effect of CO_2 ice clouds.
§.§ Experiment - Modeling of Outflow Channel Events
§.§.§ Description of the parameterization
Outflow channel events can be modeled to a first approximation by the sudden release,
and then the spread of warm liquid water over the surface of Mars.
In our simulations, this was accomplished by the emplacement of a fully mixed layer of warm liquid water at the surface.
The fate of this water depends on the following processes (summarized in Figure <ref>):
1. The liquid water layer loses some energy by thermal conduction to the initially cold ground.
For this, we fix the uppermost of the 18th martian regolith layers at the temperature of the water, and calculate the
heat flux lost (or gained) by the warm water to the downward layers.
2. The warm liquid water layer cools by emitting thermal infrared radiation at σ T_surf^4.
This emission contributes to the radiative transfer budget.
3. The liquid water evaporates and loses some latent heat. The evaporation E at the location of the warm water was computed
within the boundary-layer scheme, using the following bulk aerodynamic formula:
E = ρ_1 C_d V_1 [q_sat(T_surf)-q_1],
where ρ_1 and V_1 are the volumetric mass of air and the wind velocity at the first atmospheric level,
q_sat(T_surf) is the water vapor mass mixing ratio at saturation at the surface,
and q_1 is the mixing ratio in the first atmospheric layer. The aerodynamic coefficient is given by
C_d = (κ / ln(1+z_1 / z_0))^2 ∼ 2.5×10^-3, where κ = 0.4 is the Von Karman constant, z_0 is the roughness coefficient and
z_1 is the altitude of the first level (∼ 18 meters).
We modeled the sensible heat exchanged between the surface and the first atmospheric layer using a similar formula:
F_sensible = ρ_1 C_p C_d V_1 [T_surf-T_1],
with T_1 the temperature of the first atmospheric level and C_p the mass heat capacity assumed equal to 850 J K^-1 kg^-1
in case of a CO_2-only atmospheric composition.
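Both bulk formulas translate directly into code (a sketch; q_sat_surf is the saturation mixing ratio at the surface temperature, computed, e.g., from the saturation law given in section <ref>, and z_0 is set here to ∼6 mm so that C_d matches the quoted ∼2.5×10^-3 — the actual roughness length used in the GCM is not restated here):

import numpy as np

def surface_fluxes(rho1, V1, T_surf, T1, q1, q_sat_surf,
                   z0=6e-3, z1=18.0, cp=850.0, kappa=0.4):
    # Bulk aerodynamic exchange with the first atmospheric level (~18 m)
    Cd = (kappa / np.log(1.0 + z1 / z0))**2        # ~2.5e-3 for these z0, z1
    E = rho1 * Cd * V1 * (q_sat_surf - q1)         # evaporation, kg m^-2 s^-1
    F_sens = rho1 * cp * Cd * V1 * (T_surf - T1)   # sensible heat, W m^-2
    return E, F_sens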
4. Depending on the volume of water modeled, liquid water will flow
from the Circum-Chryse outflow channel sources to Chryse Planitia, then
to Acidalia Planitia, and eventually to the Northern Plains.
First, we modeled the displacement of the flow calculated from its height and its velocity. The velocity of the flow mostly
depends on its width but also on the slope of the terrain. For each grid, we used the subgrid mean slope and
the subgrid mean orientation of the slope to evaluate (using equations (<ref>) and (<ref>))
the velocity and the direction of the flow. Second, we used
a simple bucket scheme to model the progressive filling of the topographic depressions.
Warm waters flowing on the Northern Plain slopes can also encounter H_2O ice (it can be either stable
at a particular latitude, or related to previous outflow channel events, but from the point of view of
latent heat exchange and climate, it does not change anything)
or seasonal CO_2 ice (typically present for atmospheres thinner than 1 bar).
We modeled the interaction of
H_2O and CO_2 ices with warm liquid water using energy conservation. If the liquid water is warm and in a sufficient amount,
all the CO_2 ice sublimates and is added to the atmosphere. Similarly, all the water ice
encountered by the warm flow is melted, and the mixture is brought to the resulting equilibrium temperature.
Once the flow has reached a stable position (e.g. forming a lake), in reality some water may be trapped in local
topographic depressions (impact craters, tectonic basins, ...); it is difficult, however, to estimate adequately how much
water might be sequestered in this manner.
First, the detailed topography of the terrains is unknown prior to resurfacing by the outflow channel events.
Second, the water outflows themselves modified (and probably smoothed) the topography.
Thus, to take into account not only the effect of the trapped water but also the role of the wet ground,
we arbitrarily placed a minimum 20 cm layer of liquid water in all the locations where the liquid water flow passed through.
This assumption may also be representative of the fact that in reality the discharge rate
does not have a rectangular shape (in time) as we assumed in our parameterizations.
5. As time goes on, the liquid water flow cools. If its temperature
reaches the 273.15 K freezing temperature (assuming no salts), the water starts to freeze.
On Earth, salinity drives the freezing point of
oceans to ∼ 271 K, and assuming similar salinities in the outflow waters would not change our results much.
To account for this process, we developed a multiple layer modified version of the soil thermal conduction model
already included in the GCM. We have in total 100+ layers, with mid-layer depths
ranging from d_0 ∼ 0.1 mm to d_14 ∼ 2 m, following the power law
d_n,n≤14 = d_0 × 2^n with n being the corresponding soil level and
the linear law d_n,n>14=d_14×(n-13) for the deepest layers.
The layers are separated into two parts: the ice cover above and the liquid water below.
For the water ice layers, we use a thermal conductivity of 2.5Wm^-1K^-1
and a volumetric heat capacity of 2×10^6 J m^-3 K^-1.
For the liquid water, we use, respectively,
a thermal inertia of 20000 J m^-2 K^-1 s^-1/2 (artificially high to account for convection)
and a volumetric heat capacity of 4×10^6 J m^-3 K^-1.
At each physical timestep, we estimate the thermal diffusion flux lost by the liquid water layer to the water ice layer and
calculate (using the conservation of energy) the amount of liquid water to freeze. If the depth of the ice - initially going
down to d=d_n - exceeds the depth d_n+1 of the next layer, we convert layer n+1 into ice.
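A stripped-down sketch of this vertical grid and of the freezing step is given below (illustration only — the implicit diffusion solver and layer-by-layer temperature update of the GCM are omitted, and only the first 40 of the 100+ layers are generated):

import numpy as np

d0 = 1e-4   # shallowest mid-layer depth, m
# Mid-layer depths: d_n = d0 * 2^n for n <= 14, then linear below
depths = np.array([d0 * 2**n for n in range(15)] +
                  [d0 * 2**14 * (n - 13) for n in range(15, 40)])

LAMBDA_ICE, RHO_ICE, L_M = 2.5, 920.0, 3.34e5  # SI units

def freeze_step(h_ice, T_surf, dt, T_bot=273.15):
    # Thicken the ice layer over one timestep dt (s): the conductive heat
    # lost through the existing ice freezes more water at the interface.
    flux = LAMBDA_ICE * (T_bot - T_surf) / max(h_ice, depths[0])
    return h_ice + max(flux, 0.0) * dt / (RHO_ICE * L_M)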
We note that the use of a multi-layer soil model is important to describe the sea-ice
formation, evolution and its impact on possible cold early martian climates.
Such refined models are better suited to represent the temperature
profile evolution within the ice layer (that may evolve with seasonal forcing or as the ice layer thickens)
and thus the surface temperature that controls the sublimation rate. In particular, our simulations show
that up to 95% of the annual sublimation rate can be produced during the summer seasons.
This requires a good estimate of the seasonal variations of the surface temperature above the ice.
Simultaneously, as the ice layer forms, we also linearly increase the surface albedo from
A_liq = 0.07 (if no ice) to A_ice = 0.5 (once the ice layer thickness h exceeds the threshold
value h_* = 3.5 cm; <cit.>) as follows:
A = A_liq + (A_ice - A_liq) h/h_*.
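In code, this ramp is a one-line transcription (a sketch, with the transition capped at A_ice once h exceeds h_*):

def surface_albedo(h_ice, A_liq=0.07, A_ice=0.5, h_star=0.035):
    # Linear liquid-to-ice albedo transition with ice thickness h_ice (m)
    return A_liq + (A_ice - A_liq) * min(h_ice / h_star, 1.0)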
6. The amount of water delivered by outflow events can be very large and thus lead to the accumulation of large quantities of liquid water.
The timing expected for this water to freeze can be evaluated using a combination
of the thermal conduction flux in the ice layer
F = λ_ice (T_bottom-T_surf)/h and the conservation of energy.
Assuming that the temperature in the frozen layer varies linearly between T_bottom = 273.15 K and T_surf (assumed constant)
as hypothesized in classical 2-layers thermodynamical models <cit.>, we have:
ρ_ice (L_m -C_ice (T_bottom-T_surf)/2) ∂ h/∂ t = λ_ice (T_bottom-T_surf)/h,
where ρ_ice is the volumetric mass of the ice (9.2×10^2kgm^-3), C_ice is the specific heat capacity of the ice
(2.1×10^3Jkg^-1K^-1), λ_ice is the conductivity of the ice (2.5Wm^-1K^-1) and
L_m ∼ 3.34×10^5 J kg^-1 is the latent heat of fusion of water ice.
Integrating over time leads to an expression for t(h), the time required to freeze a layer of depth h:
t(h) = ρ_ice/2 λ_ice (L_m/(T_bottom-T_surf)-C_ice/2) h^2.
For example, the outflow event presented in section <ref> leads to the accumulation of up to 600 meters of liquid water.
A typical timescale (for T_surf ∼ 200 K) for this water to freeze, according to equation <ref>,
is ∼ 4 × 10^3 martian years.
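This estimate is easy to verify numerically; with the constants above and T_surf = 200 K, the ∼600 m deep lake of the reference simulation indeed freezes in a few thousand martian years (a sketch):

RHO_ICE, LAMBDA_ICE = 920.0, 2.5   # kg m^-3, W m^-1 K^-1
C_ICE, L_M = 2.1e3, 3.34e5         # J kg^-1 K^-1, J kg^-1
MARS_YEAR = 687 * 86400.0          # s

def freezing_time(h, T_surf, T_bot=273.15):
    # Time (s) to freeze a column of depth h (m), from the equation above
    dT = T_bot - T_surf
    return RHO_ICE / (2 * LAMBDA_ICE) * (L_M / dT - C_ICE / 2) * h**2

print(freezing_time(600.0, 200.0) / MARS_YEAR)   # ~ 3.9e3 martian years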
To account for such long timescales, we developed a modified version of the ice iteration scheme presented above.
(1) First, we run the GCM for a few years then (2) every 2 years, we extrapolate the amount of ice that has locally
condensed and sublimed in the simulations by
an arbitrary factor n_years. Simultaneously, (3) we proceed to a linear extrapolation of the amount of frozen water/of the
growth of the ice layer thickness by the same factor n_years, using the conservation of energy.
In effect, we approximate the t = f(h) curve by straight-line segments whose lengths are multiples of n_years.
In the reference simulation presented in section <ref>, we
performed first 5 martian years, then we extrapolated every 2 years using n_years=[5,5,20,20,50,50,100,100,500,500].
After the extrapolation of the ice field/the ice layer depth is completed,
(4) we arbitrarily set the ground temperature profile (where liquid water remains)
to be linear, between T_bottom = 273.15 K and T_surf calculated using
the conservation of energy. This is a way to take into account (at first order) the evolution
of the deepest ground layers that require very long timescales
to stabilize their temperature profiles. The year following the extrapolation is thus
also useful to get back a consistent temperature
profile in the first layers (up to 15 meters typically).
7. Once the outflow water is completely frozen, we use again the ice iteration scheme (see section <ref>)
to estimate the time required for the ice to reach its stable positions.
§ RESULTS - THE REFERENCE SIMULATION
We present in this section the results of simulations of outflow channel formation events occuring in the largest
of the Circum Chryse channels: Kasei Vallis.
We chose this particular location because 1. The Kasei Vallis outflow channel begins in Echus Chasma,
which is close to the West Echus Chasma Plateau valley networks; and 2. Kasei Vallis is one of the largest
outflow channels on Mars <cit.>.
We focus first on a discharge of 10^6 km^3 (6.9 meters of GEL - Global Equivalent Layer)
of liquid water heated at 300 Kelvins.
Water is released at a constant rate of 10^9 m^3 s^-1 in the region of Echus Chasma
(see Figure <ref> for the associated flow).
This event is an upper estimate (in volume, discharge rate and temperatures) of
the characteristics of outflow channel formation events (see section <ref> for references).
As explained in section <ref>, surface atmospheric pressure in the Late Hesperian epoch is poorly constrained.
Thus, we focus first on the case of a surface pressure of 0.2 bar.
§.§ Description of the flow
A volume of 10^6 km^3 of liquid water is released at the discharge rate of 1 km^3 s^-1.
It takes approximately 1.1 martian days for the liquid water to travel from the source of the flow
(in Echus Chasma, at ∼ 4^∘N,-79^∘E) to the end of Kasei Vallis
(at ∼ 30^∘N,-45^∘E), and 1.5 more days for the same flow to reach the
main topographic depression of the northern plains (at ∼ 70^∘N,-30^∘E).
This corresponds, respectively, to mean flow speeds of ∼ 30 m s^-1 and
∼ 16 m s^-1, which are consistent with the two endmembers values shown in Figure <ref>.
After ∼ 11 days, the source of ground water (located in Echus Chasma) becomes inactive. Eventually, it takes approximately 20 martian days
in this scenario for the liquid water that has erupted in Echus Chasma to form a stable lake in the
lowest part of the Northern Plains. This lake extends over an area of 4.2 millions of km^2
(∼ 2.9% of the global surface area of Mars), has a mean depth of ∼ 240 meters and a peak depth of ∼ 600 meters.
Some water (∼ 20 centimeters) is left at locations with latitude < 50^∘N to
account for the wet ground and the water possibly trapped in the topographic depressions.
The fate of the outflow channel formation event can be divided into two main parts:
1. During the first ∼ 500 days following the
event, the 'Warm Phase', an intense hydrological cycle takes place.
The end of this phase approximately coincides with the time when the Northern Plains lake becomes fully covered by an ice layer.
2. During the following ∼ 10^5 martian years, the martian climate is controlled by a weak and cold water cycle.
It takes approximately the first 4 × 10^3 years (as predicted by simple energy-balance models; <cit.>)
for the lake to be entirely frozen,
and the rest to sublimate the lake completely and move the ice to its positions of equilibrium,
assuming no ice gets buried below a lag deposit or gets transported through glacier flows.
§.§ The Warm Phase
As soon as the simulation starts, the warm 300 K liquid water
released in Echus Chasma evaporates efficiently following equation <ref>, while flowing
over the Northern Plains slopes.
At the locations reached by the flow, which represent ∼ 11 million km^2 (∼ 7.5% of the global surface area of Mars),
the evaporation rate can reach ∼ 10^-3 kg m^-2 s^-1 for tens of days. Figure <ref> (left) shows
the mean evaporation rate for the 4.2×10^6 km^2 Northern Plains stable lake formed by the outflow channel flood accumulation.
During the 500 days following the event,
a global precipitable water amount of ∼ 23 centimeters is evaporated by the liquid water flow.
Evaporation of the lake accounts for 96 % of this amount (blue region in Figure <ref>, after 480 hours)
and 4 % by the evaporation of the transient flow (grey region in Figure <ref>, after 480 hours).
This amount of cumulative evaporation corresponds to ∼ 3.4 % of the initial volume of water ejected
by the ouflow event, which is approximately 0.7 times the amount of evaporated water
that would be expected if the extra thermal heat (compared to 273 K) of the 300 K flow was
simply converted into latent heat.
§.§.§ Mechanisms warming the atmosphere
As the water vapor starts to accumulate above the flow, the initially cold martian lower atmosphere
soon reaches the water vapor saturation pressure. For instance, at 210 Kelvins, which is typically the
mean surface temperature expected for a 0.2 bar atmosphere (Figure <ref>), the water vapor
saturation pressure is ∼ 1.4 Pascals and the mass mixing ratio at saturation
in a 0.2 bar atmosphere is thereby ∼ 7×10^-5 kg/kg^-1.
This situation leads to the early condensation of the water vapor, latent heat release and thus to the warming of the atmosphere.
We identified this process as the dominant mechanism responsible for the warming of the atmosphere after an outflow event.
As the atmospheric temperatures increase, the capability of the atmosphere to retain water vapor also increases.
The mass mixing ratio at saturation, namely Q_sat, can be written as follows:
Q_sat,H_2O = P_sat,H_2O/(P_CO_2+P_sat,H_2O),
with P_sat,H_2O(T) = P_ref e^[(L_v M_H_2O/R)(1/T_ref-1/T)],
with P_sat,H_2O the water vapor saturation pressure and
P_CO_2 the CO_2 partial pressure,
with P_ref and T_ref the pression/temperature of the triple point of water,
respectively equal to 612 Pascals/273.16 Kelvins,
M_H_2O ∼ 1.8×10^-2 kg mol^-1 the molar mass of water,
and L_v ∼ 2.26×10^6 J kg^-1 the latent heat of vaporization of liquid water.
For low amounts of water, this relation simply becomes:
Q_sat,H_2O(T) ∼ (P_ref/P_CO_2) e^[(L_v M_H_2O/R)(1/T_ref-1/T)].
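The expression is straightforward to evaluate (a sketch; below the freezing point we substitute the latent heat of sublimation L_v + L_m for L_v, which recovers the ∼1.4 Pa and ∼7×10^-5 kg/kg figures quoted above):

import numpy as np

P_REF, T_REF, R = 612.0, 273.16, 8.314   # triple point, gas constant (SI)
M_H2O, L_V, L_M = 1.8e-2, 2.26e6, 3.34e5

def q_sat(T, p_co2, L=L_V + L_M):
    # Saturation pressure and mixing ratio (equations above); L defaults
    # to the sublimation value, appropriate over ice (T < 273 K)
    p_sat = P_REF * np.exp(L * M_H2O / R * (1.0 / T_REF - 1.0 / T))
    return p_sat / (p_co2 + p_sat)

print(q_sat(210.0, 2.0e4))   # ~ 6e-5 at 210 K in a 0.2 bar atmosphere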
Therefore, as the atmospheric temperatures increase, the atmosphere is also able to transport more and more water upwards.
Thus, as time goes on, the atmosphere becomes more and more warm and wet. As the atmospheric water vapor content increases,
the absorption of the atmosphere in the infrared wavelength range (essentially due to the thermal emission of the warm outflow waters)
increases and thus contributes to an additional warming of the atmosphere.
In total, during the warm phase (the first 500 days), the atmosphere (above the flow/lake) is directly
warmed by the following processes (in decreasing order of importance): 1. the condensation of the water vapor produced
by the warm flow (∼ 56 %); 2. the sensible heat exchanged between the flow/lake and the lowest atmospheric layer (∼ 22 %);
3. the thermal infrared emission of the flow absorbed by the mixture of gaseous CO_2/H_2O (∼ 13 %);
and 4. the extra solar absorption resulting from the presence of water vapor excess,
which has strong absorption lines in the solar domain (∼ 9 %). The atmospheric solar absorption is particularly important in this scenario, because
we chose the outflow channel event to start at Ls = 5^∘ and thus to occur during the northern hemisphere spring and summer.
Of course, all these processes reinforce and strengthen each other.
Figure <ref> shows the spatial evolution of the water vapor atmospheric content.
Initially, water vapor accumulates at low altitudes, in the regions where the liquid water flow is located.
After a few days, the water vapor has reached much higher altitudes (up to ∼ 30 km) through the
aforementioned warming mechanisms and the convective adjustment scheme.
Eventually, once the upper part of the atmosphere has become wet enough (typically after ∼ 10 days in this scenario),
the high altitude horizontal winds (around ∼ 15 km) advect the water vapor into the neighbouring regions.
After ∼ 50 days, all the martian regions located above ∼ 50^∘N
have become more or less wet, with a typical water vapor mean mass mixing ratio of 0.3%.
Similarly, the impact of H_2O condensation (and other additional warming sources) on atmospheric temperatures is
shown in Figure <ref>. After ∼ 100 days, at the peak of the outflow channel event,
the atmospheric temperatures in the lower atmosphere (0-5 km) almost reach 280 K, +90 Kelvins above the regular temperature
(peak above the lake) as calculated in the control simulation;
the atmospheric temperatures in the higher parts of the atmosphere typically reach up
to 230 Kelvins (at 10 km) and to 170 Kelvins (at 25 km), which are respectively +50 K and +25 K above the temperatures prescribed by
the control simulation.
§.§.§ The mechanisms cooling the flow
After ∼ 500 days, which corresponds to the complete surface freezing of the outflow channel event water,
the evaporation E produced by the stable lake (see Figure <ref>)
suddenly drops (by almost 3 orders of magnitude). To first order, we have:
E ∝ Q_sat(T) ∝ e^-α/T,
with α=L_subM_H_2O/R and L_sub the latent heat of sublimation of water ice.
The evaporation rate E has thereby a strong dependence on temperature. This is why the drop
in temperature associated with the surface freezing of the Northern Plains lake is responsible for
the sudden decrease of evaporation visible in Figure <ref>
(also seen through the latent heat surface flux in Figure <ref>).
This drop in evaporation defines the end of the 'warm phase', which includes the decrease of the water vapor content,
the atmospheric temperatures and the precipitation events (see Figure <ref>).
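The size of the drop follows directly from this exponential dependence: cooling the evaporating surface from the freezing point to the ∼210 K mean surface temperature of the frozen lake suppresses E by exp[α(1/273−1/210)], i.e. by close to three orders of magnitude (a quick check, with α built from the constants given earlier):

import numpy as np

L_SUB, M_H2O, R = 2.26e6 + 3.34e5, 1.8e-2, 8.314  # J kg^-1, kg mol^-1, SI
alpha = L_SUB * M_H2O / R                          # ~ 5.6e3 K

drop = np.exp(alpha * (1.0 / 273.15 - 1.0 / 210.0))
print(f"E(210 K) / E(273 K) ~ {drop:.0e}")         # ~ 2e-3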
There are several physical processes that are responsible for the cooling of the flow, leading to its solidification as ice.
Figure <ref> shows the relative importance of the different thermal heat losses by the Northern Plains lake,
from the beginning of the event to one martian year later.
For the first 500 days, the main cooling surface fluxes are the latent heat loss (420 W m^-2, 43.3 %),
the sensible heat loss (190 W m^-2, 19.6 %),
the radiative thermal emission loss (280 W m^-2, 28.8 %)
and the ground conduction loss (8 W m^-2, 0.8 %).
Some other surface fluxes related to the CO_2 ice sublimation by the warm flow (13 W m^-2, 1.3 %) and
the cooling of the lake by the melting of the falling snow (60 W m^-2, 6.2 %) also contribute to the cooling of the outflow waters.
In total, the average cooling flux of the outflow waters for the warm phase (first 500 days) is ∼ 970 W m^-2.
For large outflow channel formation events like the one described in this section,
the sublimation of the seasonal carbon dioxide ice deposit represents a small fraction of the heat loss. Nonetheless,
smaller outflow channel events (5× 10^3 km^3 for example <cit.>) flowing on the Northern Plains slopes may be deeply affected by the energy gap
required to sublimate the CO_2 ice seasonal deposit. For a 0.2 bar atmosphere, the control simulations show, for example, that the CO_2 ice
seasonal deposit reaches a yearly average of ∼ 300 kg m^-2 from the North Pole down to 30^∘N latitudes.
Two radiative processes may counteract the cooling of the flow: 1) the absorption of solar radiation and 2) the greenhouse effects
(of the atmosphere and of the clouds).
1. We chose in this scenario to start the outflow channel event at Ls = 5^∘ in order to maximize the role of solar absorption.
The peak of the event (between ∼0-300 days, Ls ∼ 5-165^∘) was therefore chosen to overlap
with the peak of insolation in the Northern hemisphere, which is a maximum of ∼ 170 days after the event (Ls = 90^∘).
There are three factors that need to be taken into account in the solar absorption processes: absorption by water vapor,
albedo changes and clouds. For this reference simulation, compared to the control simulation, these three effects more or less compensate at the location
of the flow. The increase of the solar absorption due to the low albedo of liquid water (0.07 compared to 0.2 for the bare ground and
0.5 for the remaining CO_2 ice seasonal cover) and due to the absorption by water vapor are more or less balanced by the
reflection of the cloud cover, which can reach on average a coverage of 80 % during the first 500 days above the lake (Figure <ref>).
Most of these water clouds are located at low altitude (Figure <ref>).
During the warm phase, the lake absorbs a solar flux of ∼ 67 W m^-2 (∼ 16 W m^-2 less than the control run,
see Figure <ref>) and the atmosphere
(essentially the troposphere) ∼ 20 W m^-2 (∼ 12 W m^-2 more than the control run).
This corresponds to an average absorption of 65 % of the available incoming solar flux (∼ 135.6 W m^-2 for the first 500 days).
2. The downward thermal infrared emission from the atmosphere and the clouds is
the dominant warming flux (see Figure <ref>). On average, during the warm phase,
this greenhouse effect brings ∼ 210 W m^-2 to the lake (+ 150 W m^-2 more than the control run).
The main source of thermal infrared emission surface heating comes from the gaseous atmosphere itself,
which can reach up to ∼ 280 K (above the lake) for the first 5 km, at the peak of the event.
In total, both solar and infrared heating counterbalance only ∼ 30% of the cooling of the flow, and are thus
unable to sustain the perturbation generated by the outflow channel.
We note here that the radiative effect of H_2O clouds during the warm phase
is approximately neutral or at least very limited (only +7 W m^-2) above the Northern Plains lake,
with +23 W m^-2 of greenhouse warming and -17 W m^-2 due to the reflection of the sunlight.
§.§.§ The mechanisms cooling the atmosphere
One of the main results of our work is that outflow channel events
are not able to sustain warm conditions. We present here the two processes
that act efficiently together to cool down the atmosphere after outflow events.
1. In the time following catastrophic outflow channel events like the one described in this section,
the atmosphere above the flow warms very quickly. In our reference simulation, 10 days after
the beginning of the event, the temperature in the lower atmosphere (0-5 km) above the lake
increases by almost 90 Kelvins. During the first 500 days after the event, because of this significant warming,
the flow and the atmosphere just above it contribute
to an extra thermal infrared emission loss to space of 38 W m^-2 compared to the control simulation.
Yet the amount of energy lost by the lake and the atmosphere above represents only ∼ 11% of the extra total cooling to space.
Figure <ref> shows that, as the atmosphere gets warmer in the regions of the flow, high altitude winds around
∼ 15 km advect the heat to the neighbouring areas (in particular into the Northern Plains).
This increases the surface of the emissions and therefore strengthens the cooling.
Figure <ref> (left) shows the regions of the planet responsible
for the extra thermal emission to space. Globally, during the warm phase (the first 500 days),
the planet loses ∼ 10 W m^-2. One third of the emissions come from regions at latitudes >50^∘N.
During the warm phase, the most important mechanism of cooling is the thermal infrared emission,
enhanced by the advection processes.
2. Interestingly, another important cooling mechanism is the decrease of solar absorption due to the increase of surface albedo that
follows the outflow channel event. In fact, the precipitation caused by the event,
essentially in the form of snowfall (see Figure <ref>), leaves ice (see Figures <ref> and <ref>)
over an area of ∼ 30 × 10^6 km^2 that reflects an important part of the sunlight (∼ 21.5 W m^-2).
In total, during the warm phase and compared to the control simulation,
the decrease of solar absorption contributes to a global equivalent extra cooling of ∼ 4.5 W m^-2, which represents
half of the infrared emission loss to space.
The large amount of water vapor released after the outflow channel event condenses very quickly in the atmosphere, forming clouds that are
mostly located in the area of the flow and of the resulting lake (see Figure <ref>). In total, for the reference simulation,
the clouds have a slight positive effect of +1.3 W m^-2 (+ 2.3 W m^-2 of greenhouse effect and
- 1.0 W m^-2 of solar reflection).
§.§.§ Consequences on the water cycle and the precipitation
The maximum total amount of water vapor that is carried by the atmosphere during the event (GEL of 1.2 mm at the peak)
remains limited by comparison to the cumulative total amount of precipitable water generated (GEL of 230 mm).
It represents only ∼ 0.5% of the cumulative evaporated water vapor
produced by the entire outflow channel event during the first 500 days.
Figure <ref> (left) shows the global mean atmospheric
water vapor content (column mass in kg m^-2, and also GEL in mm). It peaks at ∼ 100 days and
considerably decreases from ∼ 200 days to ∼500 days.
The fact that the atmosphere is not able to accumulate more than ∼ 1.2 kg m^-2 (globally) and ∼ 50 kg m^-2 (locally,
just above the warm lake) has one main consequence: the atmosphere does not manage to carry enough water vapor far enough from the lake to create
precipitation in regions of interest (West Echus Chasma Plateau in particular).
The typical lifetime of the atmospheric water vapor is in fact ∼ 0.5 days.
Rainfall, which represents a very small fraction (∼ 10 %) of the precipitation (Figure <ref>), occurs only above the
Northern Plains lake, because this is the only location on Mars where atmospheric temperatures (up to 10 km) exceed the triple-point temperature.
Outside the lake, the only mechanism of precipitation is snowfall. Approximately 50 % of the snow falls back directly on the flow/lake.
The rest of the precipitation (the remaining 50 %) is essentially confined to the northern regions.
Figure <ref> shows the map of the deposited ice field (generated by precipitation) after a simulation of one martian year.
The fraction of this ice that is melted after an outflow event is very limited (see Figure <ref>),
because 1) most of the thermal perturbation has been dissipated by
advection/cooling to space processes after ∼ 200 days,
2) the remaining water vapour abundance after these 200 days is too low to trigger a significant
greenhouse warming (as found by )
and 3) the ice field itself raises the albedo of the surface and thus acts as a very efficient climatic cooling agent.
In summary, the short-term climatic impact of outflow channel formation events seems very limited.
For a 0.2 bar atmosphere, an outflow channel event of 10^6 km^3/300 K leads to the formation of a lake (located in the
Northern Plains main topographic depression) that triggers a warm period that lasts for
∼ 500 days, which coincides approximately with the complete surface freezing of the water in the lake.
Such events leave globally ∼ 6.5 × 10^3 km^3 of water ice/snow (0.65% of the initial outflow reservoir)
and are able to melt ∼ 80 km^3 (0.008% of the initial reservoir; 1% of the deposited precipitation).
Because the outflow events do not manage to warm the atmosphere enough, water vapour stays confined to the regions neighbouring the lake
(essentially in the Northern Plains) and therefore precipitation (mostly snowfall) and melting only occur in the lowland regions.
The long-term climatic impact of the ice-covered lake is discussed in the next section.
§.§ The Cold Phase
After 500 martian days, the surface of the Northern Plains lake is completely covered by ice.
Temperatures, water vapor content and precipitation all decrease. Because the area of high albedo ice deposits is larger
than in the control simulations, the mean surface temperatures drop even lower than before the outflow event
(-2 K for the global annual surface temperatures of the 0.2 bar reference simulation, and compared to the control simulation).
Using the extrapolation scheme presented in section <ref>,
we estimated that the released water was completely frozen after ∼ 4 × 10^3 martian years.
This corresponds to the full solidification of the water to ice
at the location of the main Northern Plains topographic depression (which is the deepest point of the lake). After ∼ 500 years,
more than 70 % of the lake (in area) is frozen, from the surface to the top of the regolith. We note that the ground thermal flux
<cit.> during the Late Hesperian era was one order of magnitude too low (at best)
to be able to increase the lifetime of the ∼ 500 m deep lake.
In our simulations, ∼ 10 years after the beginning of the lake-forming event, the mean ice thickness over the lake is ∼ 25 meters.
The annual mean conduction heat flux for this ice thickness is ∼ 10 W m^-2.
The annual mean solar/IR fluxes absorbed by the ice are ∼ 53/57 W m^-2 by comparison,
110 W m^-2 in total. Under these conditions, the thermal conduction flux represents less than 10 % of the total heat flux
received by the surface at the location of the lake.
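This flux estimate can be checked with Fourier's law, F = k ΔT / d. The sketch below is a back-of-envelope check only; the ice thermal conductivity and the mean surface temperature of the ice are assumed values for illustration, not model outputs.

# Back-of-envelope check of the annual mean conductive heat flux through
# the lake's ice cover, using Fourier's law F = k * dT / d.
# Assumed values (illustrative, not GCM inputs):
#   k_ice  : thermal conductivity of water ice (~2.5 W m^-1 K^-1)
#   T_base : temperature at the ice/liquid-water interface (273.15 K)
#   T_surf : assumed annual mean surface temperature of the ice (~173 K)
k_ice = 2.5                       # W m^-1 K^-1
d_ice = 25.0                      # m, mean ice thickness ~10 years after the event
T_base, T_surf = 273.15, 173.0    # K

flux = k_ice * (T_base - T_surf) / d_ice
print(f"conductive flux ~ {flux:.1f} W m^-2")   # ~10 W m^-2, as quoted above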
Moreover, because the temperature profile in (typically) the first 5 meters of the ice cover oscillates annually
from positive values (summer season) to negative values (winter season), the heat conducted
from the liquid water to the surface is mainly released during the winter seasons. Yet, the water cycle in this
cold phase is essentially controlled by the summer seasons, because sublimation rates are several orders of magnitude higher than during the winter seasons
(see section <ref> for discussion).
Thus, after a few years (typically around 10), the climatic effect of the lake becomes, to a first order, the same
as simply placing a comparable-sized body of ice in the Northern Plains. During these 10 years,
ice transportation/water vapor cycle/precipitation is very limited by comparison to the warm phase and does not play
any significant role in the ice field position.
Within the lifetime of the liquid water lake, the evolution of the ice field position is completely controlled by the remaining
∼ 4 × 10^3 years (minus these first ∼ 10 years) of the water cycle forced by the sublimation of the large body of non-stable ice.
Each year, during Northern summer, ∼ 20 mm year^-1 of lake ice sublimes to condense elsewhere and
approximately 30 % of it is transported away from the lake. Progressively, the water vapor produced during the summers
migrates southward and - through the mechanism of adiabatic cooling - condenses on the regions of high altitudes and low latitudes.
The lifetime of the frozen lake predicted by our simulations is ∼ 7 × 10^4 martian years.
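This lifetime is consistent with a simple order-of-magnitude estimate; the sketch below assumes a ∼500 m mean lake depth (as discussed above) and treats the ∼30 % exported fraction as a permanent net loss, both simplifications of the full model behaviour.

# Order-of-magnitude consistency check for the frozen lake lifetime.
depth = 500.0                 # m, assumed mean depth of the frozen lake
sublimation = 0.020           # m per martian year, sublimed each northern summer
exported_fraction = 0.30      # fraction transported away and not redeposited

net_loss = sublimation * exported_fraction          # m / martian year
lifetime = depth / net_loss                         # martian years
print(f"lifetime ~ {lifetime:.1e} martian years")   # ~8e4, same order as 7e4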
The evolution of the ice field through the phases that follow the outflow channel reference event are shown in Figure <ref>.
After ∼ 10^5 martian years, the outflow channel water is located more or less exclusively in the highland regions.
During this cold phase (∼ 10^5 martian years), some ice appears stable in the region of West Echus Chasma Plateau,
due to the uninterrupted supply of ice coming from the northern parts of the planet.
This snow deposit is produced by the adiabatic cooling of the ascending air masses
that provoke the condensation of the water vapor initially generated by the sublimation of the Northern Plains ice field.
Some water ice is also transported to the drainage regions of Alba Patera, Hecates Tholus and Ceraunius Tholus but
this might not be a critical factor since our model already predicts that ice deposits should be stable
in these regions (Figure 2 – this work) and therefore available for either seasonal snowmelt or ground melting.
In spite of this, because the global surface albedo is increased during that period, global temperatures are much lower than before the outflow event,
making snowmelt difficult.
We note here that we did not take into account the flow of the ice on the Northern Plains slopes. This could significantly increase the lifetime of the
lake located in the main topographical depression and thus the lifespan of the snow deposited in non-stable locations (in particular in West Echus
Chasma Plateau area).
However, at these temperatures and over these timescales, ice is unlikely to flow significantly <cit.>.
In addition, we did not take into account the formation
of a possible lag deposit <cit.> which could have decreased the sublimation rate of the ice.
Both of these factors, however, appear to have minimal effects on the general processes.
§.§.§ Influence of obliquity
Orbital spin-axis obliquity is a very important factor in the duration and the characteristics of the cold phase, because
it controls the latitudinal distribution of the solar flux and thus the sublimation processes.
We performed two simulations of the reference outflow channel event, at obliquities of 25 ^∘ and 65 ^∘,
to complement the 45 ^∘ obliquity case presented initially.
In the low obliquity simulation, the sublimated ice migrates slowly toward the coldest points of the planet:
the South pole and the North pole (in agreement with <cit.>, Figure 4).
The water present in the northern part of the lake is stable in the long term.
In this situation, ice never accumulates in the region of West Echus Chasma.
In the high obliquity simulation, the water cycle is much more intense
because the peak of insolation at high latitudes is higher.
Approximately 55 mm of the sublimated northern lake ice migrates southward each year.
The lifetime of the lake is thereby lowered to ∼ 9 × 10^3 martian years.
For the same reasons as that in the reference simulation, a thick ice deposit is present in the region of West Echus Chasma Plateau.
Yet, its duration, ∼ 10^4 years, is almost 10 times less than in the reference simulation,
more or less coincident with the lifetime of its supply (the frozen lake).
As a result, the lifetime of the ice located in West Echus Chasma area seems to be favored at obliquity ∼ 45 ^∘.
§ THE EFFECT OF SURFACE PRESSURE
For many reasons (see discussion in section <ref>), the atmospheric pressure during the Late Hesperian epoch
is not well constrained. We explore in this section the role of surface pressure on the
climatic impact of outflow channels.
For this, we performed five different simulations of the same outflow channel event
(10^6 km^3, 300 K water released at 1 km^3 s^-1 in Echus Chasma)
for five different surface pressures (40 mbar, 80 mbar, 0.2 bar (the reference simulation), 0.5 bar and 1 bar).
§.§ Warm Phase
Atmospheric pressure is one of the key factors that control the efficiency at which the warming of the atmosphere
and the transport of water occur during the warm phase, as pointed out by <cit.>.
1. The evaporation rate: Combining equations <ref> and <ref> for low amounts of water vapor,
the evaporation rate E can be written:
E = C_d V_1 (P_ref M_CO_2 / (R T_1)) e^{(L_v M_H_2O / R)(1/T_ref - 1/T_surf)}.
Hence, the evaporation rate does not (directly) depend on the surface pressure and is
mostly controlled by the temperature T_surf of the flow/lake.
To first order (and this is confirmed by our simulations), the wind velocity V_1 and the atmospheric
temperatures T_1 do not differ sufficiently from one atmospheric pressure to
another to play a major role on the rate of evaporation.
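For illustration, the formula can be evaluated directly; in the sketch below all numerical values (drag coefficient, wind speed, air temperature, latent heat, and the reference saturation point) are assumptions chosen for the example, not the values used in the GCM. The absence of a surface-pressure argument makes the point above explicit.

# Direct evaluation of the evaporation-rate formula above.
from math import exp

R = 8.314            # J mol^-1 K^-1
M_CO2 = 44.0e-3      # kg mol^-1
M_H2O = 18.0e-3      # kg mol^-1
L_v = 2.26e6         # J kg^-1, latent heat of vaporization (approximate)

def evaporation_rate(T_surf, T_1=250.0, V_1=10.0, C_d=2.5e-3,
                     P_ref=610.0, T_ref=273.16):
    """Evaporation mass flux E in kg m^-2 s^-1, as written in the text.
    (P_ref, T_ref) is taken here at the triple point; all defaults are
    illustrative assumptions."""
    rho_like = P_ref * M_CO2 / (R * T_1)
    return C_d * V_1 * rho_like * exp(L_v * M_H2O / R * (1.0 / T_ref - 1.0 / T_surf))

# E depends on the lake temperature but not (directly) on surface pressure:
for T in (280.0, 300.0, 320.0):
    print(T, evaporation_rate(T))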
2. The warming rate: The volumetric heat capacity of the atmosphere increases linearly with the volumetric mass density and thus the atmospheric pressure.
For example, it takes approximately 1.0/0.040 = 25 × more energy to warm a 1 bar atmosphere than a 40 mbar one.
When the outflow channel event occurs, the rate of warming of the atmosphere (in K/s) is
roughly proportional to the evaporation rate (which is the main source of
heating) and inversely proportional to the volumetric heat capacity of the atmosphere.
In our simulations, it takes ∼ 10/40 martian days - respectively for the 40 mbar/1 bar case -
for the atmospheric temperatures at 10 km to reach a plateau at
250 K/220 K, which corresponds to a +80 K/+30 K temperature increase (for initial temperatures equal to 170 K/190 K).
This corresponds approximately to a factor of 10 in heating efficiency for these two endmember situations.
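The predicted factor of 25 follows directly from the heat capacity of the atmospheric column, E = (P/g) c_p ΔT per unit area; the sketch below uses an approximate CO_2 heat capacity as an assumption, although the ratio is independent of its exact value.

# Why thick atmospheres warm more slowly: column heat capacity scales
# linearly with surface pressure. c_p for CO2 (~735 J kg^-1 K^-1 near
# 200 K) and martian gravity are assumed values for illustration.
g_mars = 3.71        # m s^-2
c_p = 735.0          # J kg^-1 K^-1 (approximate)

def column_heating_energy(P_surf, dT):
    """Energy (J m^-2) to warm a column of surface pressure P_surf by dT."""
    return (P_surf / g_mars) * c_p * dT

E_thin = column_heating_energy(4000.0, 1.0)      # 40 mbar
E_thick = column_heating_energy(100000.0, 1.0)   # 1 bar
print(E_thick / E_thin)                          # 25.0, the factor quoted above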
The difference between the factor of 25 predicted and the factor of 10 obtained in our simulations is mostly due to two processes:
advection and thermal emission to space.
The same two processes limit the growth of atmospheric temperatures.
First, the advection tends to dilute the heat perturbation horizontally.
In the 1 bar case, this is the dominant process for example.
Second, the thermal emission to space acts as a very efficient negative feedback.
This is, in fact, the first limiting process in the 40 mbar case.
The capability of an atmosphere to maintain high temperatures from the surface (where evaporation occurs) to the altitude where advection occurs
is in fact the most important factor in the ability to transport water vapor globally and produce precipitation far from the region of evaporation.
The warmer the atmospheric column above the lake is, the more water vapor will be possibly lifted and then transported globally
by the high altitude winds.
Thin atmospheres (such as the 40 mbar) warm efficiently above the region of the flow,
allowing the formation of a persistent water vapor plume that can transport (through advection) water vapor far from the flow/lake.
In contrast, thick atmospheres (such as the 1 bar case) ironically do not manage to transport water efficiently because of the advection itself.
The advection prevents the atmospheric temperatures above the lake from building up and thus the water vapor from accumulating.
This limits the transport of water vapor and favors local precipitation.
This is summarized by Figure <ref> that shows
the radial mean distribution (centered above the Northern Plains lake) of precipitation for
the entire warm phase (first 500 days). Our experiments show that thin atmospheres are able to transport much more water
and for much longer distances than thick ones.
We compare in Figure <ref> the spatial distribution of the precipitation (only snowfall, because rainfall
occurs only above the lake) for the different atmospheric pressures. Whatever the surface pressure considered, the precipitation
stays confined to the Northern Plains.
Another important aspect concerns the role of atmospheric pressure on the ability to melt the ice initially present / transported by the
outflow event itself.
Thin atmospheres, while able to reach temperatures in excess of 273 K above the flow, are not able to raise global temperatures significantly.
First, the relaxation timescale of the temperature field is very low in such atmospheres because of the weak infrared absorption of
the atmosphere.
Second, outflow channel events under thin atmospheres generate a very large ice cover that reflects sunlight efficiently.
As a result, an outflow channel of 10^6 km^3 that occurs under a 40 mbar atmosphere,
leaves globally ∼ 1.5 × 10^4 km^3 of water ice/snow (1.5%) and is able to melt only ∼ 50 km^3 (0.005%).
Thick atmospheres are initially warmer than thin atmospheres (+ 30 K between the 1 bar and 40 mbar atmospheres).
They also have a much more efficient infrared absorption and thus
are better candidates to melt the deposited ice field.
For example, an outflow channel of 10^6 km^3 that occurs under a 1 bar atmosphere,
leaves globally ∼ 4 × 10^3 km^3 of water ice/snow (0.4%) and is able to melt ∼ 110 km^3 (0.011%).
Nonetheless, this melting occurs only in the Northern Plains, in the close vicinity of the lake, because such thick atmospheres do not transport
much ice anywhere on the planet in any case. In addition, ice albedo feedback (which is yet lower for thicker atmospheres)
and the high volumetric heat capacity (lower heat perturbation)
of such atmospheres contribute to lower the possibility of reaching melting temperatures.
Whatever the value of the surface pressure, the ability of the atmosphere to produce liquid water from melting is very limited.
§.§ Cold Phase
The water cycle during the cold phase is, in contrast, more intense for thick atmospheres than for thin ones. The sublimation rates are higher
because global temperatures (and also summer temperatures) are higher.
At the end of the warm phase, the mean global temperatures for the 40 mbar/1 bar simulations are
respectively ∼ 193 K (3.5 K lower than the control simulation) and ∼ 226 K (1 K lower than the control simulation). This
difference is due to the increased ice cover following the outflow event.
In the 1 bar simulation (thick case), the lifetime of the frozen lake is ∼ 5 × 10^4 martian years, slightly lower than in the
reference simulation. The climatic response during the cold phase behaves more or less in the same manner as in the reference 0.2 bar simulation.
In the 40 mbar simulation (thin case) however, because the water cycle is too weak (sublimation rate of the lake of 2 mm/year;
lifetime of the frozen lake ∼ 2 × 10^5 years), the southward flux of the atmospheric water ice
is not high enough to allow the presence of stable ice in the area of the West Echus Chasma Plateau.
More generally, atmospheres with pressure higher than 80 mbar seem necessary to produce ice deposits in the region of West Echus Chasma Plateau.
§ EXTREME PARAMETERIZATIONS
In this section, we study several scenarios that may deeply affect the climatic impact of outflow events:
1. the intensity of the event and 2. the effect of clouds and precipitation.
§.§ Intensity of the event
Because outflow channel events such as the one presented in Section <ref> fail to produce rainfall/transient warming,
it is tempting to explore even more extreme parameterizations of the outflow events.
§.§.§ Temperature of the flow
The temperature of the groundwater released during outflow events is not well constrained (see section <ref>).
Hence, we used the temperature of the flow as a tuning parameter to explore the sensitivity of our results to the intensity of the outflow event.
We performed three simulations of the same outflow event (10^6 km^3, released in Echus Chasma) for three different groundwater temperatures:
280 K, 300 K (reference simulation) and 320 K.
As expected, the warmer the water, the more intense the climatic effect becomes. For example, at the peak of the warm phase,
the 320 K event is able to carry approximately 8 × more water vapor than in the reference simulation because atmospheric warming processes
are amplified by the temperature (evaporation/condensation cycle, IR emission of the flow, ...).
Consequently, 25 % of the precipitation following the 320 K event is
rainfall (respectively 10 %/ 0 % for the reference/280 K simulations). Yet, rainfall still occurs exclusively above the lake (70 %) or
in the northern lowlands of Mars (30 %). Snow precipitation also remains confined to the Northern Plains
down to 15 ^∘N (25/40 ^∘N for the 300 K/280 K simulations).
The amount of water ice transported (Figure <ref>) and melted (Figure <ref>)
after outflow channel events with 280 K/300 K/320 K water shows that
in all cases, the mechanism of advection/cooling to space is very efficient, and as a result, the duration of the warm phase is approximately the same
(∼ 500 days) between the reference and the 320 K simulations.
We note that, at the end of the warm phase, because the amount of ice transported (and the area of the deposit with it)
increases with the initial temperature of the flow, the average surface albedo rises and the mean temperatures decrease:
warmer flows lead to colder states.
§.§.§ Magnitude of the event: from small outflows to oceans.
Recent work <cit.> has suggested that outflow channels were preferentially carved by multiple events
of reduced sizes (∼ 10^3 km^3) rather than by large (> 10^5 km^3) single outflows.
We performed simulations for different volumes of water at 300 K and released in Echus Chasma at a rate of 1 km^3 s^-1,
from 10^3 km^3 (consistent with the most recent estimations of
outflow volumes) to 10^7 km^3 (ocean case). Figure <ref> shows the final position of the lake as a function of
the initial volume of water. The 10^6 km^3 case is the reference simulation.
Our results show that the large outflows, during the warm phase, transport much more water than the small ones (cumulative).
Small outflows (typically ∼ 10^3-10^4 km^3) have a small wetted area (typically 0.15-0.41 × 10^6 km^2) and
a small initial heat reservoir, so that they cannot warm the atmospheric column above the flow/lake sufficiently to
transport water vapor into the neighbouring regions. Small outflow events inject more or less the same amount of water vapor (in proportion)
as large ones, but they are not able to transport it far from the flow/lake.
For example, 2 × 10^2 events of 5 × 10^3 km^3 transport 2 orders of magnitude less ice outside the flow/lake than
a large 10^6 km^3 one (reference simulation). Moreover, large outflows are able to generate precipitation up to ∼ 5000 km from
the edge of the flow/lake whereas small ones cannot produce any precipitation at a distance
greater than ∼ 400 km (typically the size of 2 GCM grid cells).
We did not explore in detail the effect of the discharge rate, which has a net impact on the size and duration of the wetted area
(and thus on the evaporation and the albedo), but also on the intensity of the event.
Nonetheless, the climatic response to lower discharge rate events (< 10^9 m^3 s^-1) was found to be weaker,
because in such cases the temperatures and the amount of water vapor struggle to build up above the flow/lake.
Because large outflows seem to be much better candidates for generating precipitation globally,
we examined the extreme case of a catastrophic outflow event
of 10^7 km^3 released simultaneously by all of the circum-Chryse outflow channels (Kasei, Ares, Tiu, Simud Valles, etc.).
This possibility, sometimes called the MEGAOUTFLO (Mars Episodic Glacial Atmospheric Oceanic Upwelling by Thermotectonic Flood Outburst)
hypothesis <cit.>, speculates that such events could warm Mars during periods of 10^4-10^5 years through a transient greenhouse effect
provoked in part by the injection of large amounts of water vapor.
Our experiments show that such events cannot sustain long-term greenhouse effects,
whatever the size and the temperatures considered for the northern lake/sea/ocean.
After 3.5 martian years, for the outflow event described above, the surface of the lake/sea/ocean becomes totally frozen.
The thermal infrared emission to space (enhanced by the heat horizontal advection
and by the water vapor advection that release latent heat because of adiabatic cooling;
see Figure <ref> for the detailed mechanism) acts very efficiently to cool the planet.
The ice deposited on the Northern Plains slopes (Figure <ref>) also
enhances the cooling through a depletion of surface solar absorption.
As a result, in such a scenario, rainfall/snowmelt still only occurs in the lowest northern lowlands of the planet (see Figure <ref>),
far from the regions of interest.
In summary, the most intense outflow channel events possible are not able to sustain a global greenhouse warming.
Such events only manage to warm up the atmosphere regionally, in the Northern Plains, and only for a few years at best.
Consequently, rainfall (and snowmelt) occur only in the neighbourhood regions of the final stable lake. After complete
surface freezing of the lake, the climate becomes much colder than initially (due to the increase of the surface albedo),
making the snowmelt even more difficult.
We note that we did not take into account the modification of the topography by the presence of a lake/sea/ocean,
which might be a concern for very high volumes of water (≥ 10^7 km^3).
It could significantly reduce the role of adiabatic cooling and thus favor the transport/deposition of water further south.
§ DISCUSSION
§.§ Role of the atmospheric composition.
In this analysis, we made the assumption that the Late Hesperian martian atmosphere was made of 100% CO_2 (and
some water vapor). Outflow channel events under a CO_2 dominated atmosphere
seem not to be able to provoke long-term warming or precipitation at the global scale.
Outflow channel formation events are very likely related to intense volcanic episodes during martian history <cit.>. During these periods,
it is believed that volcanic gases like SO_2 may have been massively released [see section 1. of <cit.> for more details].
We performed a simulation of an outflow channel event under the same conditions as in section <ref>,
but this time with 1 % of SO_2. Figures <ref> and <ref> show
the corresponding amount of water ice transported/melted after the event.
Small amounts of SO_2 (2 mbar here) are sufficient to raise the global atmospheric
temperatures by several tens of Kelvins and thus to favor the transport
of water vapor/water ice globally and create precipitation far from the Northern Plains stable lake.
However, using the same GCM, <cit.> (and earlier, <cit.>) have shown that massive volcanic SO_2 outgassing cannot lead to a global
and substantial warming, because sulfur aerosols that would form at the same time have a very strong cooling effect,
even in small amounts.
We also believe that, under more realistic parameterizations that would take into account sulfur aerosols (e.g. <cit.>),
the outflow channel climatic impact would be also very limited.
§.§ The role of clouds and precipitation.
The radiative effect of clouds is one of the main sources of uncertainty in GCMs and thus also on the consistency of our results.
In particular, it has been suggested <cit.> that high altitude ('cirrus-like')
water ice clouds may trigger warm climates on Mars even under a faint young sun.
This scenario requires four assumptions:
1) Water ice particles that have sizes > 10 microns;
2) that the rate of precipitation is very low (in order to extend the lifetime of the clouds);
3) When present, clouds need to completely cover a grid cell (no partial cloud cover);
4) Lastly, it also requires an initial 'warm' state, for example an outflow channel event.
To explore in a basic manner the role of clouds and precipitation on the climatic impact of outflow channels, we performed a simulation of the reference
outflow channel event in which we eliminated the precipitation resulting from coalescence (l_0=∞).
For this case, the vertical motion of the ice particles
is governed only by gravitational sedimentation. Figure <ref> shows that the total cloud cover is near 100% over all
the planet during the first year following the event, because of the intense evaporation coupled with the increased lifetime of clouds.
We found that neglecting coalescence and the subsequent precipitation led
to ice deposits that extend over a much larger area than in the reference case
(Figure <ref>), because the lifetime of ice particles increases substantially.
In such a situation, the global cloud cover (during the year following the event)
has a net positive radiative impact on the global energy balance of + 12 W m^-2
(+ 21.3 W m^-2 of IR warming; - 9.2 W m^-2 of solar absorption).
This is ∼+ 11 W m^-2 higher than in the reference simulation.
However, because the ice field produced by the event extends to a much larger area, the global albedo increases and contributes
approximately 6 W m^-2 of cooling.
Moreover, because of advection processes, this also increases the horizontal extent of the heat perturbation and thereby the global
infrared emission to space. Under clear sky conditions, this would lead to an extra cooling of ∼ 5 W m^-2 compared to the reference simulation.
As a consequence, the total rate of cooling is more or less the same (∼ 15 W m^-2) as that in the reference simulation (l_0=0.001).
The duration of the warm phase is also more or less the same as in the reference simulation (∼ 500 days).
We also note that the seasonal melting of the deposited ice (see Figure <ref>) would be very limited
in such scenarios, because of the increased solar reflection by the clouds.
In addition, because the ice field produced by the event extends over a large region (Figure <ref>),
the planet becomes much colder one year after the event than initially.
Nonetheless, we highly encourage further studies
to explore in more detail the possibility of warming early Mars through water ice clouds (as recently done by <cit.>).
§.§ Conclusions
In this analysis, we explored the climatic impact of a wide range of outflow channel events under many possible conditions.
We find that even considering outflow events with intensity (in volumes and temperatures of water released)
that exceed by far the most recent estimates, the short term climatic response is still very limited.
The duration of the 'warm' phase that follows the outflow events is completely
controlled by the total depth and temperature of the lake that is formed and
is, in practice, no more than a few years for the most extreme cases (10^7 km^3 of water at 300 K, i.e., the ocean case).
In other words, outflow events fail to trigger greenhouse-sustained warm episodes.
Moreover, the precipitation (almost exclusively snowfall) produced by the events during their warm phase
is limited and confined to the Northern Plains, in the area neighbouring the water outflow.
These results are robust over a wide range of atmospheric pressures and external conditions (e.g. obliquity and season).
We also find that the intensity of outflow channel event effects can be significantly influenced by
the atmospheric pressure which is not well constrained for the Hesperian era.
Thin atmospheres (P < 80 mbar), because of their low volumetric heat capacity, can be warmed efficiently.
This can trigger the formation of a convective plume,
a very efficient mechanism to transport water vapor and ice to the global scale.
Thick atmospheres (P > 0.5 bar) have difficulty producing precipitation far from the outflow water locations,
but they are better suited to generating snowmelt.
Nonetheless, outflow channel formation events are unable,
whatever the atmospheric pressure, to produce rainfall or significant snowmelt at latitudes below 40^∘N.
During the 'cold phase' that follows the solidification of the outflow water to ice,
the body of water ice emplaced in the Northern Plains makes a major contribution to the water cycle.
The ice is sublimated seasonally and transported progressively
southward toward the 'Icy Highlands' regions by the process of adiabatic cooling.
We find that under favorable conditions (obliquity ∼ 45^∘, atmospheric pressure ⩾ 80 mbar),
ice deposits can be stabilized in the West Echus Chasma Plateau area.
For an initial 10^6 km^3 body of water (0.2 bar atmospheric pressure, 45^∘ obliquity),
they can persist for 10^5 martian years.
However, seasonal melting related to solar forcing seems difficult because
1) the West Echus Chasma Plateau is not ideally located, and 2) the presence of (high albedo) snow at the surface has a significant cooling effect.
The global temperatures after outflow events can thus easily be lowered by a few kelvins, making solar melting even more difficult.
Therefore, in this scenario, localized warming such as geothermal activity or meteoritic impacts
would be required to explain the formation of valley networks dated
to the Late Hesperian era and yet observed at this specific location.
|
http://arxiv.org/abs/1701.07756v1 | 20170126161440 | Dynamic time warping distance for message propagation classification in Twitter | [
"Siwar Jendoubi",
"Arnaud Martin",
"Ludovic Liétard",
"Boutheina Ben Yaghlane",
"Hend Ben Hadji"
] | cs.AI | [
"cs.AI",
"cs.SI",
"stat.ML"
] |
DTW distance for message propagation classification
in Twitter
S. Jendoubi et al.
LARODEC, ISG Tunis, Université de Tunis IRISA, Université de
Rennes I LARODEC, IHEC Carthage, Université de Carthage
Centre d'Etude et de Recherche des Télécommunications
Dynamic time warping distance for message propagation classification
in Twitter
Siwar Jendoubi1,2,4, Arnaud Martin2, Ludovic Liétard2,
Boutheina Ben Yaghlane3, Hend Ben Hadji4
December 30, 2023
====================================================================================================
Social message classification is a research domain that has attracted
the attention of many researchers in recent years. Indeed, the
social message is different from ordinary text because it has some
special characteristics like its shortness. The development of
new approaches for processing social messages is therefore essential
to make their classification more efficient. In this paper, we are mainly interested in the classification
of social messages based on their spreading on online social networks
(OSN). We propose a new distance metric based on the Dynamic Time Warping distance and use it with the
probabilistic and the evidential k Nearest Neighbors (k-NN) classifiers to classify propagation networks (PrNets) of messages.
The propagation network is a directed acyclic graph (DAG)
that is used to record the propagation traces of the message, i.e., the
traversed links and their types.
We tested the proposed metric with the chosen k-NN classifiers on real world propagation traces
collected from the Twitter social network and obtained good classification accuracies.
§ INTRODUCTION
During the past decade, many classification methods have appeared,
like k Nearest Neighbors (k-NN), Naive Bayes, Support Vector
Machines (SVM), etc. Those methods have been applied to several problems,
among them text classification, and they have proved their performance <cit.>. However, when working
with short text like online communications, chat messages, tweets,
etc., we face a new challenge. In fact, in a short text there
are not enough word occurrences or shared context for a good similarity
measure. Take Twitter for example: Twitter is a micro-blogging
service that allows its users to share messages of 140 characters
that are called tweets. As a consequence, using a traditional
text classification technique to classify tweets, like the “Bag-Of-Words”
method, fails to achieve good classification rates due to the message shortness.
Existing works on classification of short text integrate meta-information
from external sources like Wikipedia, World Knowledge
and MEDLINE <cit.>. They
tend to enrich the content of the message.
The purpose of this paper is to classify social messages without any
access to their content. Our work is motivated by two facts. First, it
is not always possible to have access to the content of the message,
but we may have access to its propagation traces; in such
a case, our approaches are useful. Second, text
processing techniques always need a pre-processing step in which
it is necessary to remove URLs, stop words, questions, special characters,
etc. When working with tweets, for example, the pre-processing
step very often leaves empty messages. Those empty messages
cannot be classified by a text based classification technique. Hence
the need for new classification approaches that consider the propagation of the message.
Our work is driven by the motivations above, and it achieves the following
contributions: 1) we adapt the Dynamic Time Warping (DTW) distance
<cit.> to measure the distance between
two propagation networks (PrNet for short)
[we call propagation network the network that conserves
the propagation traces of the message, i.e., the traversed links and nodes]; 2) we propose to incorporate the proposed distance
in the probabilistic k-NN and the evidential k-NN <cit.> to classify propagation
networks of social messages; then 3) we test the classifiers
on real world propagation traces collected from the Twitter social network.
This paper is organized as follows: Section 2 discusses some related works, Section 3 provides relevant background, Section 4 introduces the proposed PrNet-DTW distance, and Section 5 presents results from our experiments.
§ RELATED WORKS
§.§ Content based approaches
Methods that are used for text classification or
clustering always have some limitations with short text; in fact, in
short text there are not enough word occurrences. Traditional
methods are therefore not suitable for the classification of the social message,
which is characterized by its shortness. For example, the use of the
traditional “Bag-Of-Words” method to classify tweets may fail to achieve good classification rates. This limitation has attracted
the attention of many researchers, who have developed several approaches.
The authors in <cit.> classified tweets into “News”, “Events”,
“Opinions”, “Deals” and “Private Messages” using a set of
features, among them author information and features extracted from
the tweet. In <cit.> and <cit.>, the authors propose
approaches for short text clustering that use not only the content
of the text but also an additional set of items that is extracted
from an external source of information like Wikipedia and World Knowledge.
Also, <cit.> classify short and sparse text using a large
scale external data collected from Wikipedia and MEDLINE.
Social messages are also classified for sentiment analysis and opinion
mining purposes <cit.>. The task here is to identify the dominant
opinion about a product or a brand using text mining techniques. The
author of <cit.> used 3516 tweets to identify customers'
sentiment about some well known brands. In <cit.>, the authors used
text published on Twitter and Facebook to analyze the opinion about
three pizza chains. The reader can refer to <cit.> for
a recent survey.
Our work is different from all of the above in that we propose to
classify the social message without access to its content. In fact,
we predict the class of the message by interpreting its propagation
traces through the social network. We think that the proposed approaches
will be useful in the case where there is no access to the content
of the message or when text based methods are unable to classify the
message due to its shortness.
§.§ Propagation based approaches
We now present two methods
that were used to classify propagation networks and that were published
in <cit.>. The first method uses probability theory
and the second one incorporates the theory of belief functions. As
we said above, existing approaches that are used for
text classification and characterization always have some limitations
with short text. To overcome this limitation, we propose to classify
the propagation traces of the message instead of its content. As
an illustrative example, when you receive a letter from your bank,
it is likely to be about your bank account.
The PrNet classifiers work in two main steps: the first step is used
to learn the model parameters and the second step uses the learned
model to classify newly arriving messages (the propagation network of the
message). Both methods follow the same principle in the two steps. In
the parameter learning step, we need a set of propagation networks,
PrNetSet, that is used to estimate a probability distribution defined
on types of links for each level
[we call propagation level the number of links between the source of
the message and the target node].
In the belief PrNet classifier, we use the consonant transformation
algorithm, also called inverse pignistic transformation, <cit.>
that allows us to transform the probability distribution (output of
the probabilistic parameter learning step) to a BBA distribution while
preserving the least commitment principle <cit.>. Once the model's
parameters are learned, we can use the model to classify a new message (the propagation
network of the message). The reader can refer to <cit.>
for more details.
These classifiers need an intermediate step through a compact structure that assigns a probability distribution to each propagation level. This step leads to a loss of information that may be significant in the classification step. Another drawback is that these methods do not work with continuous types of links, so a discretization step is always needed in such a case. We think that the proposed PrNet-DTW classifiers will avoid these problems.
§ BACKGROUND
§.§ Theory of belief functions
The theory of upper and lower probabilities <cit.> is the first ancestor of the evidence
theory, also called Dempster-Shafer theory or theory of belief functions.
Then <cit.> introduced the mathematical theory of
evidence and defined the basic mathematical framework of the evidence
theory, often called the Shafer model. The main goal of the Dempster-Shafer
theory is to achieve more precise, reliable and coherent information.
Let Ω={ s_1,s_2,...,s_n} be the frame
of discernment. The basic belief assignment (BBA), m^Ω,
represents the agent belief on Ω. m^Ω(A)
is the mass value assigned to A⊆Ω, it must respect:
∑_A⊆Ωm^Ω(A)=1. In the case
where m^Ω(A)>0, A is called a focal element of m^Ω.
Combination rules are the main tools that can be used for information
fusion. In fact, in real world applications, we do not always have the same
kind of information to be combined; that is why the same combination
rule may perform well in some applications and give unsatisfactory
results in others. Among these combination rules, we find the Dempster's rule <cit.>, the conjunctive rule of combination (CRC) <cit.> and the disjunctive rule of combination (DRC) <cit.>.
§.§ k Nearest Neighbors
In this paper, we choose the k nearest neighbors classification technique because it is distance based. It will be used to classify propagation traces of social messages together with the proposed distance. In this section we present two k-NN based approaches which are the probabilistic k-NN and the evidential k-NN.
Probabilistic k nearest neighbors (k-NN)
is a well known supervised method that is generally used for classification.
It needs as input a set of training examples whose feature
values and classes are known, and, of course, the object to be classified.
Besides, we have to specify a distance measure that will be used
to quantify the matching between the new object x and every object in
the training set. First, the k-NN starts by computing the distance between
x and every object in the training set; then, it selects the k nearest neighbors, i.e., those that have the shortest distance to x. Finally, the object x is
classified according to the majority vote principle, i.e., the algorithm chooses the class that has the maximum occurrence count among the k nearest neighbors to be the class of x. The k-NN technique
is surveyed in <cit.>.
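This decision rule is straightforward to express with a pluggable distance function (later instantiated with PrNet-DTW); in the minimal sketch below, the training set is assumed to be a list of (object, label) pairs.

# A minimal sketch of the probabilistic k-NN described above.
from collections import Counter

def knn_classify(x, training_set, distance, k=5):
    """Majority-vote k-NN: returns the most frequent label among the
    k training objects closest to x under `distance`."""
    neighbors = sorted(training_set, key=lambda pair: distance(x, pair[0]))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]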
Evidential k Nearest Neighbors is an extension
of the probabilistic k-NN to the theory of belief functions <cit.>. The
probabilistic k-NN uses the distances between the object x to be
classified and the objects in the training set to sort the training examples;
then it chooses the k nearest neighbors to x. However, according
to <cit.>, the distance value between x and its nearest
neighbors may itself be significant. The evidential k-NN differs from the probabilistic one in the decision
rule. Let Ω={ s_1,s_2,...,s_n}, the set of
all possible classes, be our frame of discernment and d_j be
the distance between x and the j^th nearest neighbor. The
idea behind the evidential k-NN consists in representing each object
of the k neighbors by a BBA distribution defined by:
m({ s_i}) = α
m(Ω) = 1-α
m(A) = 0 ∀ A∈2^Ω∖{{ s_i},Ω}
such that 0<α<1. If d_j is big, α has to be
small. It is calculated as follows:
α = α_0Φ_i(d_j)
Φ_i(d_j) = e^-γ_id_j^β
where γ_i>0 and β∈{ 1,2,…}.
After estimating a BBA distribution for each nearest neighbor, the
decision about the class of x is made according to the following steps: first,
we combine all BBA distributions using a combination rule; second,
we apply the pignistic transformation <cit.> in order
to obtain a pignistic probability distribution; finally, we choose
the class that has the highest pignistic probability. In the next
section, we will introduce the dynamic time warping distance and its
extension to compute the similarity between propagation networks.
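As an illustration, a minimal sketch of this decision rule is given below; it assumes a single γ for all classes (the text allows a class-dependent γ_i), illustrative values for α_0 and β, and Dempster's rule for the combination.

# Sketch of the evidential k-NN decision rule described above.
from math import exp
from itertools import product

def neighbor_bba(label, d, omega, alpha0=0.95, gamma=1.0, beta=2):
    # Simple BBA of one neighbor: m({s_i}) = alpha, m(Omega) = 1 - alpha.
    alpha = alpha0 * exp(-gamma * d ** beta)
    return {frozenset([label]): alpha, frozenset(omega): 1.0 - alpha}

def dempster(m1, m2):
    # Dempster's rule: conjunctive combination with conflict normalization.
    out, conflict = {}, 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            out[C] = out.get(C, 0.0) + mA * mB
        else:
            conflict += mA * mB
    return {A: v / (1.0 - conflict) for A, v in out.items()}

def evidential_knn(x, training_set, distance, omega, k=5):
    neighbors = sorted(training_set, key=lambda p: distance(x, p[0]))[:k]
    m = {frozenset(omega): 1.0}                       # vacuous BBA
    for obj, label in neighbors:
        m = dempster(m, neighbor_bba(label, distance(x, obj), omega))
    # Pignistic transformation, then pick the most probable class.
    betp = {s: sum(v / len(A) for A, v in m.items() if s in A) for s in omega}
    return max(betp, key=betp.get)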
§ PROPOSED DYNAMIC TIME WARPING DISTANCE FOR PROPAGATION NETWORKS SIMILARITY
The propagation network is a graph-based data structure that is used
to store the propagation traces of a message. The PrNet has two main characteristics that distinguish it from an ordinary DAG (Directed Acyclic Graph): first, its arcs are weighted
by the type of the relationship between users, and second, its paths are time dependent. In this paper, we choose to use distance-based classifiers, the probabilistic and the evidential k-NN; we therefore need to measure the distance between the PrNet to be classified and those in the training set. In <cit.>, we presented two PrNet classifiers that are based on mathematical distances like the Euclidean distance and the Jaccard distance. This solution needs to transform the PrNet into a set of probability or BBA distributions; it then computes the distance between those distributions instead of between PrNets. This transformation
may lead to a loss of information. A second solution may be to
use a graph distance metric to measure the similarity between PrNets.
In the literature, we found several distances like graph
edit distances <cit.> and maximal common sub-graph
based distances <cit.>. However, all these distances do
not consider the time dimension, which is a characteristic of the PrNet.
Hence the need for a new distance that is adapted to weighted
time-dependent DAGs like the PrNet. As a solution to this problem
we propose the Dynamic Time Warping distance for propagation network
similarity (PrNet-DTW).
The Dynamic Time Warping similarity measure <cit.> was first
proposed for speech recognition; it considers the fact that speech
is time dependent. Recently, <cit.> proposed to use it
to measure the similarity between two sequences, where a sequence
is an ordered list of elements. The DTW distance is used to
consider the order of appearance of each element in the sequences
while computing the distance between them. Let A=(a_1,a_2,…,a_S)
and B=(b_1,b_2,…,b_T) be two sequences. DTW(A_i,B_j)
is the DTW distance between A and B and it is defined as <cit.>:
DTW(A_i,B_j) = δ(a_i,b_j) + min{ DTW(A_i-1,B_j-1), DTW(A_i,B_j-1), DTW(A_i-1,B_j) }
Note that δ(a_i,b_j) is the distance between
the two elements a_i∈ A and b_j∈ B. As mentioned in <cit.>, a naive implementation
of this recursive function leads to exponential time complexity.
They propose the memoization technique
as a solution to speed up
the computation. Hence, we need a | S|×| T| matrix
in which we record previous results in order to avoid recomputing them
in later iterations. This technique keeps the time and
space complexity of the DTW distance at O(| S|×| T|).
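A minimal memoized implementation of this recurrence is sketched below; for very long sequences, an iterative table fill would avoid recursion-depth limits.

# Memoized DTW for two sequences A and B, with element distance delta.
def dtw(A, B, delta):
    memo = {}
    def rec(i, j):                      # DTW between A[:i+1] and B[:j+1]
        if i < 0 or j < 0:
            return 0.0 if (i < 0 and j < 0) else float("inf")
        if (i, j) not in memo:
            memo[(i, j)] = delta(A[i], B[j]) + min(
                rec(i - 1, j - 1),      # match
                rec(i, j - 1),          # insertion
                rec(i - 1, j))          # deletion
        return memo[(i, j)]
    return rec(len(A) - 1, len(B) - 1)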
The PrNet-DTW distance is used to measure the distance
between two propagation networks. In the first step, we transform
each PrNet into a set of dipaths. We define a dipath as a finite sequence of vertices connected by arcs that are directed in the
same direction (lines 1 and 2 in algorithm <ref>).
We note that all dipaths start from the source of the message. In the second step,
the PrNet-DTW algorithm loops over DipathSet1;
at each iteration, it fixes a dipath, computes
its DTW distance with all dipaths in DipathSet2, and takes the minimal
value. Finally, it computes the mean of the minimal distances between
dipaths in DipathSet1 and those in DipathSet2 to be the PrNet-DTW distance. Details are shown
in algorithm <ref>. We choose the k-NN algorithm
and the evidential k-NN algorithm to classify propagation networks
because they are distance based classifiers and can be used
with the proposed PrNet-DTW distance.
Input: PrNet1 and PrNet2: two propagation networks
Output: Distance: the distance between PrNet1 and PrNet2
1: DipathSet1 ← PrNet1.TransformToDipathSet()
2: DipathSet2 ← PrNet2.TransformToDipathSet()
3: Distance ← 0
4: for i = 1 to DipathSet1.size() do
5:   D ← maxValue
6:   for j = 1 to DipathSet2.size() do
7:     D ← min(D, DTW(DipathSet1.get(i), DipathSet2.get(j)))
8:   end for
9:   Distance ← Distance + D
10: end for
11: Distance ← Distance / DipathSet1.size()
Algorithm 1: PrNet-DTW algorithm
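The algorithm translates directly into Python, reusing the memoized dtw() sketch above; the PrNet interface (transform_to_dipath_set) is an assumed name, and the dipath elements are taken here to be the arc weight vectors (w_f, w_m, w_r), compared with the euclidean distance chosen later for δ in the experiments.

# Direct transcription of Algorithm 1.
def euclidean(u, v):
    # delta between two weight vectors, e.g. (w_f, w_m, w_r)
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def prnet_dtw(prnet1, prnet2, delta=euclidean):
    dipaths1 = prnet1.transform_to_dipath_set()   # assumed PrNet method
    dipaths2 = prnet2.transform_to_dipath_set()
    total = 0.0
    for p1 in dipaths1:
        # distance from p1 to its closest counterpart in the other network
        total += min(dtw(p1, p2, delta) for p2 in dipaths2)
    return total / len(dipaths1)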
§ EXPERIMENTS AND RESULTS
We used the library Twitter4j
[Twitter4j is a java library for the Twitter API, it is an open-sourced
software and free of charge and it was created by Yusuke Yamamoto.
More details can be found in http://twitter4j.org/en/index.html.
] which is a java implementation of the Twitter API to collect Twitter
data. We crawled the Twitter network for the period between 08/09/2014
and 03/11/2014. After a data cleaning step, we got our data set that
contains tweets of three different classes: “Android”,
“Galaxy” and “Windows”. To simplify the tweet classification step, we consider a tweet that contains the name of a class C (for example, a tweet that contains the word “Android”) to be of type C (the class “Android” in our example). Table <ref> presents some statistics about the data set.
The remainder of this section is organized as follows: we present
our experimental configuration, the method with which we extracted
propagation traces, and the computation of link weights; then,
we compare the proposed classifiers with those of <cit.>.
§.§ Experiments configuration
In our experiments, we need to extract propagation traces of each
type of message. Here, we consider that a tweet of type a was propagated
from a user u to a user v if and only if u posts a tweet of type a
before v and at least one of these relations between u and v
exists: 1) v follows u, 2) u mentions v in a tweet of type
a, 3) v retweets a tweet of type a written by u. After
getting the propagation traces, we extract propagation networks such that
each PrNet has exactly one source.
We define the types of links that are used to measure the similarity
between propagation networks. In the Twitter social network there are
three possible relations: the first one, the follow relation, is explicit;
the second and the third, the mention and the retweet, are implicit.
Another property of Twitter is that
between two users u and v we can have a follow, a mention and/or
a retweet relation. We assign to each of those a weight <cit.>
and we assign to each link a vector of weights of the form (w_f,w_m,w_r).
Let S_u be the set of successors of u, P_u the set of
predecessors of u, T_u the set of tweets of u, R_u(v)
the set of tweets of u that were retweeted by v, M_u(v)
the set of tweets of u in which v was mentioned and M_u
the set of tweets in which u mentions another user. We compute the weights
<cit.> as follows:
* Follow relation: w_f(u,v)=| S_u∩(P_u∩{ u})|/| S_u|
* Mention relation: w_m(u,v)=| M_u(v)|/| M_u|
* Retweet relation: w_r(u,v)=| R_u(v)|/| T_u|
Finally, we choose the euclidean distance to evaluate δ(a_i,b_j) in the computation of the PrNet-DTW distance.
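The three weight formulas translate directly into code; the sketch below transcribes them verbatim, with assumed (illustrative) per-user data structures and guards against empty denominators.

# Sketch of the link-weight computation, transcribed from the list above.
# Assumed data structures (all names are illustrative):
#   S[u]: successors of u          P[u]: predecessors of u
#   T[u]: tweets of u              M[u]: tweets where u mentions someone
#   M_uv[(u, v)]: tweets of u mentioning v
#   R_uv[(u, v)]: tweets of u retweeted by v
def weight_vector(u, v, S, P, T, M, M_uv, R_uv):
    w_f = len(S[u] & (P[u] & {u})) / len(S[u]) if S[u] else 0.0
    w_m = len(M_uv.get((u, v), set())) / len(M[u]) if M[u] else 0.0
    w_r = len(R_uv.get((u, v), set())) / len(T[u]) if T[u] else 0.0
    return (w_f, w_m, w_r)   # label of the arc (u, v)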
§.§ Experiments evaluation
In our experiments, we want to evaluate the performance of the PrNet-DTW
distance; we therefore integrate it in the k-NN and
the evidential k-NN classifiers and compare the proposed classifiers
with those proposed in <cit.>. As the PrNet classifiers work
with discrete types of links <cit.>, a discretization
step was needed, i.e., if the weight value (w_f, w_m or w_r) is greater than 0 we replace it by 1 in the discrete weight vector, otherwise we replace it by 0.
For example, if the link is weighted by the vector
(w_f=0.5, w_m=0, w_r=0.25), the output after the discretization step will be (1, 0, 1).
In the remainder of our experiments, we randomly divide our data set
into two subsets; the first one contains 90% of the PrNets and is used
for training, and the second one (10%) is used for testing.
The k-NN algorithm is known to be dependent on the value of k, and
varying k may change the classification accuracy. To see the
impact of the parameter k, we made the following experiment: we ran our
k-NN based algorithms with multiple values of k and obtained the results
in Figure <ref>. We note that odd values of k are more
appropriate when we use the PrNet-DTW probabilistic k-NN. Moreover,
the PrNet-DTW belief k-NN does not have the same behavior as the PrNet-DTW
probabilistic k-NN. In fact, the curve of the evidential classifier
is more stable than the curve of the probabilistic one, and the variation
of the value of k does not have a great effect on the classification
accuracy.
A second experiment was done to evaluate and compare the proposed classification
methods. We fixed the parameter k to 5 and obtained the results
in Table <ref>. As shown in Table <ref>,
the probabilistic and the belief classifiers do not give good classification
accuracy; this behavior is a consequence of the discretization step
that leads to the loss of the information given by the weight values.
In contrast, the PrNet-DTW based classifiers show their performance;
indeed, we got good accuracy rates: 88.69% (±3.39, for a 95% confidence
interval) and 89.92% (±3.20) respectively. We also see that the
PrNet-DTW belief classifier gives slightly better results.
§ CONCLUSION
To sum up, we presented a new distance metric that we called
PrNet-DTW. Our measure is used to quantify the distance between propagation
networks. We also showed the performance of our measure in the
classification of propagation networks; indeed, we defined two
classification approaches that use the PrNet-DTW measure: the probabilistic
k-NN and the evidential k-NN.
In future work, we will seek to improve the PrNet-DTW based classifiers
by taking into account the content of the message to be classified;
we believe that a classification approach that uses information
about the content of the message together with information about its propagation
will further improve the results.
§ ACKNOWLEDGEMENT
This research and innovation work is carried out within the framework
of the MOBIDOC scheme, financed by the European Union under the PASRI
program and administered by the ANPR. We also thank the Centre d'Etude et de Recherche des Télécommunications (CERT) for their support.
|
http://arxiv.org/abs/1701.08034v1 | 20170127124252 | Scalable Attestation Resilient to Physical Attacks for Embedded Devices in Mesh Networks | [
"Florian Kohnhäuser",
"Niklas Büscher",
"Sebastian Gabmeyer",
"Stefan Katzenbeisser"
] | cs.CR | [
"cs.CR"
] |
Scalable Attestation Resilient to Physical Attacks
for Embedded Devices in Mesh Networks
Florian Kohnhäuser
Technische Universität Darmstadt, Germany
kohnhaeuser@seceng.informatik.tu-darmstadt.de
Niklas Büscher
Technische Universität Darmstadt, Germany
buescher@seceng.informatik.tu-darmstadt.de
Sebastian Gabmeyer
Technische Universität Darmstadt, Germany
gabmeyer@seceng.informatik.tu-darmstadt.de
Stefan Katzenbeisser
Technische Universität Darmstadt, Germany
katzenbeisser@seceng.informatik.tu-darmstadt.de
January 27, 2017
===========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Interconnected embedded devices are increasingly used in various scenarios, including industrial control, building automation, or emergency communication. As these systems commonly process sensitive information or perform safety critical tasks, they become appealing targets for cyber attacks. A promising technique to remotely verify the safe and secure operation of networked embedded devices is remote attestation. However, existing attestation protocols only protect against software attacks or show very limited scalability. In this paper, we present the first scalable attestation protocol for interconnected embedded devices that is resilient to physical attacks. Based on the assumption that physical attacks require an adversary to capture and disable devices for some time, our protocol identifies devices with compromised hardware and software. Compared to existing solutions, our protocol reduces communication complexity and runtimes by orders of magnitude, precisely identifies compromised devices, supports highly dynamic and partitioned network topologies, and is robust against failures. We show the security of our protocol and evaluate it in static as well as dynamic network topologies. Our results demonstrate that our protocol is highly efficient in well-connected networks and robust to network disruptions.
[500]Security and privacy Access control
[500]Security and privacy Mobile and wireless security
[500]Networks Network architectures
Acronyms: TEE (Trusted Execution Environment), ROM (Read-Only Memory), MPU (Memory Protection Unit), DoS (Denial of Service).
§ INTRODUCTION
Nowadays, networked embedded devices are increasingly present in every aspect of our lives. This paradigm, often referred to as the Internet of Things (IoT), is expected to constantly evolve in scale and complexity, reaching 20.8 billion devices by 2020 <cit.>. Technologies like Bluetooth Smart, IEEE 802.15.4, Wi-Fi Direct, ZigBee, or Z-Wave enable embedded devices to form large wireless mobile ad hoc networks (MANETs). In MANETs, all devices cooperate in the distribution of data in the network, thus establishing a decentralized and self-organized network topology. Interconnected embedded devices are frequently used in industrial control, building automation, military communication, or sensor networks. As such systems often process privacy-sensitive information or perform safety-critical tasks, their malfunction or misuse can cause serious damage. Unfortunately, software for embedded systems is typically written in unsafe programming languages and often reluctantly maintained. Additionally, even though an adversary requires significant resources to physically tamper with a device <cit.>, (secure) hardware on embedded systems is usually not hardened against physical tampering; thus, interconnected embedded devices are appealing targets for cyber attacks <cit.>.
To detect and mitigate such attacks, it is important to monitor the correct operation of embedded devices and detect any malfunctioning or misuse as early as possible. For this purpose, attestation protocols have been introduced, which allow a third party, the verifier, to check the integrity of a remote device, the prover. Since traditional single device attestation protocols are impractical in large mesh networks due to their overhead of attesting each device individually, scalable attestation protocols have recently been proposed <cit.>. These protocols perform an efficient attestation of large networks by distributing the attestation burden across all devices in the network. All scalable attestation protocols are based on the assumption that an adversary can only manipulate the software of provers. Thus, they cannot withstand an adversary who is able to perform physical attacks and tamper with the hardware of provers. Yet, an adversary can rather easily capture a device and tamper with its hardware as devices forming MANETs are often distributed over wide public areas and consist of a multitude of devices. Hence, a scalable attestation protocol that is resilient to physical attacks is much needed.
Ibrahim et al. <cit.> presented a first approach to solve this problem by combining existing scalable attestation approaches <cit.> with absent detection <cit.> to detect both software and hardware attacks. The absent detection protocol is based on the assumption that a strong adversary, who physically tampers with a device, must temporarily take the device offline for a certain amount of time, e.g., to disassemble the device and extract secret keys <cit.>. To detect offline and thus physically compromised devices, each device periodically emits a heartbeat that needs to be received, verified, and logged by every other device in the network. Although a functional solution to the problem, the protocol suffers from several shortcomings. First, the amount of exchanged messages per heartbeat period scales quadratically with the number of devices in the network. This causes scalability issues in large networks with respect to network communication, energy consumption, and runtime performance. Furthermore, the protocol is very error-prone, since a single defective transmission of a heartbeat suffices to cause a false positive, where a healthy device is mistakenly regarded as compromised. Aggravating this, the protocol is only able to attest the state of the overall network and cannot identify particular compromised devices. Hence, a single false positive causes the entire network to be considered as compromised. Finally, the protocol relies on the assumption that during protocol execution the network topology is static and connected, which is a very strong limitation for wireless mesh networks.
In this paper, we present the first scalable attestation protocol for interconnected embedded devices that is resilient to physical attacks. To protect against strong adversaries, we build on the established assumption that an adversary needs to take a device offline to physically tamper with it <cit.>. In our protocol, a single leader device periodically emits a new heartbeat that is propagated in the network. To obtain the newest heartbeat from a neighboring device, a device must authenticate itself with the previous heartbeat. Since a device that is under physical attack has to be absent for at least one heartbeat period, it will miss this period's heartbeat and thus be unable to obtain any further heartbeats. To prevent a collusion between compromised devices, heartbeats are stored in lightweight secure hardware and transmitted encrypted via secure channels. During the actual attestation, devices that fail to authenticate with the newest heartbeat are regarded as physically compromised, whereas devices with a compromised software are detected based on existing software attestation techniques. In case of an outage of the leader, a new leader device is determined through a leader election process. By optionally storing the attestation result in each device, our protocol is able to efficiently attest highly dynamic and partitioned network topologies.
We show that our protocol is secure against an adversary who compromises all but one device in the network. Finally, we demonstrate the practicability of our protocol in static and dynamic networks. In summary, our protocol provides the following improvements over existing work:
* It can precisely identify devices whose hardware and/or software is compromised, if less than half of all devices in the network are compromised.
* It is very efficient. Compared to the best previous work <cit.>, we reduce the number of sent messages per time period from O(n^2) to O(n)[In fact, when detecting physically compromised devices through their absence, O(n) transmitted messages per time period is the best possible solution, since each device must at least send or receive one message to show that it is present.], thus achieving scalability to millions of devices (where n denotes the total number of devices in the network).
* It is robust against network and device failures by
(1) relying on a one-to-many delay-tolerant link, in contrast to the many-to-many continuous link used in the best previous work <cit.>, and
(2) offering a recovery mechanism, the leader election protocol, that minimizes the amount of false positives.
* It provides a novel, efficient aggregation scheme, e.g., the attests of 4,000 devices fit into 1 kB. This allows to attest highly dynamic and partitioned network topologies efficiently.
* It is the first scalable attestation protocol that is evaluated in dynamic network topologies.
Outline The rest of the paper is organized as follows. In <ref> we summarize existing work. In <ref> the system model, device requirements, and adversary model are presented. In <ref>, we describe our novel attestation approach to detect physically compromised devices. Then, in <ref> we extend the attestation protocol to execute a recovery protocol on failures, verify the software integrity of devices, and support dynamic topologies during attestation. The performance of our protocol is evaluated in <ref>. Finally, we conclude in <ref>.
§ RELATED WORK
Device Attestation
Remote attestation is a mechanism that allows a third party, the
verifier, to check the integrity of a remote system, the
prover. Protocols that target the attestation of a single embedded
device are either
software-based <cit.> or
hardware-based <cit.>. Software-based techniques
require no secure hardware, but rely on assumptions that have been
shown to be hard to achieve in
practice <cit.>. Hardware-based attestation
mechanisms provide much stronger security guarantees by relying on
lightweight security architectures. Nevertheless, single-device
approaches are impractical in mesh networks due to the large overhead
of attesting each device individually.
Recently, protocols started to focus on an efficient attestation of
multiple embedded devices. Park et al. <cit.> proposed
to compare the integrity measurements of multiple devices. Yet, their
approach requires identical devices and only enables a probabilistic
attack detection rate. Asokan et al. <cit.> present a
highly efficient attestation scheme for large-scale networks of
embedded devices that requires only ROM and a simple MPU. In
their scheme, each device attests its neighbors and reports the
aggregated result back to its parent, eventually received by the
verifier. Ambrosin et al. <cit.> enhance this work
by introducing a novel signature scheme that enables anyone to
publicly verify the attestation result and allows the network to
contain untrustworthy aggregator devices, such as routers or cloud
servers. Yet, besides the work by Ibrahim et al. <cit.>, which has been discussed in <ref>, existing works consider the adversary to compromise only the software on devices. In mesh networks, this assumption may not hold, since an adversary can comparatively easily capture a device and physically tamper with it.
Capture Detection
Several works have been proposed on the detection of node capture attacks, where an adversary physically approaches and manipulates a device. They all build on the assumption that an adversary needs to take a device offline, in order to tamper with it <cit.>. Conti et al. suggested that a node is collaboratively flagged as captured if it fails to re-meet with any other node within a fixed time interval <cit.>. In the approach by Ho <cit.>, nodes use statistical methods to detect absent neighbor devices in static network topologies. Recently, Agrwal et al. proposed to deploy multiple TPM-equipped cluster heads in the network, which check the integrity of the software as well as the physical presence of all nodes in the cluster <cit.>. Nevertheless, existing approaches are unable to detect devices with compromised software <cit.>, require the deployment of additional hardware <cit.>, are only applicable in static network topologies <cit.>, or lack scalability <cit.>.
Secure Data Aggregation Since ad hoc networks are often deployed to collect sensory data, many efficient and integrity-preserving aggregation schemes for mesh networks have been proposed. Unfortunately, these schemes rely on very costly asymmetric cryptographic operations <cit.>, require maintaining a specific network topology during aggregation <cit.>, or need multiple communication rounds <cit.>, all of which are undesirable, as they lead to communication overhead in dynamic network topologies. Thus, a lightweight aggregation scheme suitable for remote attestation of embedded devices that supports dynamic topologies and allows the identification of compromised devices is missing.
§ PRELIMINARIES
System Model
In our model, we consider embedded devices that can be heterogeneous in terms of hardware capabilities and software resources, e.g., devices with different software, computational power, storage capacity, or security functionalities. All embedded devices are connected in a mesh network topology. This topology can be static, where devices remain stationary and the network is connected, or dynamic, where devices can move freely and the network can be temporarily partitioned. However, in dynamic network topologies, we assume that devices meet each other regularly due to their mobility. Devices that are unreachable for some time are regarded as compromised, since it is uncertain whether they will ever contribute to the network again. We further assume that each device i is initialized and deployed once by a trusted network operator 𝒪 ( <ref>).
After deployment, the goal of 𝒪 is to ensure the correct and safe operation of all devices 1, 2, ..., n in the network. Therefore, 𝒪 regularly verifies the integrity of all devices by executing the proposed attestation protocol. The attestation protocol determines all devices whose software is in a trustworthy, i.e., unmanipulated and up-to-date, state and whose hardware has not been tampered with. We refer to these devices as healthy devices, in contrast to compromised devices. Executing the protocol, 𝒪 is able to learn the precise identity of all healthy and all compromised devices. This may serve as a first step towards physically locating and recovering compromised devices. In order to perform the attestation protocol, 𝒪 requires a connection to at least one device in the network.
Device Requirements
We assume that each device i provides the minimal hardware properties for remote attestation, according to the work by Francillon et al. <cit.>. In practice, these properties can be implemented with ROM and a simple MPU. ROM stores the protocol code and cryptographic keys, and the MPU ensures an uninterruptible execution of the protocol code and allows only protocol code to access the cryptographic keys. Recently, it has been shown that these minimal hardware properties are available even on many low-cost commodity embedded devices <cit.>. Additionally, our attestation protocol relies on authentic time measurements. In order to prevent malware from tampering with the device clock, each device must provide a write-protected real-time clock. Protected real-time clocks are already built-in many existing commodity embedded devices <cit.>. We henceforth refer to the execution space, where all required hardware properties are fulfilled, as TEE.
Adversary Model
In this work, we regard a powerful adversary who is able to
mount attacks on the network as well as on the software and hardware of devices.
In detail, the adversary is granted full control over all
messages in the network (Dolev-Yao model).
Thus, the adversary can eavesdrop, modify,
delete, or synthesize all messages between any two entities.[We note that the model allows DoS attacks, such as jamming or cutting wires. These attacks cannot be prevented against a physically present adversary. However, DoS attacks have no influence on the security of our scheme, as the adversary cannot use them to forge a healthy system state.]
Moreover, the adversary is allowed to compromise the software of all
devices in the network. This gives the adversary full control over the devices'
execution state and storage, yet no access to the protected contents
inside the TEE.
We further allow the adversary to capture and physically tamper with up to all
but one device in the network, when attesting the overall network state,
and up to half of all devices in the network, when knowledge of the
precise identity of compromised devices is required.
For the physically compromised devices,
the adversary is able to access device secrets and code inside the TEE
and is allowed to manipulate the clock. We note that it is impossible
to guarantee a secure device attestation if all devices in the network
have been physically compromised <cit.>.
Finally, as in <cit.>, we assume that mounting a physical attack
requires at least a time τ during which the device is offline,
e.g., to decapsulate the device and to launch a microprobing attack.
Depending on the device's level of tamper resistance and the
adversary's resources, such attacks typically require hours up to
weeks in specialized laboratory environments <cit.>.
§ OUR PROTOCOL
In the following, we describe our protocol, which identifies devices in the network that have been physically tampered with. Note that the detection of hybrid attacks, i.e., attacks that target both hardware and software, is discussed in the next section ( <ref>). The protocol consists of three different phases. In the initialization phase ( <ref>), the trusted network operator 𝒪 initializes each device once, before the deployment of the network. The heartbeat phase ( <ref>) is periodically executed during the operation of the network. In this phase, all physically uncompromised devices maintain a valid state by sharing a common group key, namely the heartbeat. We will show how the heartbeat is periodically regenerated and propagated in the network and demonstrate that physically compromised devices are unable to obtain the heartbeat. Finally, in the attestation phase ( <ref>), 𝒪 initiates an attestation of the network and obtains a report, which exhibits all physically compromised devices.
§.§ Initialization Phase
Preliminaries
Devices can either be in a healthy or compromised hardware
state. We discretize the time into non-overlapping time periods
t ∈{1,2,3,...} of fixed length δ. We reference the starting times
of each time period with T_1, T_2, T_3, .... The real time T_clock can be
read by any device from a reliable read-only clock, which for simplicity
is assumed to be synchronized between all devices. Each device keeps track of
the current time period t, running from time T_t until T_t+1. In the
remainder of this section, we assume an implementation of a function
checkPeriod(t) that returns a constant valid iff the real time is within the
time period indicated by parameter t, i.e., T_t ≤ T_clock < T_t+1,
and expired otherwise.
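As a concrete illustration, a minimal C sketch of this period check is given below; the function name checkPeriod, the constants valid and expired, the deployment start time T1, and read_clock() as the interface to the protected real-time clock are illustrative assumptions, not part of the formal protocol specification.

#include <stdint.h>

typedef enum { expired = 0, valid = 1 } period_t;

/* Illustrative assumptions: read_clock() reads the write-protected
 * real-time clock; T1 is the start of time period 1; delta is the
 * fixed period length in seconds. */
extern uint64_t read_clock(void);
static const uint64_t T1    = 0;
static const uint64_t delta = 600; /* e.g., 10-minute periods */

/* Returns valid iff the real time lies within period t, i.e.,
 * T_t <= T_clock < T_{t+1}, and expired otherwise. */
period_t checkPeriod(uint64_t t)
{
    uint64_t T_clock = read_clock();
    uint64_t T_t     = T1 + (t - 1) * delta; /* periods start at t = 1 */
    return (T_clock >= T_t && T_clock < T_t + delta) ? valid : expired;
}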
Enrollment
In the enrollment phase, the network operator 𝒪 initializes the
TEE of all devices with the following secrets. First, devices store two
initial heartbeats hb_cur and hb_next, which function as a group secret between
all healthy devices.
Second, each device is equipped with a device-dependent symmetric key dk_i,
used during attestation to generate a device-unique attest,
and an asymmetric key pair (pk_i, sk_i), employed to establish
secure channels between devices. Finally, devices record the current time period
t, their own device identifier i, and the identifier of the leader device min, which is the
first device 1 in the network. Table <ref> provides
a summary of relevant definitions.
For explanatory reasons, we assume an initial enrollment of all devices. However, our protocol also allows devices to be enrolled at any point in time by issuing the current heartbeat.
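To make the stored secrets concrete, the following C sketch shows a possible layout of the TEE-protected device state; all field names and the 128-bit key length are illustrative assumptions that merely mirror the quantities listed above and in the memory-cost discussion of the evaluation.

#include <stdint.h>

#define KEY_LEN 16 /* 128-bit symmetric keys, matching the AES-GCM setup */

/* Hypothetical per-device state kept inside the TEE. */
typedef struct {
    uint8_t  hb_cur[KEY_LEN];  /* heartbeat of the current time period  */
    uint8_t  hb_next[KEY_LEN]; /* heartbeat of the next time period     */
    uint8_t  dk[KEY_LEN];      /* device-dependent attestation key dk_i */
    uint8_t  sk[32];           /* Curve25519 secret key sk_i            */
    uint8_t  pk[32];           /* Curve25519 public key pk_i            */
    uint32_t t;                /* current time period pointer           */
    uint32_t id;               /* own device identifier i               */
    uint32_t min;              /* identifier of the current leader      */
} tee_state_t;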
§.§ Heartbeat Phase
Basic Idea
The heartbeat protocol is the core protocol of our approach. It excludes
devices from the network that are offline for more than one time period and, hence,
are assumed to be physically tampered with. During protocol execution,
a so-called leader device emits a new secret group key, named
heartbeat, that is propagated in the network. Obtaining this heartbeat
requires a device to authenticate with the heartbeat of the previous time period.
Therefore, devices that are offline in an arbitrary time period T_a miss the
heartbeat that is propagated in T_a and thus are unable to obtain a heartbeat
in any subsequent time period T_a+1, T_a+2, .... Since any communication
between devices in all protocols is secured using the newest heartbeat as a key,
physically compromised devices are unable to participate any more.
In the following, we describe the heartbeat transmission protocol, formalized in
Figure <ref>, which is run between two neighboring devices to
transfer the heartbeat from one device to the other.
Heartbeat Transmission Protocol
The emission of the new heartbeat in every time period is initialized
by the leader device. As soon as the leader
observes that the real time T_clock has reached the start of a new
time period (checkPeriod(t) returns expired), the leader first
updates the current heartbeat hb_cur to the most recently
exchanged heartbeat hb_next.
We remark that heartbeats could also be indexed by the time period in which
they are active, e.g., hb_1, hb_2, hb_3, …. However,
as only two heartbeats are relevant for any device, only these two, i.e.,
the current heartbeat hb_cur and the next heartbeat hb_next, are stored and referenced.
After updating the current heartbeat, the leader samples a new heartbeat hb_next
for the subsequent period t+1 and increments its time pointer t by one.
Consequently, the time period described by the
pointer is now ahead of the real time, T_t > T_clock.
A time pointer ahead of the real time indicates to a device that it
is in possession of a heartbeat for the upcoming time period.
The leader initialization code is illustrated below.
upon checkPeriod(t) = expired:
    hb_cur ← hb_next
    hb_next ← PRNG()
    t ← t + 1
    broadcast(msg_new)
Next, the leader informs its
neighbors about the new heartbeat with a message msg_new.
For simplicity, we henceforth assume that two neighboring devices
have already established a shared secret, the channel key ck_ij, by performing
a key exchange using their public keys authenticated with the current heartbeat.
On receiving
msg_new from any device i, a device j will enter
its TEE and check whether the next time period has been reached.
If this is the case, j will update
its current heartbeat to the previously communicated one. Afterwards,
j encrypts a fixed string, e.g., '0', under the current
heartbeat XOR-ed with the channel key ck_ij shared by
both devices and sends the result to i. We refer to this
XOR-ed key as the session key.
A healthy i can decrypt the message by also computing the session
key. A successful decryption proves that j is in possession of
the current heartbeat (and the channel key) and is therefore eligible for the
next heartbeat. Then, i answers with a
message msg_hb containing the next heartbeat hb_next, also encrypted
with the session key. On successful decryption, device j stores the
new heartbeat as hb_next. Afterwards, j increments its time
period pointer and then announces this new heartbeat to its neighbors with
msg_new. Figure <ref> illustrates the heartbeat
transmission phase in a network with 6 healthy devices and one adversarial
device A that was physically compromised in time period t = 2.
We note that the heartbeat protocol relies on the availability of the leader
device, which constitutes a single point of failure. In
<ref> we present an extension that makes
the heartbeat protocol more robust against device outages, network
partitioning, or targeted denial of service attacks.
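The following C sketch illustrates the core of the transmission between neighbors i and j, run inside the TEE. The AEAD wrappers aead_enc and aead_dec, the tag length, and the buffer layout are illustrative assumptions around the AES-GCM scheme used in our implementation; the sketch omits message identifiers and transport details.

#include <stddef.h>
#include <stdint.h>

#define KEY_LEN 16
#define TAG_LEN 16

/* Assumed AEAD primitives (e.g., AES-GCM); return 0 on success.
 * The ciphertext buffer must hold n + TAG_LEN bytes. */
extern int aead_enc(const uint8_t key[KEY_LEN],
                    const uint8_t *pt, size_t n, uint8_t *ct);
extern int aead_dec(const uint8_t key[KEY_LEN],
                    const uint8_t *ct, size_t n, uint8_t *pt);

/* Session key: current heartbeat XOR-ed with the pairwise channel key. */
static void session_key(const uint8_t hb_cur[KEY_LEN],
                        const uint8_t ck_ij[KEY_LEN],
                        uint8_t out[KEY_LEN])
{
    for (size_t b = 0; b < KEY_LEN; b++)
        out[b] = hb_cur[b] ^ ck_ij[b];
}

/* Device j: prove knowledge of hb_cur by encrypting a fixed string. */
int build_msg_req(const uint8_t hb_cur[KEY_LEN],
                  const uint8_t ck_ij[KEY_LEN],
                  uint8_t msg_req[1 + TAG_LEN])
{
    uint8_t k[KEY_LEN], fixed = '0';
    session_key(hb_cur, ck_ij, k);
    return aead_enc(k, &fixed, 1, msg_req);
}

/* Device i: verify msg_req and, on success, answer with msg_hb that
 * carries hb_next under the same session key. */
int build_msg_hb(const uint8_t hb_cur[KEY_LEN],
                 const uint8_t hb_next[KEY_LEN],
                 const uint8_t ck_ij[KEY_LEN],
                 const uint8_t msg_req[1 + TAG_LEN],
                 uint8_t msg_hb[KEY_LEN + TAG_LEN])
{
    uint8_t k[KEY_LEN], pt;
    session_key(hb_cur, ck_ij, k);
    if (aead_dec(k, msg_req, 1, &pt) != 0 || pt != '0')
        return -1; /* j does not hold the current heartbeat */
    return aead_enc(k, hb_next, KEY_LEN, msg_hb);
}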
§.§ Attestation Phase
Basic idea. The attestation protocol allows the operator
𝒪 to check the state of all devices in the network. For this purpose,
𝒪 issues an attestation request that is answered by all devices with
an attestation report. Propagating the attestation request through the network arranges a spanning tree whose root is 𝒪. This enables an efficient transmission and aggregation of attestation reports along the spanning tree to 𝒪.
Our protocol supports two variants of attestation. The first variant allows to attest
the overall network state and is secure against an adversary who compromises all but one device.
However, it only outputs a Boolean result, namely whether all devices are healthy or not.
The second variant precisely identifies compromised devices by id and in this way
increases the protocol's robustness and applicability in practice. Yet, it requires
more than half of all devices in the network to be healthy.
Attestation protocol. The protocol is formalized in
Figure <ref>. The operator
𝒪 initially connects to a device i in the network and emits
an attestation request. The request contains the concatenation of a current
timestamp ts and the number of devices n in the network,
encrypted under the device's key dk_i,
which is only shared between i and 𝒪.
By verifying the authenticity and timeliness of the request
(ts),
denial of service attacks through replays can be prevented.
Next, the attestation request, consisting of the concatenation of ts and n,
is propagated by i to its neighboring devices. This and all following
communication between two devices is secured with the pairwise session key,
i.e., the current heartbeat XOR-ed with the channel key. Any
device that receives an attestation request first verifies the request and
then also propagates the request to its neighboring devices. These
steps are repeated until the attestation request reaches devices, whose
neighbors already have received the request. In this way, a spanning tree is
constructed. Leaf devices that cannot propagate the request any further
return an attestation report to their parent device from which they initially obtained the attestation request.
The attestation report contains their own attest, which consists of ts encrypted under their own device key.
Every non leaf device merges its own attest (and identifier) with all received attestation
reports and propagates the merged report to its parent device.
Eventually, i merges a final report that contains all healthy devices
in the network. This final report is encrypted under dk_i and transmitted
to 𝒪, who verifies the report, as described in the next paragraph.
We note that the attestation must be completed in time τ,
or 𝒪 has to periodically check the presence of i
during attestation.
Otherwise, the adversary can physically tamper with i to extract an aggregate
and induce attests of physically compromised devices.
Figure <ref>
illustrates the attestation phase in a network with 6 healthy devices and one
adversary device A that was physically compromised.
Report Aggregation and Merging
An aggregated attestation report consists of two parts. The first part contains
a description of all device identifiers that are in the aggregate. The second part
consists of the aggregated attests.
For a small number of devices, the description is a list of device identifiers;
otherwise, it is an n-bit vector, where a one at position
k indicates that device k is contained in the aggregate.
The attests themselves are aggregated by XOR-ing all
individual attests.
Multiple attestation reports are aggregated by merging their device descriptions and XOR-ing their aggregated attests.
When attesting the overall network state, the attestation report consists of only
the aggregate, as a device identification is not required.
This decreases the size of the report significantly ( <ref>).
Therefore, to increase efficiency, it is useful to run the attestation
with precise device identification only if an attestation of the overall network state fails.
Report Verification
Given a device description, 𝒪 recomputes the attests for all devices
whose id is contained in the description. Given no description, 𝒪 recomputes
the attests for all devices.
If the recomputed aggregate equals the reported aggregate and if at least
n/2 attests are included in the report, then the report is assumed to
be valid. Only then, all attested devices are assumed to be healthy and
the verification returns a bit vector, where a zero/one at position k
indicates that k is compromised/healthy.
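As an illustration of the identifying variant, the following C sketch implements report merging and the operator-side verification. The type report_t, the fixed network size, and the helper enc_ts, which stands in for recomputing the attest Enc_dk_i(ts) from the device keys known to 𝒪, are illustrative assumptions.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define N_DEV   1024 /* number of devices n, illustrative      */
#define ATT_LEN 16   /* one attest: the encrypted timestamp ts */

/* Attestation report: device bitmap plus the XOR of all attests. */
typedef struct {
    uint8_t ids[N_DEV / 8]; /* bit k set <=> device k is included */
    uint8_t agg[ATT_LEN];   /* XOR of the contained attests       */
} report_t;

/* Assumed operator-side helper: recompute Enc_{dk_i}(ts) for device i. */
extern void enc_ts(uint32_t i, uint8_t out[ATT_LEN]);

/* Merge report b into report a: union of ids, XOR of aggregates
 * (device sets are disjoint along the spanning tree). */
void merge(report_t *a, const report_t *b)
{
    for (size_t k = 0; k < sizeof a->ids; k++) a->ids[k] |= b->ids[k];
    for (size_t k = 0; k < ATT_LEN; k++)       a->agg[k] ^= b->agg[k];
}

/* Operator-side check: valid iff the recomputed aggregate matches and
 * at least n/2 devices attested (honest majority). */
int verify(const report_t *r)
{
    uint8_t  expect[ATT_LEN] = {0}, a[ATT_LEN];
    uint32_t healthy = 0;

    for (uint32_t i = 0; i < N_DEV; i++) {
        if (!((r->ids[i / 8] >> (i % 8)) & 1)) continue;
        enc_ts(i, a);
        for (size_t k = 0; k < ATT_LEN; k++) expect[k] ^= a[k];
        healthy++;
    }
    return memcmp(expect, r->agg, ATT_LEN) == 0 && healthy > N_DEV / 2;
}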
§.§ Security Analysis
Intuitively, an attestation protocol is secure when the
network operator 𝒪 only testifies a healthy system state if not
a single device has been physically compromised. We refer to such
an attestation scheme as non-informative secure.
Moreover, an informative secure attestation protocol allows 𝒪 to
distinguish between healthy and compromised devices.
We follow the idea of Asokan et al. <cit.> and
prove the security of our protocol by an adversarial experiment
Exp(k). In this experiment, the adversary is given access
to a network of n initialized devices that execute
the heartbeat and attestation protocol.
The adversary can interact with all devices
according to the attacker model presented in <ref>.
Moreover, we assume any adversary to be computationally bound (PPT).
Hence, the adversary is able to interact a polynomial number of times k
with devices in the network (and the authenticated encryption scheme).
Furthermore, the adversary is allowed to trigger
and observe attestations by 𝒪.
After at most k interactions, a final attestation is initiated by 𝒪.
The output of Exp(k) is then a bit vector returned by 𝒪 after
verification of the final request. A bit vector with only zeros
indicates a compromised network, whereas every bit set to one indicates
a healthy device, cf. <ref>.
We capture the intuitive idea of secure attestation in
the following definition.
Secure Attestation Scheme. A network attestation scheme
for n devices is secure if
Pr[Exp(k) = 1^n] ≤ negl(k)
for any PPT adversary and 0 < c < n, where c is the number of compromised devices.
An attestation scheme is informative and secure if
Pr[Exp(k)[j] = 1] ≤ negl(k)
for any PPT adversary and every compromised
device j, where Exp(k)[j] is the j'th bit in the result vector
and the total number of compromised devices c is
less than n/2.
Note that the definition of a non-informative secure attestation scheme
is similar to the definition given in <cit.>, which is
defined without device identification in mind.
Security of Our Protocol
The security of our protocol is summarized
in Theorem <ref>.
Our protocol is an informative and secure attestation protocol when the
length δ of a heartbeat period is at most τ/2, assuming
security of the PRNG and an authenticated encryption
scheme that guarantees confidentiality (IND-CPA) and
authenticity (INT-CTXT).
In the following paragraphs,
we sketch a proof to show that our protocol is an informative and secure
attestation scheme.
The sketch is split in two parts. First, we sketch a proof for
Theorem <ref>, which formalizes the security
of the heartbeat protocol, before arguing the security of
the full protocol.
Any PPT adversary is unable to gain access to any heartbeat hb_t, which is used to
secure the communication in time period t, before
time period t+1, assuming δ < τ/2, security of the PRNG, secure channels between devices, and an authenticated encryption scheme that guarantees IND-CPA and INT-CTXT.
Intuitively, the security of the heartbeat protocol is achieved
by using an interactive protocol that requires the receiving device to
prove its knowledge about the current heartbeat to the sending device.
Only then, the next heartbeat is exchanged.
This active participation makes it impossible
for offline devices to follow the continuous `stream' of heartbeats.
Proof Sketch - Heartbeat
We observe that no two heartbeats are linked.
Hence, it is impossible to derive any hb_t
from hb_1, hb_2, …, hb_t-1 without breaking the security
of the PRNG. Moreover, assuming synchronized clocks, every healthy device stores
at most two heartbeats in any time period t, namely hb_t-1,
hb_t or hb_t, hb_t+1. When compromising a single device in time
period t and assuming an attack time of τ ≥ 2·δ,
the attack will be successful not earlier than in time period t+2.
The TEE of the compromised device will then leak
at most heartbeat hb_t+1, but no later heartbeats, as
these are not present in the TEE.
We observe that with any attack time τ < 2·δ, the adversary would be able
to compromise a device without missing a single heartbeat period, and thus
render the protocol insecure.
We show that the adversary is unable to gain access to the current heartbeat
by interacting with healthy devices without breaking the security
of the authenticated encryption scheme. During the heartbeat exchange,
all messages sent between two devices i and j are encrypted with a session
key that is the XOR of the pairwise channel key ck_ij and the current heartbeat hb_t at time t.
Thus, the session key is only known to i and j at time t.
We observe that with access to only one (or none)
of the two keys, the adversary is unable to create or to decrypt a message that
is accepted by i or j without breaking the INT-CTXT and
IND-CPA security of the encryption scheme. Hence, even when compromising
further devices and extracting (past) heartbeats, the adversary is unable to
decrypt any past or future communication between i and j, as it is missing the pairwise channel key ck_ij. Similarly, after compromising
a device and gaining access to all channel keys, the adversary is still
missing the current heartbeat to construct the session key, required to
interact with neighboring devices.
The same arguments hold for all messages sent between devices in the
aggregation protocol, since they are all encrypted using the pairwise session key.
Proof Sketch - Attestation
The attest of a single device i is
the encryption of the timestamp ts, issued by 𝒪, under i's
device key dk_i. Thus, the adversary is only able to forge an attest for
a healthy i with non-negligible probability when being able to
break the IND-CPA security of the encryption scheme.
Yet, to win the experiment, the adversary has to report at least n/2 (informative) or
n (non-informative) valid attests, while being allowed to only compromise up to c < n/2
or c < n devices. Consequently, since the adversary is unable to forge an attest for a healthy
device with non-negligible probability, it has to merge the attests of compromised
devices with attests created by healthy devices.
During the actual attestation protocol, two cases can be distinguished.
First, the device
i that 𝒪 approaches for the attestation
is compromised. In this case, the adversary can create an attestation report
for all compromised devices. However, without access to a valid heartbeat,
and thus session key,
the adversary can only create a valid attestation request message msg_att
with non-negligible probability when breaking the INT-CTXT security
of the encryption scheme. Hence, no healthy device will contribute an
attest.
Similarly, in the second case, where 𝒪 first approaches a
healthy device, the adversary is, for the same argument as described above,
unable to decipher or induce any message in the
attestation protocol between healthy devices.
Furthermore, the security of a XOR aggregation scheme, as used here,
is shown in <cit.>. Consequently,
our protocol is non-informative secure when only accepting
a complete aggregation report that includes the attests of all devices.
Furthermore, it is informative secure when accepting
reports with at least n/2 attests,
because attests can be attributed to their device id.
Finally, we remark that the `honest majority' assumption c < n/2 is required, as
otherwise a dishonest majority could fake a healthy system state.
§ PROTOCOL EXTENSIONS
In the following, we present three significant extensions to our protocol. First, we make the heartbeat transmission phase more robust against failures ( <ref>). Next, we extend the protocol to verify the integrity of the software on all devices in the network ( <ref>). Finally, we propose an extension that allows efficient attestation in highly dynamic and disruptive network topologies ( <ref>).
§.§ Leader Election Protocol
The leader election phase extends the heartbeat transmission phase, to make it more robust against failures. In particular, devices that fail to receive the current heartbeat elect a new leader device that takes over the tasks of the previous leader, i.e., the periodic emission of a new heartbeat. In this way, the heartbeat protocol is able to recover from device outages, network partitioning, or targeted denial of service attacks.
The leader election protocol is initiated by every device that fails to receive the heartbeat within a time δ_hb that is shorter than the heartbeat period (δ_hb < δ). Devices execute the leader election protocol inside their TEE and use the remaining leader election time δ_le = δ - δ_hb to determine the device with the smallest id, which then becomes the new leader device
(bully algorithm). For this purpose, devices initially generate their own heartbeat and then announce this heartbeat together with their device id to all neighboring devices. Devices store the smallest device id that they received in the leader election phase, including the corresponding heartbeat. Whenever a device updates its smallest received id and heartbeat, it broadcasts both to their neighboring devices. Thus, the new smallest id and heartbeat are quickly propagated in the network. A device recognizes itself as the new leader device, if it only receives messages from devices with higher device ids. Note that the original leader has the smallest id in the entire network, hence, the protocol also tolerates a return of the original leader. In Appendix <ref>, we formalize the leader election protocol, describe it in more detail, and demonstrate its security.
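A minimal C sketch of the election logic, run inside the TEE, is given below; the state type, the broadcast and RNG interfaces, and the message format are illustrative assumptions around the bully rule described above.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define KEY_LEN 16

/* Assumed TEE interfaces: rand_bytes draws from the protected RNG,
 * broadcast_le sends a leader-election announcement to all neighbors. */
extern void rand_bytes(uint8_t *buf, size_t n);
extern void broadcast_le(uint32_t id, const uint8_t hb[KEY_LEN]);

typedef struct {
    uint32_t min_id;      /* smallest device id seen so far      */
    uint8_t  hb[KEY_LEN]; /* heartbeat proposed by device min_id */
} election_t;

/* Entered when no heartbeat arrived within delta_hb: propose an own
 * heartbeat and announce the own id as leader candidate. */
void election_start(election_t *e, uint32_t own_id)
{
    e->min_id = own_id;
    rand_bytes(e->hb, KEY_LEN);
    broadcast_le(e->min_id, e->hb);
}

/* Called for every received announcement: the smaller id wins and is
 * re-broadcast, so the minimum spreads quickly through the network. */
void election_recv(election_t *e, uint32_t id, const uint8_t hb[KEY_LEN])
{
    if (id < e->min_id) {
        e->min_id = id;
        memcpy(e->hb, hb, KEY_LEN);
        broadcast_le(e->min_id, e->hb);
    }
}

/* When delta_le ends, a device is the new leader iff min_id == own id. */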
§.§ Attestation of Software Integrity
In order to attest the correct and safe operation of all devices in the network, it is crucial to ensure that devices are in a trustworthy software state, free from malicious or broken software. For this purpose, we propose that the network operator 𝒪 defines a set of trustworthy software states tss in the attestation request, when initiating an attestation of the network. Tss specifies all software configurations that are permitted by 𝒪, e.g., because they represent the correct and most recent software states. When devices perform the attestation protocol, they invoke the execution of a software integrity measurement function in their TEE. This function measures the integrity of installed software and compares these measurements to the reference values specified in tss. In this way, each device determines whether it is in a trustworthy or untrustworthy software state. Devices being in an untrustworthy software state immediately abort the attestation phase and instead execute a recovery routine that allows the device to restore a trustworthy software state, e.g., via secure code updates <cit.>. Since untrustworthy devices do not participate in the execution of the attestation protocol, 𝒪 receives a report which exclusively contains devices that are in a trustworthy software and uncompromised hardware state. In Appendix <ref>, we extensively explain changes that need to be done to the enrollment phase and the attestation protocol to enable such a hybrid attestation. Furthermore, we discuss the security of the extension.
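The following C sketch shows how such a measurement might be performed inside the TEE; the sha512 interface, the firmware region symbols, and the representation of tss as a list of reference digests are illustrative assumptions.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define HASH_LEN 64 /* SHA-512 digest length, as in the evaluation */

/* Assumed primitives: sha512 over the installed firmware image. */
extern void sha512(const uint8_t *data, size_t len, uint8_t out[HASH_LEN]);
extern const uint8_t *firmware_base;
extern const size_t   firmware_len;

/* Returns 1 iff the measured software state matches one of the
 * trustworthy states tss[0..n_tss-1] from the attestation request;
 * on 0, the device aborts attestation and runs its recovery routine. */
int software_trustworthy(const uint8_t tss[][HASH_LEN], size_t n_tss)
{
    uint8_t digest[HASH_LEN];
    sha512(firmware_base, firmware_len, digest);
    for (size_t k = 0; k < n_tss; k++)
        if (memcmp(digest, tss[k], HASH_LEN) == 0)
            return 1;
    return 0;
}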
§.§ Attestation of Dynamic Networks
Approach
The attestation protocol in ( <ref>) arranges a spanning tree, which allows for an efficient aggregation and transmission of the attestation report to the network operator 𝒪. This approach works efficiently as long as the network topology stays static during attestation, for instance, as devices in the network only move as a whole (herd mobility) or within local limits (micro-mobility). However, in dynamic network topologies with highly mobile devices and frequent link disruptions, it is impractical to maintain a spanning tree topology. In such networks, communication with a parent device could introduce a significant delay
or become highly inefficient, as the parent device could move away. Even worse, the parent device may be temporarily out of range and thus be disconnected from the network.
Therefore, instead of routing the attestation along a virtual topology, we propose a distributed (greedy) aggregation, where attestation reports are collected and aggregated by all devices in the network. Thus, after 𝒪 initiates the attestation protocol, each device first generates its own attestation report, stores this report, and broadcasts it to all neighboring devices. When a device receives an attestation report, it merges this report with its stored report. On observing new attests, the device broadcasts the updated report to all its neighboring devices. In this way, all devices in the network eventually store the same attestation report and 𝒪 can obtain the attestation result from an arbitrary device in the network.
To reduce the communication complexity, an aggregation scheme for the above mentioned approach must allow to merge multiple reports with intersecting attests into one. This requirement renders the aggregation function described in <ref> inapplicable, because its XOR operation risks the removal of intersections of attests from the aggregate.
Because of this and following the analysis of aggregation protocols in <ref>, we present a novel aggregation scheme for dynamic networks that is particularly tailored to the application scenario.
Secure & Efficient Attestation Report Aggregation
The scheme proposed here achieves statistical security and is slightly less powerful
than the spanning-tree aggregation scheme, as it allows an adversary to
compromise at most c < n/2 - s devices, with 2^-s being the statistical
security level.
In our scheme, an attestation report also consists of two parts, namely
the device description and the secure aggregate itself.
The device description is an n-bit vector where a bit is set for every
device included in the aggregate. The aggregate consists of an
n_s = (n+s)-bit vector, where a single bit indicates the attest of a
device. A device i that receives an attestation request with
timestamp ts creates its own attest using a collision-resistant
cryptographic hash function H by computing
a = H(dk_i || ts) in its TEE. Subsequently, i
sets a bit at position i in the device identifier as well as
a bit at offset compress(a) in the secure aggregate, where
compress is a function that reduces the hash value to an offset
within the n_s-bit vector. Note that compress does not need to be
cryptographically secure, but it should achieve a close to uniform
output distribution for uniformly distributed input.
All other bits in both vectors are set to 0.
In order to merge multiple attestation reports, a device
computes the bit-wise OR of all attestation reports. This can be done
very efficiently and allows aggregating reports with intersecting sets of
devices.
Both the secure aggregate and the list of device identifiers could be
compressed, for instance, by using a run-length encoding. Nevertheless,
even without compression, a very short attestation report
is achieved with a length of only 2n+s bits, e.g., 266 bytes for
1000 devices and a security level of s = 128 bits, which is a significant
improvement over a naïve concatenation of attests that requires more
than 16 kB. Even though the adversary has a good chance of guessing
a small number of attests correctly, the security of the scheme is based
on the hardness of guessing (at least) s attests correctly.
A detailed security analysis of this scheme is given in
Appendix <ref>.
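To make the scheme concrete, the following C sketch generates and merges the bit-vector reports; the parameters, the fold used for compress, and the function names are illustrative assumptions, with sha512 standing in for the collision-resistant hash H.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define N  1000        /* number of devices, illustrative       */
#define S  128         /* statistical security level s          */
#define NS (N + S)     /* length of the secure aggregate (bits) */

extern void sha512(const uint8_t *in, size_t len, uint8_t out[64]);

typedef struct {
    uint8_t ids[(N + 7) / 8];  /* device description bitmap */
    uint8_t agg[(NS + 7) / 8]; /* one bit per attest        */
} dyn_report_t;

static void set_bit(uint8_t *v, uint32_t pos) { v[pos / 8] |= 1u << (pos % 8); }

/* compress: fold the digest to an offset in [0, NS); it need not be
 * cryptographic, only close to uniform for uniform input. */
static uint32_t compress(const uint8_t h[64])
{
    uint64_t x = 0;
    for (int k = 0; k < 8; k++) x = (x << 8) | h[k];
    return (uint32_t)(x % NS);
}

/* Device i: create a fresh report containing only its own attest
 * a = H(dk_i || ts). */
void make_report(dyn_report_t *r, uint32_t i,
                 const uint8_t dk[16], const uint8_t ts[4])
{
    uint8_t buf[20], h[64];
    memset(r, 0, sizeof *r);
    memcpy(buf, dk, 16);
    memcpy(buf + 16, ts, 4);
    sha512(buf, sizeof buf, h);
    set_bit(r->ids, i);
    set_bit(r->agg, compress(h));
}

/* Merging is a plain bit-wise OR, so reports with intersecting
 * device sets can be combined safely. */
void merge_dyn(dyn_report_t *a, const dyn_report_t *b)
{
    for (size_t k = 0; k < sizeof a->ids; k++) a->ids[k] |= b->ids[k];
    for (size_t k = 0; k < sizeof a->agg; k++) a->agg[k] |= b->agg[k];
}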
§ EVALUATION
Next, we evaluate our protocol ( <ref>) and its three protocol extensions ( <ref>). In <ref>, we describe our setup, give details of the implementation, and present our measurements. Then, we report on our network simulation results for both static ( <ref>) and dynamic network topologies ( <ref>).
§.§ Implementation & Measurements
Setup
We implemented our protocol on Stellaris EK-LM4F120XL microcontrollers. The Stellaris is a low-cost embedded system from Texas Instrument which features an 80 MHz ARM Cortex-M4F microprocessor and 256 kB of Flash memory. To enable wireless mesh connectivity based on the ZigBee standard, we equipped the Stellaris microcontrollers with Anaren's CC2530 BoosterPacks.
Cryptographic Runtime Measurements
We implemented the hash function using SHA-512 and employed AES in Galois/Counter Mode (AES-GCM) as an authenticated encryption scheme. For the key exchange, we used Elliptic Curve Diffie-Hellman (ECDH) with Curve25519 <cit.>. Table 2 shows an excerpt of our cryptographic runtime measurements on the Stellaris microcontroller.
We would like to stress that our implementation is based on platform-independent and unoptimized C code.[We used SUPERCOP's ed25519 implementation (<https://ed25519.cr.yp.to/software.html>) and SharedAES-GCM (<https://github.com/mko-x/SharedAES-GCM>).] Assembler-optimized code for low-end embedded systems can improve the performance of cryptographic operations by orders of magnitude <cit.>. We presume that similar performance improvements are also possible on the Stellaris.
Network Runtime Measurements
For unicast messages between two neighboring devices in the mesh network, we measured an average throughput of 35.0 kbps on the application layer. Although the theoretical maximum throughput in ZigBee networks is 250 kbps, other performance evaluations revealed similar performance losses in reality <cit.>. In addition, we measured an average end-to-end delay between two neighboring devices of 13.5 ms with the smallest message size and 18.5 ms with the biggest message size.
Memory Costs
In our implementation, devices store their own ECDH key pair (64 bytes), the current and the next heartbeat (each 16 bytes), the leader device id (4 bytes), k secure channel keys and device ids (each 20 bytes), and a timestamp (4 bytes). The number k of stored secure channel keys can be adjusted to the particular memory requirements, since devices can establish channel keys right away by performing an ECDH key exchange with a neighboring device ( <ref>). Additionally, devices need to temporarily store data: the public key of a neighboring device (32 bytes) during key exchange, a session key (16 bytes) during heartbeat transmission, and the attestation report during attestation. The size of the attestation report is dependent on the number n of devices and the usage of the dynamic network extension. If it is used with a security level of 128 bit, the report amounts to n/4 + 16 bytes, if not, to n/8 + 16 bytes. However, as reports can be compressed using run-length encoding, their actual size is much lower for most devices in the network. In total, devices require 104 + k · 20 bytes of permanent storage and at most n/4 + 16 bytes of temporary storage.
§.§ Simulation Results for Static Networks
Setup.
We first evaluated our protocol in static network topologies, where all devices are connected and stationary. Thus, there are no link breaks or abrupt delays in the network communication. We used the simulation framework of <cit.> to simulate a homogeneous network of ten to multiple million Stellaris devices. Following the typical evaluation methodology of scalable attestation protocols <cit.>, we implemented our protocol on the application layer and used computational and network delays based on our measurements (see <ref>).
Heartbeat Protocol Runtime
We simulated the runtime of the heartbeat protocol in various topologies. Figure <ref> shows the runtime for a binary, 4-ary, and 8-ary tree topology with up to 550,000 devices, where the heartbeat leader device is located at the root of the tree. The figure demonstrates that in tree network topologies, protocol runtime increases logarithmically with the number of devices in the network. Under these conditions, the heartbeat protocol achieves an outstanding performance, requiring less than 2.3 seconds to reach 500,000 devices in an 8-ary tree and less than 1.7 seconds in a binary tree topology. Even with multiple million devices, runtime remains below 2 seconds in a binary tree topology. Only the first run of the heartbeat protocol in the network requires a little more time, since neighboring devices initially need to exchange public keys and perform key exchanges to establish shared secrets. Yet, even with the additional key exchanges, runtime remains below 5.1 seconds for more than 500,000 devices.
Attestation Protocol Runtime
We configured the attestation protocol to use the software attestation extension ( <ref>) and thus to attest the hardware and software state of all devices in the network. To verify the integrity of installed software, devices compute a SHA512 digest over a 30 kB software and compare the digest to an expected value that is specified in the attestation request.
For attestation we used the spanning tree attestation approach ( <ref>).
Figure <ref> shows the runtime for a binary and 8-ary tree topology with up to 550,000 devices, where the network operator is located at the root of the tree. Additionally, we varied the type of the attestation report, containing either the precise ids of healthy devices (solid lines) or the state of the overall network (dashed lines). The figure demonstrates that reporting precise device identifiers introduces a notable overhead. When reporting the overall network state, attestation runtime barely increases with the number of devices in the network, remaining below 2 seconds even for networks with multiple million devices in almost any tree topology. Yet, when reporting precise device ids, runtime increases to more than 152 seconds for 500,000 devices due to the large size of the attestation report, which increases proportionally with the network size. Nevertheless, we consider 2.5 minutes an acceptable timeframe to obtain a report that precisely lists which devices
are in a compromised state.
Communication Costs
During heartbeat transmission, all devices, except for the leader device, receive msg_new (1 byte), send msg_req (17 bytes), and receive msg_hb (17 bytes) to obtain the newest heartbeat, using a one byte message identifier.
If devices need to (re-)establish a secure channel key, they need to mutually exchange their public keys, which causes an additional message overhead of 32 bytes. For instance, in a binary tree topology, devices transmit in total 104 bytes, or 296 bytes with the initial key exchange, in each heartbeat transmission period.
During the execution of the attestation protocol, all devices receive one msg_V (17 bytes) or msg_att (17 bytes).
Also, devices send a msg_att to all neighbor devices that have not yet received msg_att and afterwards receive a msg_agg from them (≤ n/8 + 16 bytes). If the device's software integrity is attested ( <ref>), msg_V and msg_att contain the set of trustworthy software states tss, in our evaluation a 64 bytes hash digest. In short, assuming a binary tree topology and n = 1000 devices, during a run of the attestation protocol, each non-leaf device transmits at most 666 bytes and each leaf device 222 bytes.
Summary
We demonstrated that our protocol is highly efficient in static network topologies. In comparison to the previously best attestation protocol that is secure against physical attacks <cit.>, we reduce the number of transmitted messages per time period from 𝒪(n^2) to 𝒪(n).
To illustrate this advantage, in binary-tree topologies our approach is 27 times faster with 2000 devices and 3800 times faster with 500,000 devices when interpolating their results. The comparison already considers the fastest variant presented in <cit.>, which requires each device to store and manage n symmetric keys. In our protocol, devices must only store the keys of neighboring devices, e.g., 3 in a binary tree topology.
When attesting the state of the entire network, both protocols (<cit.> and ) show a runtime that scales logarithmically with n. Nevertheless, in contrast to <cit.>, also allows to determine the ids of compromised devices with low overhead even in larger networks.
§.§ Simulation Results for Dynamic Networks
Setup
We further evaluated our protocol in highly dynamic and disruptive networks to investigate its robustness in complex scenarios. To model device mobility, we randomly deployed devices in a 1000m x 1000m square area and applied a random waypoint mobility model, which is commonly used in the literature on absence detection <cit.>. Consequently, each device repeatedly selects a random speed as well as a random destination within the area and then moves towards the destination at the selected speed. The random device movement causes the network to be constantly partitioned, especially for sparse networks. In order to investigate effects like link disruptions, varying network delays, and signal interference that emerge due to the movement of devices, we modeled an 802.15.4 physical and medium access control layer using the corresponding simulator module. Modeling both layers as well as device mobility requires a lot of computational power. This is a known issue in MANET simulations, which leads to huge simulation runtimes <cit.>. For these reasons, we were only able to run simulations with a few hundred devices. Nevertheless, as we will show in this section, the main hurdle for our protocol is to perform well in sparse networks. Scalability of our approach in dense networks, where all devices are permanently interconnected, is shown in the previous section. In addition to the above mentioned simulation parameters, we set the wireless communication range to 50m (50% of the distance specified in the ZigBee standard), the device speed to a random value between 5 and 15 m/s, and the heartbeat as well as the leader election period to 2.5 minutes (detecting physical attacks that require more than 10 minutes).
Heartbeat Protocol Robustness
We investigated the robustness of the heartbeat protocol in worst cases, which are highly dynamic and disruptive network topologies. In particular, we examined the time until the protocol produces false positives, i.e., healthy devices that are regarded as physically compromised because they did not receive the heartbeat on time. Figure <ref> illustrates the average runtime of the heartbeat protocol until a certain amount of false positives occurs. The figure shows that the number of devices in the network has a vital influence on the robustness of the heartbeat protocol. Since devices move completely at random, the network must be sufficiently dense so that devices meet each other frequently enough to exchange the newest heartbeat on time. In fact, there is an exponential correlation between robustness and device density, which causes the average error-free heartbeat runtime to quickly increase from 2.4 days for 60 devices to weeks with more than 90 devices. To illustrate the sparseness in this scenario, 60 statically connected devices could cover at most 29% of the area and 90 devices 43.4%. Nevertheless, as shown by the boxplot, the runtimes of multiple simulation runs differ widely. This makes it hard to guarantee robustness for sparse network scenarios. Investigating these false positives, we identified the main cause in the random movement of devices. Commonly, a single device hides away, i.e., does not encounter other devices, and thus has no chance to receive the newest heartbeat on time. This cannot be prevented by faster computations or smaller communication delays in our protocol, but only by increasing the duration of the heartbeat phase.
We also observed that this hiding of a single device has barely any cascading effect on other devices. Hence, as shown in Figure <ref>, if a minimal amount of false positives is tolerated, significantly longer protocol runtimes are possible.
Next, we analyzed the effectiveness of the leader election extension by simulating an outage of the heartbeat leader device. Figure <ref> shows the largest fraction of devices that agreed on a common new leader with an increasing time interval for the leader election phase. It again illustrates the importance of the network density. In dense networks, leader election information can spread faster and thus reach more devices in shorter time. Nevertheless, even in relatively sparse networks with 60 devices, a time interval of 150 seconds is on average sufficient to let all functioning devices agree upon a new common leader. This also highlights our robustness against targeted DoS attacks, where an adversary attempts to disrupt the heartbeat protocol by breaking the heartbeat leader device.
Attestation Protocol Runtime
For the evaluation of the attestation protocol in dynamic networks, we used our dynamic attestation extension ( <ref>) with a statistical security level s of 128 bits. Figure <ref> shows a boxplot of the elapsed time between the emission of the attestation request and the moment when all devices in the network store the final attestation result, i.e., a report that contains all devices, for an increasing number of devices in the network. With an increasing network density, attestation reports spread faster and the overall attestation protocol runtime decreases. However, this effect is not as distinct as with the heartbeat protocol, where we observed an exponential correlation between protocol performance and network density. This is because, in contrast to the heartbeat protocol, where a single message is flooded in the network, each device must contribute a message, i.e., its individual attestation report, to a global attestation aggregate. Furthermore, the size of the attestation report increases proportionally with the network size, though it remains reasonably small for common network sizes (e.g., 2.5kB for 10,000 devices). Figure <ref> also shows that the runtime of the attestation protocol varies little. This guarantees that the final result is reached with high probability within a certain time frame, e.g., 5 minutes for 100 devices.
Communication Costs
Message costs in dynamic network topologies are, except for the attestation report, the same as in static network topologies ( <ref>). However, due to link failures, some messages are transmitted more often in dynamic topologies. In our simulations, we varied the network size between 40 and 100 devices and let devices actively poll their neighbors for the newest heartbeat after 10 seconds. Our results revealed that, depending on the network size, each device sends on average 19.4 to 21.0 msg_new (poll heartbeat) and 1.04 to 1.12 msg_req as well as msg_hb messages. Hence, in total, devices transmit on average 114 bytes in each heartbeat transmission phase. Compared to static network topologies ( <ref>), this is less than 10% communication cost overhead.
Nevertheless, because the attestation result is distributed to all devices, the actual attestation incurs considerably higher communication costs in disruptive networks. Conducting the same simulations as described above, we observed that each device exchanges on average 12.4 attestation reports in networks with 40 devices, 16.0 with 60 devices, 18.9 with 80 devices, and 21.5 with 100 devices. Each exchange requires a device to send one msg_att and receive one msg_agg. Note that the dynamic attestation report in msg_agg has a size of at most n/4 + 16 bytes. Thus, in total, devices transmit on average 1375 bytes in networks with 100 devices, which is 4.2 times more than in static network topologies.
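As a quick sanity check of these numbers, the stated report-size bound reproduces the figure quoted earlier for 10,000 devices. The following sketch assumes only the n/4 + 16 byte formula; the function name is ours:

```python
# Sanity check of the attestation report size bound stated above
# (the function name is ours; only the n/4 + 16 byte formula is assumed).

def report_size_bytes(n: int) -> int:
    """Maximum size of the dynamic attestation report for n devices."""
    return n // 4 + 16

print(report_size_bytes(10_000))  # 2516 bytes, i.e. the ~2.5 kB quoted above
print(report_size_bytes(100))     # 41 bytes per report in a 100-device network
```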
Summary
We showed that our heartbeat and attestation protocols are robust and efficient, even in highly partitioned and unpredictably changing network topologies. In fact, in an exemplary low-connectivity scenario with a maximum possible area coverage of 43% for randomly moving devices, the heartbeat protocol still runs on average 65 days without producing a single false positive with a heartbeat period of 10 minutes. We further illustrated the effectiveness of the leader election protocol by completely recovering networks from device outages in less than 130 seconds in the same setting. Finally, we demonstrated the robustness of our attestation protocol in dynamic networks and showed that its performance is dominated by network connectivity rather than by the protocol's communication complexity.
§ CONCLUSION & FUTURE WORK
We presented the first scalable attestation protocol for mesh-networked embedded devices that is resilient to physical attacks. Compared to existing solutions, our protocol reduces the number of transmitted messages per time period from 𝒪(n^2) to 𝒪(n), thus scaling to millions of devices and outperforming them by orders of magnitude. In addition to attesting the overall state of the network, our protocol is able to precisely identify devices that run compromised software or have been physically manipulated.
We demonstrated that our protocol is robust and efficient, even in very dynamic topologies, as it can perform an attestation or recover from device outages within minutes.
In future work we plan to investigate our protocol in specific network application scenarios, such as drone-based delivery systems or wireless sensor networks. Moreover, we want to make use of MANET simulators that are optimized for scalability and/or parallelism, in order to be able to simulate thousands of moving devices.
§ PROTOCOL EXTENSIONS
§.§ Leader Election Protocol
Heartbeat Transmission Extension
The leader election protocol is shown in Figure <ref> and extends the heartbeat transmission phase described in <ref>. We henceforth divide each time period T_1, T_2, T_3, ... of length δ into two phases: the heartbeat phase, whose length is δ_hb (formerly δ), and the leader election phase, whose length is δ_le = δ - δ_hb. Furthermore, we assume that the function phase(t) returns the constant hb if T_t ≤ T_clock < T_t + δ_hb, the constant le if T_t + δ_hb ≤ T_clock < T_t+1, and an error value otherwise.
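The phase logic fits in a few lines. The sketch below is ours and assumes periods of equal length; the names delta, delta_hb, "hb", and "le" follow the notation above:

```python
# Minimal sketch of the phase function; delta (period length), delta_hb, and
# the constants "hb"/"le" follow the notation above and are our assumptions.

def phase(t_clock: float, t: int, delta: float, delta_hb: float) -> str:
    """Return the protocol phase at wall-clock time t_clock within period T_t."""
    period_start = t * delta                     # T_t for equal-length periods
    if period_start <= t_clock < period_start + delta_hb:
        return "hb"                              # heartbeat phase
    if period_start + delta_hb <= t_clock < period_start + delta:
        return "le"                              # leader election phase
    return "error"                               # t_clock lies outside T_t
```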
Every device i that did not receive a heartbeat within δ_hb, indicated by phase(t) = le, will generate its own heartbeat hb^i, set the current leader device id to its own id (min^i ← i), and update its time pointer by one. In a next step, i informs its neighbors about the new heartbeat with a message msg_le.
Two devices that already initialized the leader election phase negotiate the heartbeat as follows. First, a leader election request message msg_le_req is generated by j; it contains a session key to secure the remaining communication. Then, i sends the smallest received device id min^i and the corresponding heartbeat to j. Initially, these are i's own id and its own generated heartbeat. Device j will then compare its previous smallest id min^j with the just received id. If min^i < min^j, j will update min^j to min^i and set hb^j to hb^i. Finally, j will inform i of the result of the comparison, which is also stored by i. Both devices will then continue to further broadcast the new heartbeat. The protocol terminates implicitly once the smallest device id has been identified.
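The negotiation reduces to a pairwise minimum computation. A minimal sketch could look as follows; the data layout and names are ours, and session-key handling and encryption are omitted:

```python
# Sketch of one leader election exchange between devices i and j; the dict
# layout and field names are ours, and message protection is omitted.

def negotiate(dev_i: dict, dev_j: dict) -> None:
    """i sends (min_id, heartbeat) to j; j keeps the smaller id and replies."""
    if dev_i["min_id"] < dev_j["min_id"]:
        dev_j["min_id"] = dev_i["min_id"]        # j adopts i's candidate
        dev_j["hb"] = dev_i["hb"]
    else:
        dev_i["min_id"] = dev_j["min_id"]        # result reported back to i
        dev_i["hb"] = dev_j["hb"]

# Example: gossiping until every device stores the smallest id.
devices = [{"min_id": i, "hb": f"hb_{i}"} for i in (3, 1, 2)]
negotiate(devices[0], devices[1])
negotiate(devices[1], devices[2])
assert devices[2]["min_id"] == 1
```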
We note that a leader who is absent during the heartbeat phase can rejoin by participating in the leader election phase. In <ref>, we analyze the effectiveness of the leader election protocol.
Security
The leader election protocol uses the same two-key mechanism as the original heartbeat protocol to secure all messages, i.e., the session key constructed from the heartbeat and the channel key. This makes it impossible for an adversary to synthesize or decrypt a message that is accepted or sent by healthy devices; otherwise, the adversary could break the INT-CTXT or IND-CPA security of the encryption scheme. Hence, the actual leader election process can only be hindered, yet not controlled, by the adversary.
§.§ Attestation of Software Integrity
To achieve a secure attestation of hardware and software, the following extensions to the protocol are required:
Enrollment Phase Extension
In the enrollment phase, the network operator selects an arbitrary software integrity measurement function and stores its implementation in the TEE of each device 1, ..., n in the network. Traditionally, such mechanisms measure the integrity of a software by computing a hash value over its binary code <cit.>, though recent approaches are also able to measure the runtime behavior of a software for the purpose of detecting sophisticated code-reuse attacks <cit.>. In the following, we abstract from these implementation details and use the measurement function as a black box that takes an input i, e.g., a description of what to measure, and generates a measurement m, which represents the current software state of a device.
Attestation Protocol Extension
Before invoking the attestation protocol, 𝒪 specifies a set of trustworthy software states tss. Tss consists of multiple (input, measurement) pairs (tss = {(i_1, m_1), (i_2, m_2), ..., (i_x, m_x)}) and a description of which network device should use which input (e.g., devices of type 1 should use i_1, etc.). A pair (i_k, m_k) in tss indicates that the expected measurement for the input i_k is m_k. In this way, tss specifies all measurements that are permitted by 𝒪, e.g., because they represent the correct and most recent software states.
During the execution of the attestation protocol, tss is distributed to all devices in the network. For this purpose, 𝒪 initially incorporates tss into its attestation request message, and devices in turn forward tss to neighboring devices by incorporating it into msg_att. Afterwards, each device i measures its local software configuration by extracting its appropriate (i_k, m_k) pair and executing the measurement function with the input i_k in its TEE. Subsequently, i checks whether the output generated by the measurement function matches m_k and, if this is the case, continues with the execution of the attestation protocol, as explained in Section <ref>. If both values do not match, i invokes a recovery routine, which allows the device to restore a trustworthy state by performing a secure code update protocol with 𝒪. Executing this extended attestation protocol, 𝒪 receives a msg_res that only contains ids of devices that are in a trustworthy software and uncompromised hardware state. Note that the protocol could easily be extended further to precisely report devices that are in an untrustworthy software but uncompromised hardware state, e.g., by introducing an additional msg_agg_s and msg_res_s that is specifically generated and aggregated by untrustworthy devices and transmitted to 𝒪.
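The local check performed by each device is essentially a table lookup followed by a comparison. A sketch, with names of our choosing, under the assumption that tss maps inputs i_k to expected measurements m_k:

```python
# Sketch of the local software-state check against tss (names are ours);
# in the protocol this logic runs inside the device's TEE.

def check_software_state(tss: dict, i_k: bytes, measure) -> bool:
    """Return True iff measure(i_k) matches the expected m_k stored in tss."""
    return measure(i_k) == tss[i_k]

# A device that fails this check would invoke the recovery routine and
# perform a secure code update with the network operator.
```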
Security
The security of the protocol extension results from the security of the main protocol ( <ref>), the secure hardware properties ( <ref>), and the adversary model ( <ref>). Since the protocol extension is executed in the TEE of devices, malware is unable to tamper with the protocol code, execution, or any stored protocol data (e.g., secret keys). Thus, an adversary who compromises the software of a device is only able to prevent protocol execution or to manipulate the input/output to/from the protocol. However, preventing protocol execution has no influence, since untrustworthy devices stop executing the attestation protocol anyway. Manipulating the input or output to or from the protocol has no effect, as all inputs and outputs are secured using authenticated encryption with secrets that are only accessible within the TEE. Additionally, all inputs and outputs depend on a session-specific timestamp ts issued by 𝒪. Therefore, replay attacks are likewise worthless. These measures also prevent Dolev-Yao network adversaries from compromising security. By contrast, a physical attacker is able to tamper with the protocol code, data, or execution. However, as explained in the security analysis of the main protocol ( <ref>), a physical attacker is unable to obtain the current heartbeat, which is required to participate in the attestation protocol or the heartbeat protocol.
§.§ Efficient Attestation Report Aggregation
Security As already shown in the security analysis of the aggregation protocol ( <ref>), the adversary is unable to exchange any message with healthy devices during attestation. This argument also holds for the efficient aggregation scheme, as only the aggregation inside the TEE is modified and not the protocol itself. Consequently, the security of the efficient aggregation scheme depends on the hardness of forging the attestation report itself. An attestation report is accepted if at least n/2 valid attests are contained in the report. By assumption, the adversary is only allowed to compromise up to c < n/2 - s devices and can thus compute only up to c valid attests. The remaining n/2 - c attests have to be guessed by the adversary. The security of our aggregation scheme is formalized in Theorem <ref>.
Assuming collision resistance of the hash function, any PPT adversary compromising up to c < n/2 - s devices can successfully forge an efficient attestation report that is accepted by 𝒪 with probability at most 2^-s, for any n > 2·s.
Proof Sketch The attestation report consists of two bit vectors: the first vector annotates the devices included in the network and the second vector annotates the actual attests (each attest is a single bit in the attest vector). To successfully include one additional attest into the report, the adversary has to set an additional bit in the device vector and to guess the correct bit in the attest vector. A single mismatch between the aggregate computed by 𝒪 and the reported aggregate results in a rejection of the attestation report. As the position of an attest bit for a single device is computed by hashing a device-specific key together with the timestamp ts, we observe that the adversary could break the collision resistance of the hash function if it achieved a non-negligible advantage in guessing an attest bit correctly without access to the device key. Assuming a uniform distribution of the attest bit, the adversary will guess its position correctly with probability 1/n_s. However, the adversary can follow a better strategy than randomly guessing all positions of the n/2 - c bits that are required for a valid attestation report. We note that, due to the relatively small set of bit positions, collisions between multiple devices are likely. The best strategy the adversary can follow is thus to aim for collisions with the c bits that it can set correctly in the attest vector. A collision with any of these attest bits occurs with probability of at most c/n_s (collisions within the attests of compromised devices are also possible). With this strategy, the adversary can achieve a winning probability of at most (c/n_s)^(n/2 - c). We observe that c/n_s ≤ (n/2 - s)/(n + s) ≤ 1/2 and, by assumption, n/2 - c ≥ s; thus, the adversary wins the game with probability less than 2^-s.
We remark that, for the sake of technical simplicity of the proof, the attest vector is set to a fixed length n_s = n + s. This is required to make it a hard task for the adversary to guess the zero bits in the attest vector for smaller n when setting all bits in the device vector. For larger n, n_s could be chosen smaller than n + s.
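The two-bit-vector structure can be made concrete with a toy model. Everything below (the hash choice, key handling, and all names) is our illustration of the idea, not the actual implementation:

```python
import hashlib

# Toy model of the efficient attestation report: a presence bit per device
# plus one attest bit at a keyed, timestamp-dependent position in a vector
# of fixed length n_s = n + s. All names and the hash choice are ours.

def attest_bit(device_key: bytes, ts: bytes, n_s: int) -> int:
    """Pseudo-random position of a device's attest bit."""
    digest = hashlib.sha256(device_key + ts).digest()
    return int.from_bytes(digest, "big") % n_s

def aggregate(contributions, n: int, s: int):
    """Fold per-device contributions into (device_vector, attest_vector)."""
    n_s = n + s
    device_vec = [0] * n
    attest_vec = [0] * n_s
    for dev_id, key, ts in contributions:
        device_vec[dev_id] = 1
        attest_vec[attest_bit(key, ts, n_s)] = 1
    return device_vec, attest_vec

# The operator recomputes both vectors from the claimed device set and its
# knowledge of the device keys; a single mismatch rejects the report.
```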
|
http://arxiv.org/abs/1702.02020v3 | 20170127081446 | Einstein's Geometrical Versus Feynman's Quantum-Field Approaches to Gravity Physics: Testing by Modern Multimessenger Astronomy | [ "Yurij Baryshev" ] | physics.gen-ph | [ "physics.gen-ph" ] |
http://arxiv.org/abs/1701.07942v3 | 20170127044149 | Seiberg-Witten monopoles with multiple spinors on a surface times a circle | [ "Aleksander Doan" ] | math.DG | [ "math.DG" ] |
Seiberg–Witten monopoles with multiple spinors on a surface times a circle
The Seiberg–Witten equation with multiple spinors generalises the classical Seiberg–Witten equation in dimension three. In contrast to the classical case, the moduli space of solutions can be non-compact due to the appearance of so-called Fueter sections. In the absence of Fueter sections we define a signed count of points in the moduli space and show its invariance under small perturbations. We then study the equation on the product of a Riemann surface and a circle, describing the moduli space in terms of holomorphic data over the surface. We define analytic and algebro-geometric compactifications of the moduli space, and construct a homeomorphism between them. For a generic choice of circle-invariant parameters of the equation, Fueter sections do not appear and the moduli space is a compact Kähler manifold. After a perturbation it splits into isolated points which can be counted with signs, yielding a number independent of the initial choice of the parameters. We compute this number for surfaces of low genus.
Aleksander Doan
December 30, 2023
=====================
§ INTRODUCTION
This article addresses the questions of transversality and compactness for the moduli spaces of Seiberg–Witten multi-monopoles: solutions of the Seiberg–Witten equation with multiple spinors introduced in <cit.>. We begin by considering the equation on an arbitrary closed Riemannian spin three-manifold Y but the majority of the paper concerns three-manifolds of the form Y = S^1 ×Σ for a closed Riemann surface Σ.
§.§ Instantons in higher dimensions
The motivation for studying generalised Seiberg–Witten equations comes from higher-dimensional gauge theory. We only briefly outline this relationship as it will not be essential for understanding the results presented here. Donaldson, Thomas, and Segal initiated a programme of defining an enumerative invariant of Riemannian 7–manifolds with holonomy contained in the exceptional Lie group G_2 <cit.>. The putative invariant counts, in the sense of Fredholm differential topology, G_2–instantons: a class of Yang–Mills connections solving a 7–dimensional analogue of the anti-self-duality equation. A sequence of G_2–instantons can concentrate along a three-dimensional associative submanifold Y, developing a bubble singularity in a way familiar from the four-dimensional theory <cit.>. This phenomenon causes the moduli space of G_2–instantons to be non-compact. In particular, for a family (g_t)_t∈[0,1] of G_2–metrics, the cobordism between the moduli spaces of G_2–instantons with respect to g_0 and g_1, given by the one-parameter moduli space, may fail to be compact. If this is the case, the count of G_2–instantons with respect to g_t jumps as t varies, and is not a deformation invariant. Haydys and Walpuski <cit.> proposed to compensate for such jumps by adding the number of Seiberg–Witten multi-monopoles on Y, counted with signs. This number itself is not a topological invariant of Y; it can jump when the Riemannian metric on Y and other parameters of the equation are deformed. The point is that such jumps should occur exactly when a G_2–instanton is created or destroyed along Y. It is conjectured that a combination of the two numbers: the count of G_2–instantons and that of multi-monopoles summed over all associative submanifolds is invariant under deformations.
The case Y = S^1 ×Σ discussed in this article is particularly relevant to G_2–manifolds of the form S^1 × X for a Calabi–Yau three-fold X containing Σ as a holomorphic curve. G_2–instantons over S^1 × X correspond to Hermitian–Yang–Mills connections on X and so one expects that there is a relationship between multi-monopoles on S^1 ×Σ and Hermitian–Yang–Mills connections on X whose energy is highly concentrated around Σ. From this perspective, our work can be seen as the first step towards a gauge-theoretic interpretation of local Donaldson–Thomas invariants in algebraic geometry <cit.>.
§.§ Multi-monopoles on three-manifolds
Let Y be a closed three-manifold and let E → Y and L → Y be vector bundles with structure groups SU(n) and U(1) respectively. Fix an SU(n)–connection B on E. A spin structure on Y and a choice of a Riemannian metric endow Y with the spinor bundle S: a rank two Hermitian bundle with trivial determinant line bundle and a compatible connection. The Seiberg–Witten equation with n spinors is the following differential equation for a pair (A,Ψ) of a connection on L and a section of Hom(E, S ⊗ L):

D_ABΨ = 0,
F_A = ΨΨ^* - 1/2 |Ψ|^2.

Here, D_AB is the Dirac operator on Hom(E, S ⊗ L) twisted by the connections A and B, F_A is the curvature two-form, and in the second equation we identify imaginary-valued two-forms with traceless skew-Hermitian endomorphisms of S using the Clifford multiplication.
For n=1 this is the standard Seiberg–Witten equation in dimension three. As in the classical setting, one introduces the moduli space of gauge-equivalence classes of solutions which depends on the choice of the parameters of the equation: the Riemannian metric on Y, the connection B, and a closed two-form used to perturb the equation to guarantee transversality.
Denote by 𝒫 the space of all parameters of the equation as described above, and for 𝐩 ∈ 𝒫 let ℳ(𝐩) be the corresponding moduli space of solutions to the Seiberg–Witten equation with n spinors; see section <ref>, in particular Definition <ref>, for more details.
The dependence on 𝐩 does not play an important role for n=1.
In that case, for a generic[By generic we mean: from a residual subset of the space of objects in question. A subset of a topological space is residual if it contains a countable intersection of open and dense subsets. Baire's theorem asserts that a residual subset of a complete metric space is dense. Unless said otherwise, we use the C^∞–topology. ] choice of 𝐩, the moduli space of irreducible solutions is an oriented, compact, zero-dimensional manifold, that is: a finite collection of points equipped with signs. If b_1(Y) > 1, there are no reducible solutions and the moduli spaces for two different choices of 𝐩 are connected through an oriented, compact, one-dimensional cobordism. As a consequence, the signed count of points in ℳ(𝐩), for any generic 𝐩, is a topological invariant of Y <cit.>.
A new feature of the equation for n ≥ 2 is that ℳ(𝐩) may be non-compact for some choices of 𝐩. Building on work of Taubes <cit.>, Haydys and Walpuski <cit.> showed that a sequence of points in ℳ(𝐩) which does not have any convergent subsequence can be rescaled so that it converges in an appropriate sense to a Fueter section, a section of a fibre bundle over Y satisfying a non-linear analogue of the Dirac equation. We review Haydys and Walpuski's compactness theorem in subsection <ref>; Fueter sections are introduced in Definition <ref>. An important point is that the differential equation obeyed by a Fueter section, just like the Seiberg–Witten equation itself, depends on the parameter 𝐩.
As a result, Fueter sections may or may not exist depending on the choice of 𝐩 ∈ 𝒫.
Denote by 𝒫_c ⊂ 𝒫 the set of all parameters 𝐩 ∈ 𝒫 for which the moduli space ℳ(𝐩) is compact. Haydys and Walpuski's theorem and Corollary <ref> imply that 𝒫_c contains an open neighbourhood of the set of all 𝐩 ∈ 𝒫 for which no Fueter sections exist.
Our first result, proved in section <ref>, shows that the existence of Fueter sections is the only obstruction to defining a signed count of multi-monopoles.
Let Y be a closed oriented spin three-manifold with b_1(Y) > 1. Fix vector bundles E→ Y and L→ Y with structure groups SU(n) and U(1) respectively.
* There exists a locally constant function n : 𝒫_c → ℤ such that if ℳ(𝐩) is Zariski smooth with the obstruction bundle 𝒪 → ℳ(𝐩), then n(𝐩) equals the integral of the Euler class of 𝒪 over ℳ(𝐩); see Definitions <ref>, <ref>, and <ref>.
* For a generic 𝐩 ∈ 𝒫_c the moduli space ℳ(𝐩) is a compact, zero-dimensional manifold equipped with a natural orientation described in subsection <ref>. In particular, in this case n(𝐩) equals the signed count of points in ℳ(𝐩).
At present, little is known about the set 𝒫_c and the existence of Fueter sections. The main difficulty in understanding Fueter sections lies in the fact that they are defined only on the complement of a singular set—which is known to be closed and of Hausdorff dimension at most one <cit.>—and thus are not amenable to standard elliptic theory. Conjecturally, 𝒫_c is open and dense in 𝒫 and its complement has codimension one <cit.>, a prediction strongly supported by recent work of Takahashi <cit.>. If this is the case, 𝒫_c has multiple connected components on which the function n : 𝒫_c → ℤ is constant. These components are separated by codimension-one walls on which Fueter sections appear, and the value of n jumps as we cross one of them. It is exactly this jumping phenomenon that indicates a relationship between the enumerative theories for multi-monopoles and G_2–instantons.
In view of Theorem <ref>, the central problem in the study of multi-monopoles on three-manifolds—one of importance to the applications in higher-dimensional gauge theory—is that of the existence and properties of Fueter sections. We make some progress on this problem by describing the moduli spaces of multi-monopoles and Fueter sections for a particular class of parameters 𝐩. In particular, we exhibit the first examples of Y and 𝐩 such that
* 𝐩 ∈ 𝒫_c and ℳ(𝐩) is non-empty, compact, and consists of irreducible solutions,
* 𝐩 ∉ 𝒫_c and ℳ(𝐩) contains sequences of solutions converging to a Fueter section,
* there exist Fueter sections whose singular sets are non-empty [In fact, more is true: these Fueter sections do not arise from everywhere defined harmonic spinors and so are examples of singular harmonic ℤ_2 spinors as in <cit.>.],
* there exist Fueter sections that do not appear in the compactification of ℳ(𝐩).
Since the first version of this paper appeared, Theorem <ref> has been used to study the wall-crossing phenomenon for multi-monopoles and to prove the existence of Fueter sections, for some choice of 𝐩, on all closed spin three-manifolds with b_1 > 1 <cit.>.
§.§ Gauge theory on Riemann surfaces
The examples of Y and 𝐩 mentioned in the four bullet points above are constructed by means of dimensional reduction. We consider three-manifolds of the form Y = S^1×Σ equipped with a spin structure induced from a spin structure on Σ. The bundles E and L are assumed to be pulled back from bundles on Σ, and in the space of all parameters of the equation we distinguish the subspace 𝒫_Σ consisting of the parameters pulled back from Σ; see Definition <ref>.
For n=1 the classical Seiberg–Witten equation on S^1 ×Σ reduces to the vortex equation on Σ and the moduli space of solutions is homeomorphic to a symmetric product of Σ <cit.>. Similarly, in section <ref> we prove that for 𝐩 ∈ 𝒫_Σ all irreducible solutions of the Seiberg–Witten equation with multiple spinors are pulled back from solutions of a generalised vortex equation on Σ. In fact, we prove this for a much broader class of three-dimensional Seiberg–Witten equations associated with representations; see Theorem <ref>.
In section <ref> we show that solutions of the dimensionally reduced equation correspond to triples of the form (ℒ, α, β), where ℒ→Σ is a holomorphic line bundle and α, β are holomorphic sections of certain holomorphic vector bundles over Σ twisted by ℒ. This is a version of the Hitchin–Kobayashi correspondence, following earlier work of Bryan and Wentworth <cit.>. We construct the moduli space ℳ_hol(𝐩) of isomorphism classes of such triples. In section <ref> we introduce also its natural complex-geometric compactification ℳ̄_hol(𝐩), which is a compact complex analytic space containing ℳ_hol(𝐩) as a Zariski open dense subset. In parallel, guided by an enhanced version of Haydys and Walpuski's compactness theorem <cit.>, we define also a gauge-theoretic compactification ℳ̄(𝐩) of the Seiberg–Witten moduli space. In what follows we assume for simplicity that rk E = 2.
For every 𝐩 ∈ 𝒫_Σ there is a homeomorphism ℳ̄(𝐩) → ℳ̄_hol(𝐩) restricting to an isomorphism of real analytic spaces ℳ(𝐩) → ℳ_hol(𝐩).
The gauge-theoretic compactification ℳ̄(𝐩) is a compact real analytic space containing ℳ(𝐩) as a Zariski open dense subset.
It is desirable to define ℳ̄(𝐩) and prove Corollary <ref> without the assumptions Y = S^1 ×Σ and 𝐩 ∈ 𝒫_Σ. Even though ℳ(𝐩) is expected to be compact for a generic choice of 𝐩 ∈ 𝒫, one needs a good notion of a compactification in order to study generic one-parameter families of moduli spaces and the wall-crossing phenomenon <cit.>.
We hope that the construction of ℳ̄(𝐩) and the analysis constituting the proof of Theorem <ref> offer some guidance in the study of compactness for arbitrary three-manifolds.
The holomorphic description of Fueter sections allows us to prove that a generic point of 𝒫_Σ belongs to the set 𝒫_c introduced earlier—this is the main result of sections <ref> and <ref>.
Let Σ be a closed spin surface of genus g(Σ)≥ 1. Suppose that Y= S^1×Σ is equipped with a spin structure induced from the spin structure on Σ, and that the bundles E and L are pulled back from bundles over Σ. Let d = ⟨ c_1(L), [Σ] ⟩ be the degree of L.
For a generic choice of 𝐩 ∈ 𝒫_Σ there exist no Fueter sections with respect to 𝐩, and the moduli space ℳ(𝐩) is a compact Kähler manifold of complex dimension
dim_ℂ ℳ(𝐩) = g(Σ) - 1 ± 2d,
where the sign depends on 𝐩.
In this case, the signed count of Seiberg–Witten multi-monopoles is, up to a sign, the Euler characteristic of the moduli space:
n(𝐩) = (-1)^g(Σ)-1 χ(ℳ(𝐩)).
Moreover, n(𝐩) does not depend on the choice of a generic 𝐩 ∈ 𝒫_Σ.
We should stress that here 𝐩 is generic in 𝒫_Σ (parameters pulled back from Σ) but not in 𝒫 (all parameters on Y). In particular, ℳ(𝐩) may have positive dimension even though the expected dimension is zero. In other words, the moduli space is Zariski smooth and obstructed as in Theorem <ref>. This is familiar from the study of gauge-theoretic invariants of complex surfaces <cit.>, <cit.>, <cit.>.
The main idea behind the proof of Theorem <ref> is to interpret the existence of Fueter sections as a Fredholm problem. One difficulty in doing so stems from the possible non-smoothness of the singular sets of Fueter sections. However, an argument using a Weitzenböck formula shows that in fact such a set is a finite collection of circles of the form S^1 ×{point}. We then show that these singularities can be removed—in an appropriate sense—so that every Fueter section gives rise to a globally defined holomorphic object. As a result, the deformation theory of Fueter section is described by a complex-linear Fredholm operator of non-positive index. Using this, we prove that Fueter sections appear in real codimension at least two and can be avoided in a generic one-dimensional family of parameters in _Σ.
Finally, in section <ref> we use complex-geometric methods and classical results on stable vector bundles on Riemann surfaces <cit.> to describe ℳ(𝐩) and compute n(𝐩) in some cases. The generic properties of the moduli spaces are summarised in Theorem <ref>. We also study some non-generic cases, which provide us with examples of Fueter sections and non-compact moduli spaces; see Examples <ref>, <ref>, Proposition <ref>, and Remark <ref>.
For a genus two surface Σ the space of parameters 𝒫_Σ contains a copy of ℙ^3, thought of as the moduli space of semi-stable SL(2,ℂ)–bundles on Σ. A point 𝐩 in ℙ^3 corresponds to a plane H_𝐩 in the dual projective space (ℙ^3)^*. Let T^4/ℤ_2 be the singular Kummer surface in (ℙ^3)^*. For a generic choice of 𝐩 ∈ ℙ^3, the plane H_𝐩 intersects T^4/ℤ_2 transversely and misses all of the 16 singular points of T^4/ℤ_2, so that (T^4/ℤ_2) ∩ H_𝐩 is a smooth genus three surface. The moduli space ℳ(𝐩) is then the preimage of this intersection under the double covering T^4 → T^4/ℤ_2. It is a genus five Riemann surface, and so n(𝐩) = (-1)^g(Σ)-1 χ(ℳ(𝐩)) = -(2 - 2·5) = 8, in agreement with Theorem <ref>. When 𝐩 is chosen so that H_𝐩 passes through one of the singular points of T^4/ℤ_2, the moduli space is necessarily singular and non-compact.
§.§ Acknowledgements
The work presented here is part of my PhD thesis at Stony Brook University. I am most grateful to my advisor Simon Donaldson for his guidance, support, and encouragement. Discussions with Thomas Walpuski and Andriy Haydys have contributed greatly to this paper.
I thank also Aliakbar Daemi, Cristian Minoccheri, Vicente Muñoz, Benjamin Sibley, David Stapleton, Ryosuke Takahashi, Alex Waldron, Malik Younsi, and two anonymous referees for sharing their insights and helping me improve this article.
My research is supported by the Simons Collaboration on Special Holonomy in Geometry, Analysis, and Physics (https://sites.duke.edu/scshgap/).
§.§ Notation and conventions
Λ^p Y = Λ^p T^*Y exterior product of the cotangent bundle
Γ(Y,V) or Γ(V) sections of a vector bundle V → Y
Ω^p(Y,V) or Ω^p(V) differential forms with values in V
𝒜(Y,V) or 𝒜(V) connections on V compatible with the structure group
Cl(α) or α · Clifford multiplication by a form α∈Λ^*Y
⟨·, ·⟩ Euclidean inner product
(·, · ) Hermitian inner product
g(Σ) genus of a surface Σ
J^d Jacobian of degree d holomorphic line bundles
When it is not likely to cause confusion, we will use the same notation for a principal (n) or (n) bundle and the associated rank n Hermitian vector bundle. We use the following sign convention for the Clifford multiplication: under the identification of the spinor space of ^3 with the quaternions ℍ, with the complex structure given by right-multiplication by i, the action of e^1, e^2, e^3 is given by left-multiplication by i, j, k respectively.
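The sign convention can be verified numerically. The following sketch is ours: it represents left and right quaternion multiplication by 4×4 real matrices and checks that the action of e^1, e^2, e^3 satisfies the Clifford relations and commutes with the complex structure:

```python
import numpy as np

# Numerical check of the Clifford convention above: left multiplication by
# i, j, k on the quaternions squares to -1, anticommutes pairwise, and
# commutes with the complex structure (right multiplication by i).

def qmul(a, b):
    """Quaternion product; a = (a0, a1, a2, a3) means a0 + a1 i + a2 j + a3 k."""
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0*b0 - a1*b1 - a2*b2 - a3*b3,
        a0*b1 + a1*b0 + a2*b3 - a3*b2,
        a0*b2 - a1*b3 + a2*b0 + a3*b1,
        a0*b3 + a1*b2 - a2*b1 + a3*b0,
    ])

I, J, K = np.eye(4)[1], np.eye(4)[2], np.eye(4)[3]
L = [np.column_stack([qmul(q, e) for e in np.eye(4)]) for q in (I, J, K)]
R_i = np.column_stack([qmul(e, I) for e in np.eye(4)])   # complex structure

for a in range(3):
    assert np.allclose(L[a] @ L[a], -np.eye(4))           # e^a . e^a = -1
    assert np.allclose(L[a] @ R_i, R_i @ L[a])            # complex-linearity
    for b in range(a):
        assert np.allclose(L[a] @ L[b] + L[b] @ L[a], 0)  # Clifford relation
```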
§ SEIBERG–WITTEN MONOPOLES WITH MULTIPLE SPINORS
In this section we introduce the Seiberg–Witten equation with multiple spinors, discuss transversality and orientations, and review the compactness theorem of Haydys and Walpuski.
The main results are Propositions <ref> and <ref> which lead to the proof of Theorem <ref>.
§.§ The Seiberg–Witten equation with multiple spinors
Let (Y,g) be a compact, connected, oriented Riemannian three-manifold equipped with a spin structure. Let S → Y be the spinor bundle, E → Y an SU(n)–bundle and L → Y a U(1)–bundle. Fix a connection B ∈ 𝒜(E) and a closed two-form η ∈ Ω^2(Y, iℝ).
The Seiberg–Witten equation with multiple spinors with parameters (g,B,η)[We do not need to assume dη = 0 in order to write down equation (<ref>) but Lemma <ref> below and the Bianchi identity imply that there are no solutions unless dη = 0.] is the following differential equation for a pair (A,Ψ) ∈ 𝒜(L) × Γ(Hom(E, S ⊗ L)):
D_ABΨ = 0,
F_A = μ(Ψ) + η.
Here, F_A ∈ Ω^2(iℝ) is the curvature and μ is the quadratic map
μ(Ψ) := ΨΨ^* - 1/2 |Ψ|^2 id,
with Ψ^* ∈ Γ(Hom(S⊗L, E)) being the adjoint of Ψ.
Using the natural isomorphism L ⊗ L^* ≅ ℂ, we consider the composition ΨΨ^* as a section of End(S).
It actually takes values in the subspace i𝔰𝔲(S) of trace-free self-adjoint endomorphisms, which in the equation we implicitly identify with iΛ^2 Y using the Clifford multiplication.
We will refer to solutions of the equation as Seiberg–Witten monopoles with multiple spinors or, following <cit.>, Seiberg–Witten multi-monopoles[The name multi-monopoles is often used in reference to solutions of the Bogomolny equation.].
The Seiberg–Witten equation with multiple spinors was introduced in <cit.> and <cit.>. If we trivialise E and represent Ψ by an n–tuple (Ψ_1, …, Ψ_n) with Ψ_i ∈Γ(S ⊗ L), then
μ(Ψ) = ∑_i=1^n ( Ψ_i Ψ_i^* - 1/2 |Ψ_i|^2 id ).
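The algebraic properties of μ can be checked numerically. The following sketch is ours, with the fibre of S trivialised as ℂ²; it verifies that each summand, and hence μ(Ψ), is a traceless self-adjoint endomorphism:

```python
import numpy as np

# Check that mu(Psi) = sum_i (Psi_i Psi_i^* - |Psi_i|^2/2 id) is traceless
# and self-adjoint, i.e. lies in i.su(S), for random spinors Psi_i in C^2.

rng = np.random.default_rng(0)
spinors = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))

mu = sum(np.outer(p, p.conj()) - 0.5 * np.vdot(p, p) * np.eye(2)
         for p in spinors)

assert abs(np.trace(mu)) < 1e-12        # traceless
assert np.allclose(mu, mu.conj().T)     # self-adjoint
```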
In particular, for n=1 one recovers the classical three-dimensional Seiberg–Witten equation.
Let 𝒢 = C^∞(Y, S^1) be the group of unitary automorphisms of L. An element u ∈ 𝒢 acts on a pair (A,Ψ) by
u(A, Ψ) := (A - u^-1du, uΨ).
(A,Ψ) is called irreducible if its 𝒢–stabiliser is trivial, i.e. Ψ ≠ 0, and reducible otherwise.
To make equation (<ref>) elliptic modulo the action of 𝒢 we introduce an additional unknown f ∈ C^∞(Y, iℝ). The modified equation for a triple (A,Ψ,f) reads
D_ABΨ + fΨ = 0,
F_A - ∗df - μ(Ψ) - η = 0.
In the next proposition we show that this modification does not produce new solutions.
For Ψ, Φ ∈ Γ(Hom(E, S ⊗ L)) let
μ(Ψ, Φ) := 1/2 {ΨΦ^* + ΦΨ^* - ⟨Ψ, Φ⟩ id}.
Considering μ(Ψ, Φ) as an imaginary-valued two-form, we have
∗d(μ(Ψ, Φ)) = i(D_ABΨ, Φ) - i(Ψ, D_ABΦ).
This is a straightforward computation; see <cit.>.
If (A, Ψ, f) is a solution of (<ref>), then f is constant and (A,Ψ) is a solution of the original equation (<ref>). Moreover, if Ψ≠ 0, then f = 0.
Applying ∗d to both sides of the equation F_A - ∗df - μ(Ψ) - η = 0 results in
0 = i(D_ABΨ, Ψ) - i(Ψ, D_ABΨ) - Δf
= -i(fΨ, Ψ) + i(Ψ, fΨ) - Δf = -(2|Ψ|^2 + Δ)f,
where we have used that D_ABΨ = -fΨ and that f is purely imaginary. If Ψ = 0, then Δf = 0 and f is constant. If Ψ ≠ 0, then Δ + 2|Ψ|^2 is invertible and f = 0.
§.§ Deformation theory
We want to study the space of solutions of (<ref>). Although it is naturally equipped with the C^∞–topology, it is convenient to use the Sobolev topology instead.
For k≥ 2 we consider the following Sobolev spaces:
𝒜_k(V) W^k,2 connections on a bundle V → Y,
Γ_k(V) W^k,2 sections of V,
Ω^p_k(V) W^k,2 differential forms with values in V,
𝒞_k = 𝒜_k(L) × Γ_k(Hom(E, S⊗L)) W^k,2 configurations,
𝒞^*_k = 𝒜_k(L) × (Γ_k(Hom(E, S⊗L)) ∖ {0}) W^k,2 irreducible configurations,
𝒞_k(U) W^k,2_loc configurations on an open subset U ⊂ Y,
𝒢_k+1 W^k+1,2 gauge transformations of L.
𝒞_k and 𝒞_k^* are Hilbert manifolds modelled on the Hilbert space Ω^1_k(iℝ) ⊕ Γ_k(Hom(E, S ⊗ L)) and 𝒢_k+1 is a Hilbert Lie group modelled on Ω^0_k+1(iℝ).
The action of 𝒢 on 𝒜(L) × Γ(Y, Hom(E, S ⊗ L)) extends to a smooth, proper action of 𝒢_k+1 on 𝒞_k with a metrisable, second countable quotient ℬ_k := 𝒞_k / 𝒢_k+1. Moreover, at every point of 𝒞_k there exists a local slice for the action and, as a result, ℬ_k^* := 𝒞_k^* / 𝒢_k+1 has the structure of a Hilbert manifold.
For the definition of local slices and the proof see <cit.>. We only mention that a local slice passing through (A,Ψ) ∈ 𝒞_k can be chosen to have the form
(A,Ψ) + K_A,Ψ,
where K_A,Ψ ⊂ T_(A,Ψ)𝒞_k is a neighbourhood of zero in ker G^*_A,Ψ, where G_A,Ψ is the linearised action of the gauge group at (A,Ψ) and G^*_A,Ψ is the formal adjoint of G_A,Ψ restricted to 𝒞_k.
The moduli space of solutions ℳ(g,B,η) is the subspace of ℬ_k consisting of gauge equivalence classes of pairs (A,Ψ) satisfying (<ref>) with parameters (g,B,η). We define also the moduli space of irreducible solutions ℳ^*(g,B,η) = ℳ(g,B,η) ∩ ℬ_k^*.
We will often write σ = (B,η) and denote the corresponding moduli spaces by ℳ(g,σ) and ℳ^*(g,σ). When (g,σ) is clear from the context we write simply ℳ and ℳ^*.
ℳ and ℳ^* are endowed with the subspace topology induced from ℬ_k; hence they are metrisable and second countable. The particular choice of k ≥ 2 is immaterial: if the perturbation σ is of class W^l,2, then the moduli spaces for different choices of k ≤ l+1 are naturally homeomorphic. If moreover l = ∞, then they are all homeomorphic to the space of smooth solutions modulo smooth gauge transformations.
Suppose that σ = (B,η) is of class W^l,2 for some l ≥ 1.
* For every (A,Ψ) ∈ 𝒞_2 satisfying (<ref>) there exists u ∈ 𝒢_3 such that u(A,Ψ) ∈ 𝒞_l+1. Two such gauge transformations differ by an element of 𝒢_l+2.
* If (A_i, Ψ_i) is a convergent sequence of solutions in 𝒞_2, then there is a sequence u_i ∈ 𝒢_3 such that u_i(A_i, Ψ_i) are in 𝒞_l+1 and converge in 𝒞_l+1.
This is a direct consequence of Hodge theory and elliptic regularity for the Dirac operator and d + d^*. Without loss of generality we fix k = 2 and drop the subscript k, writing 𝒞, ℬ, ℳ, and so on. The local structure of ℳ at a solution (A,Ψ) is encoded in the deformation complex
Γ_3(iℝ) --G_A,Ψ--> Ω^1_2(iℝ) ⊕ Γ_2(E^* ⊗ S ⊗ L) ⊕ Ω^0_2(iℝ) --S_A,Ψ--> Γ_1(E^* ⊗ S ⊗ L) ⊕ Ω^2_1(iℝ).
Here, G_A, Ψ is the linearisation of the gauge group action
G_A,Ψ(h) := (-dh, hΨ, 0),
and S_A, Ψ is the linearisation of the modified equation (<ref>) at (A, Ψ, 0)
S_A,Ψ(a, ϕ, v) := (D_ABϕ + a·Ψ + vΨ, da - 2μ(Ψ, ϕ) - ∗dv).
The complex is well-defined by the Sobolev multiplication theorem. It is elliptic and has index zero. One way to see this is to introduce the formal adjoint
G_A,Ψ^*(a, ϕ, v) := -d^*a + i ( Ψ, ϕ )
and consider the extended Hessian L_A,Ψ = S_A,Ψ⊕ G_A,Ψ^*. After rearranging the direct sums and identifying the spaces of two-forms and functions via the Hodge star, L_A,Ψ is given by
L_A,Ψ : Ω^1_2(iℝ) ⊕ Ω^0_2(iℝ) ⊕ Γ_2(E^* ⊗ S ⊗ L) → Ω^1_1(iℝ) ⊕ Ω^0_1(iℝ) ⊕ Γ_1(E^* ⊗ S ⊗ L),
L_A,Ψ(a, v, ϕ) = ( ∗da - dv - 2∗μ(ϕ, Ψ), -d^*a + i(Ψ, ϕ), a·Ψ + vΨ + D_ABϕ ).
L_A,Ψ is elliptic because up to terms of order zero it agrees with the direct sum of D_AB and the total operator of the de Rham complex. Representing L_A,Ψ by the matrix
L_A,Ψ =
( ∗d      -d      -2∗μ(Ψ, ·) )
( -d^*     0       i(Ψ, ·)   )
( Cl(·)Ψ  (·)Ψ     D_AB      )
we also see that it is self-adjoint, and so the deformation complex is elliptic and has index zero.
Let H^0_A,Ψ, H^1_A,Ψ, and H^2_A,Ψ be the homology groups of the deformation complex at (A,Ψ). They are finite dimensional vector spaces. If the solution is irreducible, then H^0_A,Ψ = 0 and, since L_A,Ψ is self-adjoint, H^1_A,Ψ = ker L_A,Ψ and H^2_A,Ψ = coker L_A,Ψ are naturally isomorphic. We will later need an explicit description of these groups.
H^1_A,Ψ consists of pairs (a,ϕ) solving
-d^*a + i(Ψ, ϕ) = 0,
D_ABϕ + a·Ψ = 0,
da - 2μ(Ψ, ϕ) = 0.
The proof is similar to that of Proposition <ref>.
We call an irreducible solution (A,Ψ) regular if H^1_A,Ψ = H^2_A,Ψ = 0.
By elliptic regularity for L_A,Ψ, the property of being a regular solution does not depend on the chosen Sobolev setup (compare with <cit.>).
The elliptic complex gives rise to a Kuranishi model for the moduli space.
For an irreducible solution (A, Ψ) there is a smooth map
κ : H^1_A,Ψ → H^2_A,Ψ
such that κ(0) = 0, dκ(0) = 0, and a neighbourhood of (A,Ψ) in ℳ^* is homeomorphic to a neighbourhood of 0 in κ^-1(0).
The proof is standard but we need it for future reference.
Let U be a local slice passing through (A,Ψ) as in (<ref>). Let V = 𝒞 × Ω^0_2(iℝ), W = Γ_1(E^* ⊗ S ⊗ L) × Ω^2_1(iℝ), and let ℱ : V → W be given by the left-hand side of (<ref>). The restriction of ℱ to U × Ω^0_2(iℝ), which we denote by f, is a smooth Fredholm map. By the implicit function theorem, there is a neighbourhood of (A,Ψ) in U × Ω^0_2(iℝ) and local charts in which f takes the form
f : E × ker df_A,Ψ,0 → E × coker df_A,Ψ,0,
f(e, x) = (e, K(e,x)),
for a Banach space E and a smooth map K : E × ker df_A,Ψ,0 → coker df_A,Ψ,0. The point (A,Ψ) corresponds to (0,0) in E × ker df_A,Ψ,0, and we have K(0,0) = 0 and dK(0,0) = 0. Let κ(x) = K(0,x). Then the zero set f^-1(0) is locally homeomorphic to κ^-1(0). This proves the proposition because H^1_A,Ψ ≅ ker df_A,Ψ,0 and H^2_A,Ψ ≅ coker df_A,Ψ,0.
The existence of local Kuranishi models allows us to equip ℳ^* with the structure of a real analytic space; since ℱ is real analytic, we can choose the Kuranishi map κ to be analytic as well. Thus, ℳ^* is locally homeomorphic to the real analytic set V = κ^-1(0). As we vary κ and V, the rings of analytic functions modulo the ideals I(V) glue to a globally defined structure sheaf making ℳ^* into a real analytic space; see <cit.>.
In section <ref> we will face a situation in which the moduli space ℳ^* is not regular, but it still has the structure of a smooth manifold.
The moduli space is said to be Zariski smooth if ℳ = ℳ^* and for every [A,Ψ] ∈ ℳ there is a local Kuranishi model with the map κ : H^1_A,Ψ → H^2_A,Ψ being zero.
For a Zariski smooth moduli space the homology group H^1_A,Ψ is the tangent space to ℳ at [A,Ψ]. The groups H^2_A,Ψ form a vector bundle over the moduli space. To see this, consider the trivial bundles over 𝒞^*
𝒱_0 := 𝒞^* × (Γ_2(E^* ⊗ S ⊗ L) ⊕ Ω^2_2(iℝ)),
𝒲_0 := 𝒞^* × (Ω^1_1(iℝ) ⊕ Γ_1(E^* ⊗ S ⊗ L) ⊕ Ω^0_1(iℝ)),
and the section s_0 ∈ Γ(Hom(𝒱_0, 𝒲_0)) which at a point (A,Ψ) is given by S^*_A,Ψ. The triple (𝒱_0, 𝒲_0, s_0) is 𝒢–equivariant and descends to a triple (𝒱, 𝒲, s) of vector bundles 𝒱 and 𝒲 over ℬ^* and a section s ∈ Γ(Hom(𝒱, 𝒲)). By construction, ker s([A,Ψ]) ≅ H^2_A,Ψ.
The obstruction bundle is the subspace of 𝒱 given by
𝒪 := { ([A,Ψ], v) | [A,Ψ] ∈ ℳ^*, v ∈ ker s([A,Ψ]) }
together with the map 𝒪 → ℳ^* induced from the bundle projection 𝒱 → ℬ^*.
If ℳ is Zariski smooth, then it is a submanifold of ℬ^* and 𝒪 → ℳ is a vector bundle isomorphic to the tangent bundle Tℳ.
More precisely, ℳ is a disjoint union of smooth manifolds of possibly different dimensions. If C is a connected component containing a point [A,Ψ], then C is a smooth submanifold of ℬ^* of dimension dim H^1_A,Ψ and the restriction of 𝒪 → ℳ to C is a smooth vector bundle isomorphic to TC as an unoriented real vector bundle. The proof is standard and we omit it.
§.§ Perturbations and transversality
The moduli space ℳ(g,σ) depends on the choice of a Riemannian metric g and a parameter σ = (B,η). In this subsection we fix g and study the properties of ℳ(g,σ) for a generic choice of σ.
We introduce the following Fréchet manifolds of C^∞ parameters
Met the space of Riemannian metrics on Y,
𝒵 the space of closed imaginary-valued two-forms,
Π := 𝒜(E) × 𝒵 the space of perturbations,
and their Sobolev completions 𝒵_k and Π_k with respect to the W^k,2–topology.
The space of parameters of the equation, appearing in the introduction and Theorem <ref>, is
𝒫 := Met × Π.
However, typically we will fix a Riemannian metric and perturb only σ ∈ Π.
Recall that a subset of a topological space is called residual if it contains a countable intersection of open and dense subsets. Baire’s theorem asserts that a residual subset of a complete metric space is dense.
For every g ∈ Met, the set
{σ ∈ Π | all solutions in ℳ^*(g,σ) are regular}
is residual in Π.
We first prove the corresponding statement for Π_k in place of Π, with solutions in the configuration space 𝒞_k+1^*. Consider the smooth map
𝒮 : Π_k × 𝒞_k+1^* × Ω^0_k+1(iℝ) ⟶ Γ_k(E^* ⊗ S ⊗ L) × Ω^2_k(iℝ),
𝒮(B, η, A, Ψ, f) = (D_ABΨ + fΨ, F_A - ∗df - μ(Ψ) - η).
We will show that 𝒮 is a submersion at all irreducible solutions.
Let (A,Ψ) ∈ 𝒞_k+1^* be such that 𝒮(B,η,A,Ψ,0) = 0. The derivative at (B,η,A,Ψ,0) is
d 𝒮(b, ξ, a, ϕ, v) = S_A,Ψ (a, ϕ, v) + (b ·Ψ, -ξ).
Recall that S_A,Ψ is the linearisation operator (<ref>) introduced in the previous subsection. Since S_A,Ψ⊕ G^*_A,Ψ is Fredholm, so is d 𝒮⊕ G^*_A,Ψ, and the image of d 𝒮 is closed and of finite codimension. Suppose that (Ξ, ω) is L^2–orthogonal to the image of d 𝒮. We will prove ω = 0.
First, observe that (Ξ,ω) satisfies
- i ( Ψ, Ξ )+ ∗ d ω = 0,
D_ABΞ - 2ω·Ψ = 0,
d^* ω + μ(Ψ,Ξ) = 0.
The first equation is a consequence of varying a alone, the second of varying ϕ alone, and the third of varying v alone.
On the other hand, ω is L^2–orthogonal to 𝒵_k, as a result of varying ξ ∈ 𝒵_k alone, so in particular d^*ω = 0 and, by the third equation, μ(Ψ,Ξ) = 0. Applying ∗d^* to μ(Ψ,Ξ) = 0 and using <cit.>, we obtain
0 = ∗d^*μ(Ψ,Ξ) = μ(Ψ, D_ABΞ) - ∗(i/2)(∇_ABΨ, Ξ) - ∗(i/2)(Ψ, ∇_ABΞ)
= μ(Ψ, 2ω·Ψ) + d^*dω.
Taking the L^2-inner product with ω gives us
2‖ω·Ψ‖_L^2^2 + ‖dω‖_L^2^2 = 0,
so dω = 0. We conclude that ω = 0, D_ABΞ = 0, and (Ψ, Ξ) = 0. Next, we prove Ξ = 0. Suppose by contradiction that Ξ is not identically zero. By the unique continuation theorem for harmonic spinors <cit.>, the set {|Ξ| > 0} is open and dense in Y. The same is true for Ψ and so there exists x ∈ Y such that |Ψ(x)| > 0 and |Ξ(x)| > 0. On the other hand, for all a ∈ Ω^1_k+1(iℝ) and b ∈ Ω^1_k(𝔰𝔲(E))
0 = ⟨ d 𝒮( b, 0, a, 0 , 0 ) , Ξ⟩_L^2 = ((a+b) ·Ψ , Ξ )_L^2.
However, we can find a(x) ∈ Λ^1_x ⊗ iℝ and b(x) ∈ Λ^1_x ⊗ 𝔰𝔲(E_x) such that
( (a(x) + b(x)) ·Ψ(x), Ξ(x))> 0.
This is an elementary fact of linear algebra; it is obvious when rk E = 1, and for rk E ≥ 2 see Lemma <ref> below. We extend a(x), b(x) to differential forms a and b satisfying
( (a+b) ·Ψ , Ξ )_L^2 > 0.
This is a contradiction; we conclude that Ξ = 0.
We have shown that 𝒮 is a submersion at all irreducible solutions. By the implicit function theorem, 𝒮^-1(0) is a submanifold of Π_k × 𝒞_k+1^* × Ω^0_k+1(iℝ). The gauge group 𝒢_k+2 acts freely on 𝒮^-1(0). Denote the quotient space by 𝒳. The projection onto Π_k induces a smooth map π : 𝒳 → Π_k whose fibre over σ ∈ Π_k is ℳ^*(g,σ). The derivative dπ is Fredholm and at a point (A,Ψ) ∈ ℳ^*(g,σ) there is a natural identification coker dπ_A,Ψ = H^2_A,Ψ. By the Sard–Smale theorem, the set of regular values of π is residual in Π_k. For every such regular value σ, all solutions in ℳ^*(g,σ) are regular. This proves the statement for Π_k.
The final part of the proof is to replace Π_k by Π. We follow an argument due to Taubes used in the context of pseudo-holomorphic curves <cit.>. For N > 0 define Π_N ⊂ Π as the set of all σ such that all solutions (A,Ψ) in ℳ^*(g,σ) satisfying
1/N ≤ ‖Ψ‖_L^∞ ≤ N
are regular. We need to show that the intersection ⋂_N≥1 Π_N is residual. First, we show that Π_N is open. Let σ_i be a sequence in Π ∖ Π_N converging to σ in C^∞. By the definition of Π_N, there is a sequence of solutions (A_i, Ψ_i) solving equations (<ref>) with parameters σ_i, satisfying (<ref>), and with H^2_A_i,Ψ_i ≠ 0. By the first part of Theorem <ref> from the next subsection, after passing to a subsequence (A_i,Ψ_i) converges smoothly modulo gauge. The limit (A,Ψ) satisfies condition (<ref>) and the equations with parameter σ. Since L_A_i,Ψ_i converges to L_A,Ψ, the latter operator is not surjective and σ ∉ Π_N. We conclude that Π ∖ Π_N is closed.
The last step is to show that Π_N is dense in Π. We use the statement for Π_k proved earlier. Let Π_k,N be the subspace of Π_k defined analogously to Π_N. Since Π_k,N contains the set of regular values of π discussed earlier, it is dense in Π_k. It is also open in Π_k by the same argument as before. By Proposition <ref> and Remark <ref>, Π_N = Π_k,N ∩ Π for all k ≥ 2. Every element σ of Π can be approximated in the W^k,2–topology by elements of Π_k,N. In fact, it can be approximated by elements of the intersection Π_k,N ∩ Π because Π_k,N is open in Π_k and Π is dense in Π_k. Therefore, we have shown that σ can be approximated by elements of Π_N in any Sobolev norm. By the usual diagonal argument and the Sobolev embedding theorem, we can find a sequence in Π_N converging to σ in C^∞. This shows that Π_N is dense in Π.
Let n ≥ 2 and let V_n = ℂ^n be the standard representation of SU(n).
For every pair of non-zero vectors v, w ∈ V_2 ⊗ V_n there exists b ∈ 𝔰𝔲(2) ⊗ 𝔰𝔲(n) such that
(bv, w) > 0.
The proof is similar to that of <cit.>. Let e_1, …, e_n be an orthonormal basis of V_n. Write v and w as
v = ∑_i=1^n v_i ⊗ e_i and w = ∑_i=1^n w_i ⊗ e_i
for v_i, w_i ∈ V_2. Likewise, denoting by σ_1, σ_2, σ_3 the standard basis of 𝔰𝔲(2), we can write b as
b = ∑_k=1^3 σ_k ⊗ b_k
for some b_k ∈ 𝔰𝔲(n), so that
(bv, w) = ∑_k=1^3 ∑_i,j (σ_k v_i, w_j)(e_i, b_k e_j).
Suppose by contradiction that (bv, w) = 0 for all b ∈ 𝔰𝔲(2) ⊗ 𝔰𝔲(n). In particular, setting the b_k to be elementary off-diagonal anti-Hermitian matrices, we see that for k = 1,2,3 and i ≠ j
(σ_k v_i, w_j) - (σ_k v_j, w_i) = 0,
i(σ_k v_i, w_j) + i(σ_k v_j, w_i) = 0.
Hence,
(σ_k v_i, w_j) = 0
for k = 1,2,3 and i ≠ j. Suppose without loss of generality that v_1 ≠ 0. Then σ_1 v_1, σ_2 v_1, σ_3 v_1 are linearly independent over ℝ and thus span V_2 over ℂ. It follows that w_j = 0 for j = 2, 3, …, n. On the other hand, setting b_k = i diag(1,-1,0,…,0) ∈ 𝔰𝔲(n), we obtain that for k = 1,2,3
(σ_k v_1, w_1) = 0,
which shows that w_1 = 0 and so w = 0, yielding a contradiction.
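For concreteness, the lemma can be probed numerically in the smallest case n = 2. The sketch below is entirely ours; it checks for random v, w that some product generator of 𝔰𝔲(2) ⊗ 𝔰𝔲(2) pairs non-trivially with v and w, which after a sign flip gives the asserted positivity:

```python
import numpy as np

# Randomized check of the lemma for n = 2: some generator b of su(2)⊗su(2)
# satisfies Re(bv, w) != 0, hence (after flipping the sign of b) Re(bv, w) > 0.

rng = np.random.default_rng(1)
pauli = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
su2 = [1j * s for s in pauli]          # anti-Hermitian basis of su(2)

v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
w = rng.standard_normal(4) + 1j * rng.standard_normal(4)

vals = [np.real(np.vdot(w, np.kron(a, b) @ v)) for a in su2 for b in su2]
assert max(abs(x) for x in vals) > 1e-8
```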
§.§ Reducible solutions
The moduli space might contain reducible solutions at which it develops singularities. In this paper we focus on the favourable case when reducibles can be avoided. As in the classical setting <cit.>, this is guaranteed by b_1(Y) > 1.
For a fixed g ∈ Met, the subset of parameters σ ∈ Π for which ℳ(g,σ) contains a reducible solution is contained in a closed affine subspace of Π of codimension b_1(Y).
If (A,0) is a reducible solution of equations (<ref>) with a parameter σ = (B,η), then F_A = η and, passing to de Rham cohomology, we obtain
[η] = -2πi c_1(L) ∈ H^2(Y, iℝ).
Consider the affine subspace of 𝒵 given by
V = {η ∈ 𝒵 | [η] = -2πi c_1(L)}.
In other words, V is the preimage of -2πi c_1(L) under the projection π : 𝒵 → H^2(Y, iℝ) associating to each closed form its de Rham class. The map π is linear and surjective. It is also continuous because, by the Hodge decomposition theorem, it is continuous with respect to the W^k,2–topology for all k. Therefore, V is a closed affine subspace of codimension dim H^2(Y,ℝ) = dim H^1(Y,ℝ) = b_1.
§.§ Compactness and Fueter sections
The moduli space of multi-monopoles need not be compact.
Haydys and Walpuski <cit.> studied sequences of solutions whose L^2–norms diverge to infinity. They proved that in the limit one obtains a harmonic spinor taking values in the zero set μ^-1(0). In general, it is defined only on the complement of a closed, nowhere dense subset of Y.
Let Z ⊊ Y be a closed, proper subset and (A,Ψ) ∈ 𝒞_k(Y ∖ Z) (see Definition <ref>). A triple (A,Ψ,Z) is called a Fueter section[For the explanation of this name see <cit.>, <cit.>, <cit.>.] with singular set Z if it satisfies
* D_ABΨ = 0 and μ(Ψ) = 0,
* ∫_Y ∖ Z |Ψ|^2 =1 and ∫_Y∖ Z | ∇_A Ψ |^2 < ∞,
* | Ψ | extends to a Hölder continuous function on Y such that Z = | Ψ |^-1(0).
The definition of a Fueter section depends, through the Dirac equation, on the choice of the metric g and the background connection B. We will call (A,Ψ, Z) a Fueter section with respect to (g,B) when we want to stress this dependence.
There is elliptic regularity theory for Fueter sections <cit.>, which implies that if B is of class W^l,2, then after a gauge transformation (A,Ψ) is in 𝒞_l+1(Y ∖ Z). Taubes <cit.> proved that the singular set of a Fueter section has Hausdorff dimension at most one.
Let (g_i, σ_i) ∈ Met × Π be a sequence converging to (g,σ) and let [A_i, Ψ_i] be a sequence in ℬ such that [A_i, Ψ_i] ∈ ℳ(g_i, σ_i).
* If lim sup_i→∞ ‖Ψ_i‖_L^2 < ∞, then after passing to a subsequence and applying gauge transformations the sequence (A_i, Ψ_i) converges smoothly to a solution (A,Ψ) representing a point in ℳ(g,σ).
* If lim sup_i→∞ ‖Ψ_i‖_L^2 = ∞, then there exists a Fueter section (A,Ψ,Z) with respect to (g,B) such that after passing to a subsequence and applying gauge transformations A_i → A weakly in W^1,2_loc and Ψ_i/‖Ψ_i‖_L^2 → Ψ weakly in W^2,2_loc over Y ∖ Z.
Let g ∈ Met and σ = (B,η) ∈ Π. If there are no Fueter sections with respect to (g,B), then there is an open neighbourhood of (g,σ) in Met × Π such that for all (g',σ') from this neighbourhood the moduli space ℳ(g',σ') is compact.
Let K = sup{‖Ψ‖_L^2 | [A,Ψ] ∈ ℳ(g,σ)}. We have K < ∞ by Theorem <ref> and the assumption that there are no (g,B)–Fueter sections. We claim that there is an open neighbourhood of (g,σ) such that for all (g',σ') from this neighbourhood
sup_[A,Ψ] ∈ ℳ(g',σ') ‖Ψ‖_L^2 < K + 1,
and the proposition follows by Theorem <ref>. Assume by contradiction that there is a sequence (g_i,σ_i) converging to (g,σ) and violating (<ref>). There is a corresponding sequence (A_i,Ψ_i) representing points in ℳ(g_i,σ_i) and satisfying ‖Ψ_i‖_L^2 ≥ K+1. The first alternative of Theorem <ref> cannot hold, because otherwise we would extract a subsequence converging modulo gauge to a solution (A,Ψ) with [A,Ψ] ∈ ℳ(g,σ) and ‖Ψ‖_L^2 ≥ K+1 > K. On the other hand, the second alternative would imply the existence of a (g,B)–Fueter section.
§.§ Orientations
If ℳ^* is regular, one prescribes an orientation to every point of ℳ^*. We briefly outline this construction as we will need it later. For (A,Ψ,f) ∈ 𝒞 × Ω^0_2(iℝ) let S_A,Ψ,f be the linearisation of (<ref>) at (A,Ψ,f)[Previously we have only dealt with the linearisation at irreducible solutions for which f = 0.] and let L_A,Ψ,f = G^*_A,Ψ ⊕ S_A,Ψ,f. Let Det → 𝒞^* × Ω^0_2(iℝ) be the determinant line bundle of the family L_A,Ψ,f; see <cit.>. As explained in <cit.>, Det descends to a real line bundle Λ → ℬ^* × Ω^0_2(iℝ) which is shown to be trivial by considering the 𝒢–equivariant homotopy {L_A,tΨ,tf}_t∈[0,1] connecting L_A,Ψ,f with L_A,0,0 = (d + d^*) ⊕ D_AB. The kernel and cokernel of d + d^* are naturally identified with H^0(Y,ℝ) ⊕ H^1(Y,ℝ), while D_AB is complex-linear and has trivial determinant. It follows that the determinant bundle of the family of operators L_A,0,0 is trivial, and therefore so is Λ. Denote the global orientation on Λ given by this trivialisation by or_Λ.
On the other hand, L_A,Ψ,f is formally self-adjoint, so coker L_A,Ψ,f and ker L_A,Ψ,f are naturally isomorphic. More precisely, for a self-adjoint Fredholm operator T we define the Knudsen–Mumford isomorphism det(ker T) ⊗ det(coker T)^* → ℝ by
ω ⊗ η ↦ (-1)^d(d+1)/2 η(ω), where d = dim ker T,
and where coker T = ker T^* = ker T. These isomorphisms do not yield a global trivialisation of Det because the dimension of the kernel jumps. However, they induce a natural trivialisation of Det over every stratum of ℬ^* × Ω^0_2(iℝ) over which L_A,Ψ,f has constant rank <cit.>. We denote the induced orientation on Λ, restricted to every stratum, by or_0.
If ℳ^* is regular, we define for every x ∈ ℳ^*
sign(x) := +1 if or_Λ(x) = or_0(x), and sign(x) := -1 if or_Λ(x) ≠ or_0(x).
The definition can be extended to the case when ℳ is Zariski smooth since for every connected component C of ℳ the dimension of ker L_A,Ψ,f is constant along C.
If ℳ is Zariski smooth and C is a connected component of ℳ, set
sign(C) := +1 if or_Λ(x) = or_0(x), and sign(C) := -1 otherwise,
where x is any point in C.
Alternatively, we have Λ = det Tℳ ⊗ (det 𝒪)^*, so the trivialisation or_Λ induces a relative orientation on 𝒪 → ℳ which orients the zero set of every transverse section of 𝒪.
§.§ Counting solutions
We come to the main point of this section, that is defining the signed count of Seiberg–Witten multi-monopoles. The next two propositions are deduced in the standard way from Proposition <ref>, Proposition <ref>, and the discussion of orientations.
For every g ∈ Met there is a residual subset Π^reg(g) ⊂ Π such that for every σ ∈ Π^reg(g) the moduli space ℳ^*(g,σ) is an oriented zero-dimensional manifold. If b_1(Y) > 0, we may additionally assume that there are no reducible solutions and ℳ^*(g,σ) = ℳ(g,σ).
Let σ_0 and σ_1 be perturbations in Π. Denote by Π(σ_0,σ_1) the space of smooth paths {σ_t}_t∈[0,1] of elements of Π connecting σ_0 and σ_1. The subspace topology induced from the space C^∞([0,1], Π) makes Π(σ_0,σ_1) into a Fréchet manifold.
For every smooth path {g_t}_t∈[0,1] in Met and all σ_0 ∈ Π^reg(g_0), σ_1 ∈ Π^reg(g_1) there is a residual subset of Π(σ_0,σ_1) such that for all smooth paths {σ_t}_t∈[0,1] from this subset the one-parameter moduli space
⋃_t∈[0,1] ℳ^*(g_t, σ_t)
is an oriented one-dimensional cobordism between ℳ^*(g_0,σ_0) and ℳ^*(g_1,σ_1). If b_1(Y) > 1, we may additionally assume that ℳ^*(g_t,σ_t) = ℳ(g_t,σ_t) for all t ∈ [0,1].
If ℳ(g,σ) is compact and consists of irreducible and regular solutions, set
n(g,σ) := ∑_x∈ℳ(g,σ) sign(x).
The definition of n can be extended to the case when ℳ is compact and Zariski smooth.
If ℳ(g,σ) is compact and Zariski smooth, set
n(g,σ) := ∑_C sign(C) χ(C),
where the sum is taken over all connected components of ℳ(g,σ) and χ is the Euler characteristic.
The Gauss–Bonnet theorem and the Poincaré–Hopf index theorem give us two equivalent descriptions of n.
Assume that ℳ(g,σ) is compact and Zariski smooth.
* If s is a transverse section of 𝒪 → ℳ(g,σ), then
n(g,σ) = ∑_x∈s^-1(0) sign(x),
where sign(x) is obtained by comparing the orientation on s^-1(0) induced from the relative orientation on 𝒪 → ℳ with the natural orientation of a point.
* Suppose that all components of ℳ are orientable and orient them arbitrarily. The relative orientation on 𝒪 → ℳ makes 𝒪 into an oriented real vector bundle. If e(𝒪) is the Euler class of 𝒪, then
n(g,σ) = ∫_ℳ e(𝒪).
For (g,σ) ∈ 𝒫_c such that ℳ(g,σ) consists of irreducible and regular solutions, n(g,σ) is well-defined as in Definition <ref>. If (g,σ) ∈ 𝒫_c is an arbitrary point, define n(g,σ) to be equal to n(g',σ') for any (g',σ') in the same connected component of 𝒫_c and such that ℳ(g',σ') consists of irreducible and regular solutions. For two such pairs (g_0,σ_0) and (g_1,σ_1), by Proposition <ref> there is a path (g_t,σ_t) in 𝒫_c such that the union
𝒲 := ⋃_t∈[0,1] ℳ(g_t,σ_t)
is an oriented compact cobordism from ℳ(g_0,σ_0) to ℳ(g_1,σ_1). It follows that n(g_0,σ_0) = n(g_1,σ_1) and so n(g,σ) is well-defined for any (g,σ) ∈ 𝒫_c. It remains to show that if ℳ(g,σ) is Zariski smooth, then the integer obtained in this way is equal to that from Definition <ref>. This is a general fact; see <cit.>.
§ A DIMENSIONAL REDUCTION
We focus on the case Y = S^1 ×Σ. Assuming that the parameters of the equation are invariant in the circle direction, we show that irreducible Seiberg–Witten multi-monopoles on Y are pulled back from configurations on Σ obeying a generalised vortex equation. This is analogous to the correspondence between classical Seiberg–Witten monopoles and vortices, which in turn correspond to effective divisors <cit.>, <cit.>, <cit.>.
§.§ Seiberg–Witten equations and quaternionic representations
Equation (<ref>) is an example of the Seiberg–Witten equation associated with a quaternionic representation introduced in <cit.> and <cit.>; see also <cit.>, <cit.>, <cit.>, <cit.>. The language of quaternionic representations is well-suited for proving the circle-invariance of solutions.
Let M be a quaternionic vector space equipped with a Euclidean inner product ⟨· , ·⟩ compatible with the complex structures i, j, k. Let (M) be the group of quaternionic automorphisms of M preserving ⟨· , ·⟩. Suppose that G and H are compact connected Lie groups together with a representation G × H →(M). There is an associated hyperkähler moment map μ M →𝔰𝔲(2) ⊗𝔤, where 𝔤 is the Lie algebra of G, determined by the identity
⟨ a, μ(x) ⟩ = ⟨ a x, x ⟩
for all x ∈ M and a ∈𝔰𝔲(2) ⊗𝔤. Here, we think of 𝔰𝔲(2) as the space of imaginary quaternions.
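Concretely, identity (<ref>) determines μ by a Riesz-type argument: if { e_a } is an orthonormal basis of 𝔰𝔲(2) ⊗𝔤, then
μ(x) = ∑_a ⟨ e_a x, x ⟩ e_a.
In particular, μ is quadratic, μ(tx) = t^2 μ(x) for t ∈ℝ, a homogeneity used repeatedly below.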
Let Y be a Riemannian spin three-manifold as before. Let P → Y and Q → Y be principal bundles with structure groups G and H, respectively. Define
𝕄 = (P ×_Y Q ×_Y S) ×_G × H × SU(2) M.
𝕄 has the structure of a left module over the Clifford bundle of Y. The moment map descends to a fibre-preserving map μ𝕄→Λ^2 Y⊗𝔤_P, where 𝔤_P is the adjoint bundle of P.
Fix a connection B on Q. For any connection A on P, denote by ∇_AB the covariant derivative on 𝕄 induced from A, B, and the Levi–Civita connection. The pair (𝕄, ∇_AB) is a Dirac bundle in the sense of <cit.>, and as such is equipped with the Dirac operator _ABΓ(Y,𝕄) →Γ(Y,𝕄). The Seiberg–Witten equation associated with the quaternionic representation (G× H,M) is the following differential equation for a pair (A,Ψ) ∈(P) ×Γ(𝕄):
{[ _ABΨ = 0,; F_A = μ(Ψ). ].
A pair (A,Ψ) ∈(P) ×Γ(Y, 𝕄) is called irreducible if there exists x ∈ Y such that the G–stabiliser of Ψ(x) in M is trivial.
The Seiberg–Witten equation with multiple spinors (<ref>) corresponds to
M = ℍ⊗ℂ^n, H = SU(n), G = U(1)
with the standard representations.
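For example, when n = 1 the flavour group SU(1) is trivial and M = ℍ; a pair (A,Ψ) then consists, in effect, of a spin-c connection and a spinor, and (<ref>) reduces to the classical three-dimensional Seiberg–Witten equation, with μ(Ψ) the usual quadratic term (up to conventions, the trace-free part of ΨΨ^*).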
§.§ Circle-invariance of solutions
The goal of the next few paragraphs, which will be achieved in Proposition <ref>, is to describe solutions of (<ref>) under the following assumptions:
* (A) Y = S^1 ×Σ for a closed Riemann surface Σ; we endow Y with the product metric.
* (B) The spin structure on Y is induced from a spin structure K^1/2 on Σ [Recall that a spin structure on Σ is equivalent to a complex line bundle K^1/2→Σ together with an isomorphism of K^1/2⊗ K^1/2 with the canonical bundle K = Λ^1,0Σ. As a principal bundle, S → Y is pulled back from the (2)–bundle associated to K^-1/2, the dual of K^1/2, via the inclusion (1) ↪(2). As a vector bundle, S = K^1/2⊕ K^-1/2.].
* (C) Q and B are pulled back from a bundle and a connection on Σ.
* (D) P is pulled back from a bundle over Σ [This will eventually follow from the existence of an irreducible solution of (<ref>).].
To keep the notation simple we use the same symbols K^1/2, P, Q, B, and so forth for the corresponding objects over Σ and for their pull-backs to Y. Observe that 𝕄→ Y is pulled back from
𝕄 = ( P ×_Σ Q ×_Σ K^-1/2 ) ×_G × H ×(1) M.
The action of unit quaternions (2) on M rotates the sphere of complex structures with (1) ⊂(2) being the stabiliser of i; thus, 𝕄→Σ is a complex vector bundle. Consider the quaternionic vector bundle V = (P ×_Σ Q) ×_G × H M; then 𝕄 = V ⊗_ K^-1/2, where the complex structure on V is given by i. The remaining part of the quaternionic structure is encoded in an anti-linear involution j V → V. Taking the tensor product of j with the anti-linear map K^1/2→ K^-1/2 given by the metric, we obtain an anti-linear isomorphism
σ V ⊗ K^1/2→ V ⊗ K^-1/2.
We define similarly a map in the opposite direction, also denoted by σ, so that σ^2 = -1. Equivalently, σ can be seen as a map σ𝕄⊗ K →𝕄, which is a two-dimensional manifestation of the Clifford multiplication.
Next, we relate sections and connections on Y to those on Σ. Write (Σ, P) and (Y, P) for the spaces of connections on P →Σ and its pull-back to Y. Let t ∈ [0,1] be the coordinate on S^1 in S^1 ×Σ. Any connection A_Y ∈(Y,P) can be uniquely written in the form
A_Y = A(t) + b(t) dt
for one-periodic families A(t) of connections in (Σ,P) and sections b(t) of 𝔤_P. (Σ, P) embeds in (Y,P) by pulling-back connections. Its image consists of those A_Y = A(t) + bdt for which b(t) = 0 and A(t) is independent of t. Likewise, any Ψ∈Γ(Y, 𝕄) can be identified with a one-periodic family Ψ(t) ∈Γ(Σ, 𝕄) and Γ(Σ, 𝕄) embeds into Γ(Y, 𝕄) as sections independent of t. The gauge group (Σ, P) is naturally a subgroup of (Y, P).
A circle-invariant configuration is an element of the image of
(Σ, P) ×Γ(Σ, 𝕄) ↪(Y,P) ×Γ(Y, 𝕄).
We will identify a circle-invariant configuration on Y with the corresponding pair on Σ.
If two circle-invariant configurations differ by g ∈(Y,P), then g ∈(Σ, P). In particular, (Σ, P) ×Γ(Σ, 𝕄) / (Σ, P) is a submanifold of (Y,P) ×Γ(Y, 𝕄) / (Y,P).
The Dirac operator on Y can be expressed in terms of the Dolbeault operator on Σ. In the simplest case Y = ℝ^3 = ℝ×ℂ and M = ℍ, denoting the coordinates on ℝ×ℂ by t and z = x + iy, we have for a map Ψ: ℝ^3 →ℍ
Ψ = i ∂Ψ/∂ t + j ∂Ψ/∂ x + k ∂Ψ/∂ y = i ∂Ψ/∂ t + j ( ∂Ψ/∂ x - i ∂Ψ/∂ y) = i ∂Ψ/∂ t + 2 j ∂Ψ/∂ z.
In general, ∂ / ∂ z is replaced by the Dolbeault operator
∂_ABΓ(Σ, 𝕄) →Γ(Σ, 𝕄⊗ K)
or equivalently
∂_ABΓ(Σ, V ⊗ K^-1/2) →Γ(Σ, V ⊗ K^1/2),
which is defined as the (1,0)–part of the covariant derivative on 𝕄. The proof of the next lemma is a simple calculation in conformal coordinates, almost the same as (<ref>) [The difference between the constants 2 in (<ref>) and √(2) in Lemma <ref> comes from the fact that |dz| = √(2) with respect to the Euclidean metric on .].
Let A_Y = A(t) + b(t)dt be a connection in (Y, P) and Ψ = Ψ(t) a section in Γ(Y, 𝕄). The Dirac operator _A_Y, B acting on Γ(Y, 𝕄) is given by
_A_Y BΨ = i ( ∂Ψ/∂ t + b(t) Ψ)+ √(2)σ( ∂_A(t)BΨ).
In the dimensionally-reduced setting, we use the splitting of the hyperkähler moment map μ: M →𝔤⊗ℝ^3 into the real and complex parts: μ_ℝ: M →𝔤 and μ_ℂ: M →𝔤⊗ℂ. If μ = (μ_i,μ_j,μ_k) are the three components of μ, then μ_ℝ = μ_i and μ_ℂ = μ_j + i μ_k. The following identity will be useful later:
⟨μ_ℂ(x) j x, x ⟩ = | μ_ℂ(x) |^2
Under the reduction of the structure group of Y from SO(3) to SO(2), the splitting ℝ^3 = ℝ⊕ℂ gives us 𝔰𝔲(S) = ℝ⊕ K^-1. Accordingly, μ: 𝕄→𝔰𝔲(S) ⊗𝔤_P splits into the direct sum of
μ_ℝ: 𝕄→𝔤_P and μ_ℂ: 𝕄→ K^-1⊗𝔤_P.
μ_ℂ is holomorphic when restricted to fibres. Similarly, we have the conjugate maps
μ_ℝ: 𝕄̄→𝔤_P and μ̄_ℂ: 𝕄̄→ K ⊗𝔤_P,
which satisfy
μ_ℝ∘σ = - μ_ℝ and μ̄_ℂ∘σ = μ_ℂ.
Let A_Y = A(t) + b(t)dt be a connection in (Y,P) and Ψ = Ψ(t) a section in Γ(Y, 𝕄). The generalised Seiberg–Witten equation (<ref>) for (A_Y, Ψ) is equivalent to
{[ i (∂Ψ/∂ t + b Ψ) + √(2)σ( ∂_ABΨ) = 0,; ( ∂ A/∂ t + d_A b )^0,1 = - i/2μ_ℂ(Ψ),; ∗ F_A = μ_ℝ(Ψ). ].
In particular, for a circle-invariant configuration (A,Ψ) the equation simplifies to
{[ ∂_ABΨ = 0,; μ_ℂ (Ψ) = 0,; ∗ F_A = μ_ℝ (Ψ). ].
By Lemma <ref>, the first equation in (<ref>) is equivalent to _A_Y BΨ = 0. The remaining two equations are obtained from the identifications
𝔰𝔲(S) ≅Λ^0 Σ⊕ K^-1 and μ≅μ_ℝ⊕μ_ℂ
discussed earlier. Under the decomposition
Λ^2 Y = ( Λ^2 Σ) ⊕( Λ^1 S^1 ⊗Λ^1 Σ)
the curvature F_A_Y decomposes into
F_A_Y = F_A + dt ∧( ∂ A/∂ t + d_A b ).
We need to identify the splittings (<ref>) and (<ref>) under the isomorphism Λ^2 Y ≅𝔰𝔲(S). For simplicity, consider the flat case Y = ℝ×ℂ, with coordinates t and z = x + i y; the general case differs from it by a conformal factor. The isomorphism Λ^2 ℝ^3 ≅𝔰𝔲(2) is given by
dx ∧ dy ↦ i , dy ∧ dt ↦ j, dt ∧ dx ↦ k.
On the other hand, 𝔰𝔲(2) is identified with ℝ⊕ℂ via the map
ai + bj + ck ↦ (a, b+ic).
Let α + dt ∧β be a two-form on ℝ^3, where
α = a dx ∧ dy, β = b_1 dx + b_2 dy.
Under the identifications Λ^2 ℝ^3 = 𝔰𝔲(2) = ℝ⊕ℂ,
α + dt ∧β↦ a i - b_2 j + b_1 k ↦ (a, -b_2 + i b_1).
Observe that a = ∗α and (- b_2 + ib_1) dz̅ = 2i β^0,1, where β^0,1 is the (0,1)-part of β. It follows that under the splittings (<ref>) and (<ref>) the isomorphism
( Λ^2 Σ) ⊕( Λ^1 S^1 ⊗Λ^1 Σ) ≅Λ^0 Σ⊕ K^-1
is the direct sum of the Hodge star Λ^2 Σ→Λ^0 Σ and the map Λ^1 Σ→Λ^0,1Σ taking a one-form β to 2i β^0,1. Thus, F_A_Y = μ(Ψ) is equivalent to the last two equations in (<ref>).
Since it is more common to consider holomorphic rather than aholomorphic sections, we can complete the picture by considering the conjugate bundle
𝕄̄ = (Q × P × K^1/2) ×_G × H × U(1) M = V ⊗ K^1/2 = 𝕄⊗ K.
We have the Dolbeault operators
∂_AB: Γ(Σ, 𝕄) = Γ(Σ, V ⊗ K^-1/2) ⟶Γ(Σ, V ⊗ K^1/2 ) = Γ(Σ, 𝕄̄),
∂̄_AB: Γ(Σ, 𝕄̄) = Γ(Σ, V ⊗ K^1/2) ⟶Γ(Σ, V ⊗ K^-1/2 ) = Γ(Σ, 𝕄),
and the maps σ: 𝕄→𝕄̄ and σ: 𝕄̄→𝕄 that intertwine them:
σ∂_AB = ∂̄_ABσ .
Thus, σ maps aholomorphic sections of 𝕄 to holomorphic sections of 𝕄̄ and vice versa. It follows from the Kähler identities that
∂̄_AB = - ∂_AB^*,
where ∂_AB^* is the formal adjoint of ∂_AB.
Using σ, we can rewrite (<ref>) as a system of equations for Ψ̄ := σ(Ψ) ∈Γ(Σ,𝕄̄):
{[ ∂̄_ABΨ̄ = 0,; μ̄_ℂ(Ψ̄) = 0,; ∗ F_A + μ_ℝ(Ψ̄) = 0. ].
This is an example of a symplectic vortex equation discussed in <cit.>. The target singular symplectic space[For example, for the classical Seiberg–Witten equation μ^-1_ℂ(0) = { (x,y) ∈ℂ^2 | xy = 0 }.] is the zero locus μ_ℂ^-1(0) ⊂ M.
We now proceed to the main result of this section.
Suppose that conditions (A), (B), (C) listed at the beginning of this subsection are satisfied. If (A_Y, Ψ) is an irreducible solution of the generalised Seiberg–Witten equation (<ref>), then P → Y is pulled-back from a bundle over Σ (that is: condition (D) is satisfied) and (A_Y, Ψ) is gauge-equivalent to a circle-invariant configuration obeying equation (<ref>).
Theorem <ref> asserts that there is a natural one-to-one correspondence between gauge equivalence classes of solutions of the Seiberg–Witten equation on Y = S^1 ×Σ with the hyperkähler target M and gauge equivalence classes of solutions of the symplectic vortex equation on Σ with the Kähler target μ_ℂ^-1(0).
Assume for simplicity that the flavour symmetry H is trivial—the general proof is the same after adjusting the notation. Identify S^1 with [0,1] with the endpoints glued together. Pull-back the data on Y=S^1 ×Σ to one-periodic data on [0,1] ×Σ. Since [0,1] ×Σ is homotopy equivalent to Σ, there is a principal G-bundle P_Σ→Σ and a gauge transformation g ∈(Σ, P_Σ) such that P is the quotient of [0,1] × P_Σ by the relation (0,p) ∼ (1, g(p)). The isomorphism class of P depends only on the homotopy class of g. Similarly, 𝕄 = (P × S) ×_G × SU(2) M over Y is obtained from pulling-back 𝕄_Σ = (P_Σ× K^-1/2)_G × U(1) M to [0,1] ×Σ and identifying the fibres over 0 and 1 using g. As before there is an anti-linear map σ𝕄_Σ⊗ K →𝕄_Σ.
For A_Y ∈(Y,P) and Ψ∈Γ(Y, 𝕄) we have
A_Y = A(t) + b(t) dt, Ψ = Ψ(t),
where A(t), b(t), and Ψ(t) are families of connections and sections on Σ, as discussed earlier. The only difference now is that the families are periodic with respect to the action of g:
A(1) = g ( A(0) ), b(1) = g (b(0) ), Ψ(1) = g( Ψ(0) ).
Define a gauge transformation h over [0,1] ×Σ by
h(t) = exp( ∫_0^t b(s) ds ).
h does not necessarily descend to an automorphism of P → Y; this happens if and only if h(1) = Ad_g(h(0)) = id. In any case, h is well-defined over [0,1]×Σ and the new connection
C := h(A_Y) = A_Y - h^-1 d_A_Y h = h(t)( A(t) )
does not have a dt part—it is in a temporal gauge. Thus, it is identified with a path of connections { C(t) }_t ∈ [0,1] on P_Σ satisfying C(1) = h(1)g(C(0)). Likewise, we identify the section h(Ψ) with a path {Φ(t) }_t∈[0,1] of sections of 𝕄_Σ→Σ satisfying Φ(1) = h(1)g(Φ(0)).
By Proposition <ref>, the Seiberg–Witten equation for (C,Φ) is equivalent to
{[ i ∂Φ/∂ t + √(2)σ( ∂_C Φ) = 0,; ( ∂ C/∂ t)^0,1 = - i/2μ_ℂ( Φ ),; ∗ F_C = μ_ℝ(Φ). ].
Differentiating the first equation with respect to t and using (<ref>), we obtain
0 = i ∂^2 Φ/∂ t^2 + √(2)σ{( ∂ C/∂ t)^1,0Φ + ∂_C ( ∂Φ/∂ t) }
= i ∂^2 Φ/∂ t^2 + √(2)/2σ( iμ̄_ℂ (Φ) Φ) + 2 σ i∂_C σ∂_C Φ
= i ∂^2 Φ/∂ t^2 - √(2)/2 i μ_ℂ(Φ) σΦ - 2 i σ∂_C σ∂_C Φ.
We have used the anti-linearity of σ and the fact that ∂ C / ∂ t is a real 𝔤–valued one-form, so its (1,0) part is conjugate to the (0,1) part. Multiplying the obtained identity by i and taking the pointwise inner product with Φ yields
0 = ⟨ - ∂^2 Φ/∂ t^2, Φ⟩ + √(2)/2⟨μ_ℂ(Φ) σΦ, Φ⟩ + 2 ⟨σ∂_C σ∂_C Φ , Φ⟩.
By formula (<ref>) the second term simplifies to √(2)/2 | μ_ℂ(Φ)|^2. Remark <ref> implies that
σ∂_C σ∂_C = ∂̄_C ( σ^2 ) ∂_C = - ∂̄_C ∂_C = ∂_C^* ∂_C.
We conclude that
⟨σ∂_C σ∂_C Φ , Φ⟩_L^2(Σ) = ⟨∂_C^* ∂_C Φ, Φ⟩_L^2(Σ) = ‖∂_C Φ‖_L^2(Σ)^2.
For a fixed value of t integration of (<ref>) over Σ yields
0 = ∫_Σ⟨ - ∂^2 Φ/∂ t^2 , Φ⟩vol_Σ + √(2)/2‖μ_ℂ( Φ) ‖_L^2(Σ)^2 + 2 ‖∂_C Φ‖_L^2(Σ)^2.
Integrate the last equality by parts with respect to t ∈ [0,1]. The boundary terms vanish because Φ is periodic up to the action of h(1)g which preserves the inner product. We obtain
0 = ‖∂Φ/∂ t‖^2_L^2 + √(2)/2‖μ_ℂ(Φ) ‖^2_L^2 + 2 ‖∂_C Φ‖^2_L^2,
which shows that
∂Φ/∂ t = 0, ∂ C/∂ t = 0.
Thus, the families C(t) = C and Φ(t) = Φ are constant and
C = C(1) = k ( C(0) ) = k(C), Φ = Φ(1) = k( Φ(0) ) = k(Φ)
for the gauge transformation k = h(1)g over Σ. The first equality implies d_C k = 0, so k is covariantly constant. On the other hand, by irreducibility, there exists a point x ∈Σ such that the G–stabiliser of Φ(x) is trivial. Hence, k(x) = id, so k = id everywhere and g = h(1)^-1. The path h(t)^-1 is a homotopy of gauge transformations connecting g with h(0)^-1 = id and so P → Y is pulled-back from P_Σ→Σ. In particular, we could have chosen g=id, then h(1) = id and h descends to a gauge transformation of P mapping (A_Y, Ψ) to the circle-invariant solution (C,Φ). By Proposition <ref>, (C,Φ) satisfies equation (<ref>).
Much of this discussion can be extended to the setting when M is a hyperkähler manifold with an isometric (2)–action rotating the sphere of complex structures. The Dirac operator _AB and equation (<ref>) have natural generalisations <cit.>. For Y = S^1 ×Σ one introduces the non-linear Dolbeault operator ∂_AB as in <cit.> so that Lemma <ref> and Proposition <ref> hold. However, our proof of Theorem <ref> makes use of the vector space structure on M and does not immediately generalise to the non-linear setting. We expect the result to be true but in the proof one should use the Weitzenböck formulae for non-linear Dirac operators <cit.>.
§.§ An abelian vortex equation
We apply Theorem <ref> to the Seiberg–Witten equation with multiple spinors. In this case, M = ℍ⊗ℂ^n, H = SU(n), and G = U(1). The action of SU(n) on M is the transpose of the standard representation. We identify ℍ with ℂ⊕ℂ via
a + b i + cj + dk = (a + b i) + (c - d i) j ↦ ( a + bi, c + di) ,
so that the action of U(1) from the left preserves the complex structure given by the left multiplication by i. Then M is identified with ℂ^n ⊕ℂ^n and the moment maps are
μ_ℝ(x,y) = i ( |x|^2 - |y|^2), μ_ℂ(x,y) = y^* x.
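Note, as an immediate check, that the fibrewise zero locus of the full moment map is
μ^-1(0) = { (x,y) ∈ℂ^n ⊕ℂ^n | |x| = |y| and y^* x = 0 };
these are precisely the pointwise constraints |α| = |β| and αβ = 0 that will reappear for limiting configurations and Fueter sections below.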
Suppose that E → Y and L → Y are pulled back from bundles over Σ, and so is the background connection B. We form the associated bundle
𝕄 = (E × L × K^1/2) ×_SU(n) × U(1) M = (E ⊗ L^* ⊗ K^1/2) ⊕ (E^* ⊗ L ⊗ K^1/2).
Every circle-invariant section Ψ of 𝕄 can be written in the form Ψ = (β, α) for
α∈Γ(Σ, E^* ⊗ L ⊗ K^1/2) and β∈Γ(Σ, E ⊗ L^* ⊗ K^1/2) .
The real moment map is μ_ℝ(β, α) = i( |β|^2 - |α|^2) and the complex moment map is
Γ(Σ, E^* ⊗ L ⊗ K^1/2 ) ×Γ(Σ, E ⊗ L^* ⊗ K^1/2) →Γ(Σ, K),
(α, β) ↦αβ.
Suppose that the closed one-form η used to perturb (<ref>) is pulled back from Σ. By Remark <ref>, for circle-invariant configurations equation (<ref>) reduces to
{[ _ABα = 0,; _ABβ = 0,; αβ = 0,; i ∗ F_A + | α |^2 - |β|^2 - i ∗η = 0. ].
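As an illustration of (<ref>), consider n = 1 with E and B trivial: then α and β are holomorphic sections of L ⊗ K^1/2 and L^* ⊗ K^1/2, and αβ is a holomorphic section of K. Since Σ is connected, αβ = 0 forces α≡ 0 or β≡ 0, recovering the familiar dichotomy for the classical Seiberg–Witten equation in the Kähler setting.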
Now is a good point to introduce the space _Σ appearing in Theorem <ref>.
Let Y = S^1 ×Σ be equipped with a spin structure pulled-back from Σ. Let E → Y be an SU(n)–bundle pulled-back from Σ.
We introduce the following Fréchet manifolds equipped with the C^∞–topology
_Σ the space of Riemannian metrics on Σ,
_Σ the space of closed imaginary-valued two-forms on Σ
_Σ := (Σ, E) ×_Σ the space of perturbations pulled-back fromΣ,
The space of parameters of the equation pulled-back from Σ is
_Σ := _Σ×_Σ.
We consider it as a subspace of the full parameter space introduced in <ref>.
Assume the situation described in Definition <ref>.
If (A, Ψ) is an irreducible solution of the Seiberg–Witten equation with multiple spinors (<ref>) with respect to a Riemannian metric g ∈_Σ and a parameter σ = (B,η) ∈_Σ, then
* L is pulled back from a bundle over Σ, and
* (A,Ψ) is gauge-equivalent to a circle-invariant configuration satisfying equation (<ref>).
§ A HOLOMORPHIC DESCRIPTION OF THE MODULI SPACE
The main result of this section, Theorem <ref> below, identifies the moduli space of Seiberg–Witten monopoles on Y = S^1 ×Σ with a certain moduli space of holomorphic data on Σ.
§.§ Bryan–Wentworth moduli spaces
In the situation of Proposition <ref>, the moduli space ^* = ^*(g,σ) has the following description. Let
_Σ = (Σ, L) ×Γ(Σ, E^* ⊗ L ⊗ K^1/2) ×Γ(Σ, E ⊗ L^* ⊗ K^1/2).
Consider the subspace of _Σ consisting of triples (A,α,β) satisfying equations (<ref>) and the condition (α,β) ≠ (0,0). The gauge group (Σ) = C^∞(Σ, S^1) acts freely on this subspace. By Proposition <ref> and Lemma <ref>, the quotient is homeomorphic to ^*.
The next result extends the work of Bryan and Wentworth <cit.> who described Seiberg–Witten multi-monopoles on Kähler surfaces under the assumption that the background bundle E is trivial and B is the product connection. Before stating the theorem, we introduce
d = deg(L) := ⟨ c_1(L), [Σ] ⟩ and τ := ∫_Σi η/2π.
If d - τ < 0, then the last equation of (<ref>) forces α to be non-zero, for
0 = ∫_Σ{ i F_A - i η + (| α |^2 - | β |^2) vol_Σ} = 2π( d - τ ) + α_L^2^2 - β_L^2^2.
Likewise, if d - τ > 0, then β must be non-zero. In both cases there are no reducible solutions and = ^*. When d - τ = 0, either both α and β are non-zero or both of them vanish, yielding a reducible solution.
Recall that in dimension two every unitary connection equips the underlying vector bundle with a holomorphic structure. In particular, K^1/2 and K are holomorphic line bundles.
Denote by _B →Σ the holomorphic (n,)–bundle obtained from E^* with the dual connection B^*. We will write simply in a context in which B is fixed.
If d - τ < 0, then ^* is isomorphic, as a real analytic space, to the moduli space _ of triples (ℒ, α, β) consisting of
* a degree d holomorphic line bundle ℒ→Σ,
* holomorphic sections
α∈ H^0(Σ, _B ⊗ℒ⊗ K^1/2 ) and β∈ H^0(Σ, _B^* ⊗ℒ^* ⊗ K^1/2)
satisfying α≠ 0 and αβ = 0 ∈ H^0(Σ, K).
Two such triples (ℒ,α,β) and (ℒ', α', β') correspond to the same point in _ if there is a holomorphic isomorphism ℒ→ℒ' mapping α to α' and β to β'.
The statement still holds when d - τ≥ 0 with the difference that for d - τ > 0 it is β instead of α that is required to be non-zero and for d - τ = 0 both α and β are required to be non-zero.
The next few paragraphs are occupied with a construction of _. A related construction was considered in <cit.>. The first three equations in (<ref>),
{[ _ABα = 0,; _ABβ = 0,; αβ = 0, ].
are invariant under the action of the complexified gauge group ^c(Σ) := C^∞(Σ, ℂ^*) of complex automorphisms of L. The action of g: Σ→ℂ^* on (A,α,β) ∈_Σ is given by
g(A, α, β) = (A + g^-1∂ g - g^-1∂̄ g, g α, g^-1β).
In terms of the associated Dolbeault operators we have
[ _g(A)B = g _B A g^-1 on Γ(Σ, E^* ⊗ L ⊗ K^1/2),; _g(A)B = g^-1_BA g on Γ(Σ, E ⊗ L^* ⊗ K^1/2). ]
Consider the subspace of _Σ consisting of triples (A,α,β) satisfying equations (<ref>) and subject to the condition
{[ α≠ 0 if d - τ < 0,; β≠ 0 if d - τ > 0,; α≠ 0 and β≠ 0 if d - τ = 0. ].
We define _ to be the quotient of this subspace by the action of ^c(Σ). The points of _ parametrise the isomorphism classes of triples ( ℒ, α, β ) considered in Theorem <ref>.
The moduli space _ depends on the conformal class of the metric g on Σ, the holomorphic bundle _B, the degree d of L, and the sign of d - τ.
Employing the methods of subsection <ref>, one shows that _ is metrisable, second countable, and has a natural complex analytic structure given by local Kuranishi models as in Remark <ref>. The discussion is almost the same as that for the Seiberg–Witten equation, so we only outline the details. To set up the Fredholm theory, consider the modified equation
{[ _ABα + i f β̅ = 0,; _ABβ - i f α̅ = 0,; αβ + ∂ f = 0. ].
A solution of (<ref>) is a quadruple (A, α, β, f) where A, α, and β are as before and f ∈ C^∞(Σ, ). The equation is elliptic modulo the action of ^c(Σ). An analogue of Proposition <ref> is
If (A,α,β,f) is a solution of (<ref>) with (α,β) ≠ 0, then f = 0.
Using the linearisation of (<ref>) together with the complex Coulomb gauge fixing we represent _ as the zero set of a Fredholm section. The local structure of the moduli space is encoded in the elliptic complex at a solution (A, α, β, 0):
Ω^0(ℂ) ⟶^G^c_A,α,β Ω^0,1⊕Γ(E^* ⊗ L ⊗ K^1/2) ⊕Γ(E ⊗ L^* ⊗ K^1/2) ⊕Ω^0(ℂ) ⟶^F_A,α,β Γ(E^* ⊗ L ⊗ K^-1/2) ⊕Γ(E ⊗ L^* ⊗ K^-1/2) ⊕Ω^1,0
where G^c_A,α,β is the linearised action of the complexified gauge group
G^c_A,α,β(h) = (- ∂̄ h, hα, -hβ, 0),
and F_A,α,β is the linearisation of equations (<ref>):
F_A,α,β(a^0,1, u, v, t) =
(
[ _AB u + a^0,1α + i t β̅; _AB v - a^0,1β - i t α̅; u β + α v + ∂ t ]).
Even though the map given by the left-hand side of (<ref>) is not holomorphic, its derivative F_A,α,β at a solution (A,α,β,0) is complex linear, and so the cohomology groups H^0_A,α,β, H^1_A,α,β, H^2_A,α,β are complex vector spaces. If the solution is irreducible, then H^0_A,α,β = 0. We are left with complex vector spaces H^1_A,α,β and H^2_A,α,β of the same dimension. They have the following description; the proof is yet another variation of that of Proposition <ref>.
Let (A,α, β, f) be a solution of (<ref>) with (α, β) ≠ 0 and f = 0. Then the deformation space H^1_A,α,β is the quotient of the space of solutions
(a^0,1, u, v) ∈Ω^0,1() ⊕Γ(E^* ⊗ L ⊗ K^1/2 ) ⊕Γ(E ⊗ L^* ⊗ K^1/2),
{[ _AB u + a^0,1α = 0,; _AB v - a^0,1β = 0,; u β + α v = 0. ].
by the subspace generated by ( - h, hα, - hβ) for h ∈Ω^0(). The obstruction space H^2_A, α, β is canonically isomorphic to the dual space (H^1_A, α, β)^* as complex vector spaces.
The analytic structure on a neighbourhood of [A,α,β] in _ is induced from a Kuranishi map κ: H^1_A,α,β→ H^2_A,α,β. Since the derivative of F_A,α,β is complex linear at a solution, κ can be taken to be complex analytic, which shows that _ is a complex analytic space.
§.§ A homeomorphism between the moduli spaces
Since equation (<ref>) is part of (<ref>) and (Σ) is a subgroup of ^c(Σ), every point of ^* gives rise to a point in _.
The natural map ^* →_ is a homeomorphism.
The proof relies on a generalisation of a classical theorem of Kazdan and Warner.
Let X be a compact Riemannian manifold and let P, Q, and w be smooth functions on X with P and Q non-negative, and
∫_X P - Q > 0, ∫_X w > 0.
Then the equation
Δ u + P e^u - Q e^-u = w
has a unique solution u ∈ C^∞(X).
We refer to <cit.> for the proof of Lemma <ref>, but let us remark that it can be easily extended to the case when ∫ w = 0 and both P and Q are not identically zero (without the assumption on the sign of ∫ P - Q). First, observe that if we replace P and Q by P' = e^C P and Q' = e^-C Q respectively, then solving the corresponding equation
Δ u + P' e^u - Q' e^-u = w
is equivalent to solving the original equation with P and Q. Indeed, if u solves the former, then the function u + C is a solution the latter. Since both P and Q are not identically zero, their integrals are positive and by choosing the constant
C = 1/2log( ∫_X Q /∫_X P),
we can guarantee that ∫ P' - Q' = 0. Thus, we may as well assume that this holds for the original functions P and Q. After this adjustment, we simply repeat the proof from <cit.>. The only difference is the construction of sub- and super-solutions, that is functions u_- and u_+ satisfying u_- ≤ u_+ and
Δ u_- + Pe^u_- - Q e^-u_- - w ≤ 0,
Δ u_+ P e^u_+ - Q e^-u_+ - w ≥ 0.
Let v_1 and v_2 be solutions of Δ v_1 = w and Δ v_2 = - P + Q. Choose a constant M such that M ≥sup | v_1 + v_2| and set
u_+ = v_1 + v_2 + M and u_- = v_1 + v_2 - M.
Then clearly u_- ≤ u_+ and
Δ u_- + Pe^u_- - Q e^-u_- - w = P ( e^v_1 + v_2 - M -1 ) + Q (1- e^-v_1 - v_2 + M) ≤ 0,
Δ u_+ + Pe^u_+ - Q e^-u_+ - w = P( e^v_1 + v_2 + M - 1) + Q ( 1 - e^-v_1 - v_2 -M) ≥ 0.
The remaining part of the proof from <cit.> goes through unchanged.
It is clear that the map ^* →_ is continuous, so it remains to construct a continuous inverse _→^*.
Let (A, α, β) be a solution of (<ref>). As in <cit.>, we seek h ∈^c(Σ) such that h (A, α, β) = (A', α', β') also satisfies the third equation of (<ref>). We can assume h = e^f for f: Σ→ℝ. We have
(A', α', β') = (A - ∂̄ f + ∂ f, e^f α, e^-fβ)
so the curvature of A' is
F_A' = F_A - 2 ∂∂̄ f = F_A - i ∗Δ f,
where Δ is the positive definite Hodge Laplacian. Thus, (<ref>) for (A', α', β') is equivalent to
0 = i ∗ F_A' + |α'|^2 - |β'|^2 - iη
= Δ f + e^2f | α |^2 - e^-2f | β |^2 + i (∗ F_A - η).
Assume d - τ< 0 and set P = | α |^2, Q = | β|^2 and w = -i ∗ (F_A - η). We need to solve
Δ f + P e^2f - Q e^-2f = w.
If d - τ < 0, then α is assumed to be non-zero. After applying a gauge transformation of the form h = e^C for C constant, we may assume that
∫_Σ (P-Q) = ∫_Σ | α |^2 - | β|^2 > 0.
Moreover, we have
∫_Σ w = - ∫_Σ( i F_A - η) = -2π (d - τ) > 0.
The hypotheses of Lemma (<ref>) are satisfied and there is a unique solution f ∈ C^∞(Σ) to (<ref>). This shows that there exists h ∈^c(Σ), unique up to an element of (Σ), mapping (A,α, β) to a solution of (<ref>). If d - τ> 0 the proof is the same with P = | β |^2, B = |α |^2, w = i ∗( F_A - η) and f replaced in the equation by -f. The case d = 0 follows from Remark <ref>.
This gives us an inverse to ^* →_; it remains to show that it is continuous. Let [A_i, α_i, β_i] be a convergent sequence of points in _. Let (A_i', α_i', β_i') be the corresponding solutions of (<ref>). There is a sequence h_i = u_i e^f_i such that h_i (A_i', α_i', β_i') converges in _Σ. The functions f_i satisfy (<ref>) with coefficients P_i, Q_i, w_i converging in C^∞(Σ). It follows from the proof of <cit.> that for every k there is a C^k bound for f_i, independent of i. By the Arzelà–Ascoli theorem, after passing to a subsequence, f_i converges in C^∞(Σ). It follows that [A_i',α_i',β_i'] converges in ^*, which proves the continuity of _→^*.
§.§ Proof of Theorem <ref>
It remains to compare the deformation theories of the two moduli spaces to show that the homeomorphism ^*→_ is an isomorphism of real analytic spaces.
^* is isomorphic to the moduli space _Σ of solutions of (<ref>).
Let _Σ be the space of (Σ)–orbits of triples
(A, α, β) ∈(Σ,L) ×Γ(Σ, E^* ⊗ L ⊗ K^1/2) ×Γ(Σ, E ⊗ L^* ⊗ K^1/2)
satisfying (α,β) ≠ 0 and
{[ _ABα = 0,; _ABβ = 0,; αβ = 0,; i ∗ F_A + | α |^2 - |β|^2 - i ∗η = 0 ].
Endow _Σ with a real analytic structure using local Kuranishi models[As in the constructions of and _ one introduces an extra term f ∈ C^∞(Σ, iℝ) to make the equation elliptic modulo gauge. We can ignore this because analogues of Proposition <ref> and Lemma <ref> hold.]. Let ^*_Σ and ^* be the spaces of irreducible configurations (A,Ψ) over Σ and Y respectively. It follows from Proposition <ref> and Lemma <ref> that the inclusion
ℬ^*_Σ = ^*_Σ / (Σ) ↪^* / (Σ) = ℬ^*
induces a homeomorphism _Σ→^*. The Seiberg–Witten moduli space ^* is, at least locally, given as the zero set of a Fredholm section s of a bundle over ℬ^*. On the other hand, the restriction of s to ℬ^*_Σ gives a Fredholm section defining _Σ. In order to show that the induced real analytic structures agree we need to prove
H^1_A,Ψ = ds_(A,Ψ) = d (sℬ^*_Σ)_(A,Ψ)
for every [A, Ψ] ∈_Σ = ^*. The corresponding equality of cokernels follows then from the natural isomorphism between H^1_A,Ψ and H^2_A,Ψ (and likewise for the equations over Σ).
Equality (<ref>) is the linearised version of Theorem <ref>. Let (A,Ψ) be a circle-invariant solution. According to Lemma <ref>, H^1_A,Ψ = S_A,Ψ⊕ G_A,Ψ^*, where S_A,Ψ is the linearisation of the equation without the extra term f and G_A,Ψ is the infinitesimal gauge group action. Using Proposition <ref>, we identify H^1_A,Ψ with the space of pairs
(a(t) + b(t) dt, ϕ(t)) ∈Γ(S^1 ×Σ, Λ^1(i ) ⊕Λ^0( i ) ⊕ (E^* ⊗ S ⊗ L ) )
satisfying
{[ i ( ∂ϕ/∂ t + b Ψ) + √(2)σ( ∂_ABϕ + a^1,0Ψ) = 0,; ∂ a^1,0/∂ t + ∂ b - i μ_(Ψ, ϕ) = 0,; ∗ da + 2μ_ ( Ψ, ϕ) = 0,; - d^* a - ∂ b/∂ t + i ⟨Ψ, ϕ⟩ = 0. ].
Equality (<ref>) will be established by showing that any solution (a + bdt, ϕ) satisfies
∂ a/∂ t = 0, ∂ϕ/∂ t = 0, b = 0.
This is done in the same way as in the proof of Theorem <ref>. First, apply ∂/∂ t to the first two equations, then get rid of the terms ∂ϕ/∂ t, ∂ b/∂ t, and ∂ a^1,0/∂ t. This results in
- ∂^2 ϕ/∂ t^2 + 2 ∂_AB^* ∂_ABϕ - i ⟨Ψ, ϕ⟩Ψ + d^* a ·Ψ + 2 (∂^* a^1,0 ) Ψ + √(2)σμ_(Ψ, ϕ) Ψ = 0,
- ∂^2 a^1,0/∂ t^2 + ∂∂^* a^1,0 + i ∂⟨Ψ, ϕ⟩ + √(2)μ_ ( Ψ, σ∂ϕ) + √(2)μ_ ( Ψ, σ a^1,0Ψ ) = 0.
Take the real L^2–product of the first equation with ϕ and the second equation with a^1,0. Integrating by parts as in the proof of Theorem <ref>, we obtain
‖∂ϕ/∂ t‖^2_L^2 + ‖∂ a/∂ t‖^2_L^2 + 2 ‖∂_ABϕ + a^1,0Ψ‖_L^2^2 + ‖ - d^*a + i ⟨Ψ, ϕ⟩‖^2_L^2 + √(2)‖μ_ℂ(Ψ, ϕ) ‖_L^2^2 = 0.
We have used identity (<ref>) to relate μ_ℂ to the inner product. Thus, we have proved that b = 0, and that ϕ and a are pulled-back from Σ and satisfy
{[ ∂_ABϕ + a^1,0Ψ = 0,; μ_ℂ(Ψ, ϕ) = 0,; ∗ da + 2μ_ℝ ( Ψ, ϕ) = 0,; - d^* a - i ⟨Ψ, ϕ⟩ = 0. ].
Recall that with the conventions of subsection <ref> we identify Ψ with a pair (α, β). After a conjugation, equation (<ref>) translates to the following equation for a and ϕ = (u,v):
{[ _AB u + a^0,1α = 0,; _AB v - a^0,1β = 0,; α v + u β = 0,; ∗ i da + 2 ⟨α, u ⟩ - 2 ⟨β, v ⟩ = 0,; - d^* a - i ⟨α, u ⟩ - i ⟨β, v ⟩ = 0. ].
This is the linearisation of (<ref>) together with the Coulomb gauge fixing condition. We conclude that (<ref>) holds and ^* is isomorphic to _Σ as real analytic spaces.
_Σ is isomorphic to _.
The proof is similar to that of <cit.>, so we only outline the argument. As before, the main point is to show an isomorphism of the deformation spaces for _Σ and _. The former is given by (<ref>) and the latter consists of solutions of the first three equations together with a choice of a local slice for the action of ^c(Σ). The Lie algebra of ^c(Σ) splits as the direct sum of C^∞(Σ, ℝ) and the Lie algebra of (Σ). Under this splitting, we can choose a slice of the ^c(Σ)–action imposing the standard Coulomb gauge condition for (Σ), which is the last equation of (<ref>), together with a choice of a slice for the action of C^∞(Σ,ℝ):
e^f (A, α, β) = (A + ∂ f - f, e^f α, e^-fβ).
The linearisation of this action at (A,α,β) is
f ↦ (- ∂̄ f + ∂ f, f α, -f β).
A local slice for the action of C^∞(Σ,ℝ) can be obtained from any subspace of
Ω^1(iℝ) ⊕Γ(E^* ⊗ L ⊗ K^1/2) ⊕Γ(E ⊗ L^* ⊗ K^1/2)
which is complementary to the image of (<ref>). Hence, to show that the deformation spaces of _Σ and _ are isomorphic it is enough to prove that the subspace given by
i ∗ da + 2 ⟨α, u ⟩ - 2 ⟨β, v ⟩ = 0
is complementary to the image of (<ref>). In other words, we need to know that for any triple (a,u,v) there is a unique function f ∈ C^∞(Σ,ℝ) such that
0 = i ∗ d(a - ∂̄ f + ∂ f) + 2 ⟨α, u + fα⟩ - 2 ⟨β, v - f β⟩
= {Δ + 2( | α |^2 + | β |^2) } f + i ∗ da + 2 ⟨α, u ⟩ - 2 ⟨β, v ⟩.
This is true because (α, β) ≠ 0 and so the operator Δ + 2( |α|^2 + | β|^2) is invertible on C^∞(Σ,ℝ). In the same way as in <cit.> we conclude that (<ref>) provides a local Fredholm model for both _Σ and _, and so the two spaces have isomorphic analytic structures.
§ A TALE OF TWO COMPACTIFICATIONS
The goal of this section is to define natural compactifications of ^* and _ and to extend the isomorphism ^*≅_ to a homeomorphism between these compactifications, thus completing the proof of Theorem <ref>. We assume d - τ < 0, so in particular = ^*. The discussion can be easily adapted to the cases d - τ = 0 and d - τ > 0.
§.§ A complex-geometric compactification

_ has a natural compactification analogous to the one described in <cit.>. Consider the subspace S ⊂_Σ×ℂ given by
S := { (A, α, β, t) | (A,α,β) satisfies equations (<ref>), α≠ 0, and (β, t) ≠ (0,0) }.
The group ^c(Σ) ×ℂ^* acts freely on S by the standard action of the first factor on _Σ and
λ(A, α, β, t) = (A, α, λβ, λ t) for λ∈ℂ^×.
We define _ to be the quotient of S by ^c(Σ) ×^*.
This is analogous to compactifying ℂ^N by ℂℙ^N, which is the quotient of (ℂ^N ×ℂ) ∖{ (0,0) } by the free action of ℂ^*; in fact, _ is obtained by applying this construction fibrewise.
Let be the subspace of _ consisting of triples of the form (A, α, 0). Equivalently, is the space of ^c(Σ)–orbits of pairs (A,α) satisfying _ABα = 0 and α≠ 0.
We will see momentarily that is compact. The natural projection (A, α, β) ↦ (A, α) induces a surjective map π: _→. Let [A, α] ∈ and denote by ℒ_A the holomorphic structure on L induced by A. The fibre π^-1([A, α]) is the kernel of the homomorphism
H^0(Σ, _B^* ⊗ℒ_A ⊗ K^1/2) ⟶ H^0(Σ, K)
given by pairing with α. The compactification _ is obtained by replacing each fibre ker α with the projective space ℙ( ker α⊕ℂ) containing it.
The space _ is metrisable, compact, and contains _ as an open dense subset. Moreover, the complex analytic structure on _ extends to a complex analytic structure on _ with respect to which _ is Zariski open.
It is clear that _ is metrisable and _⊂_ is open and dense. In order to show that _ is compact, consider a sequence [A_i, α_i, β_i, t_i] ∈_; we need to argue that there are sequences h_i ∈^c(Σ) and λ_i ∈^* such that after passing to a subsequence h_i λ_i (A_i, α_i, β_i, t_i) converges smoothly in S. This is the content of Step 1 in <cit.>.
Let _Σ^* ⊂_Σ be the subset of configurations (A, α, β) with α≠ 0. The group ^c(Σ) acts freely on _Σ^* with quotient ℬ_Σ^*, a complex Banach manifold. There is a holomorphic vector bundle 𝒲→ℬ_Σ^* such that _ is the zero set of a holomorphic Fredholm section 𝒮ℬ_Σ^* →𝒲. The zero set of such a section carries a natural complex analytic structure <cit.>. The complex analytic structure on _ is extended to _ by extending 𝒮 to a Fredholm section whose zero set is _. Replace _Σ^* by the subspace of _Σ× consisting of quadruples (A, α, β, t) for which α≠ 0 and (β, t) ≠ (0, 0). Let ℬ_Σ^* be the quotient of this space by the action of ^c(Σ) ×^*: it contains ℬ_Σ^* as an open subset. Let 𝒲→ℬ^*_Σ be the vector bundle obtained as the quotient of 𝒲×^* by the lifted action of ^c(Σ) ×^*. There is a holomorphic Fredholm section 𝒮 extending 𝒮 so that _ = 𝒮^-1(0). When restricted to the open subset ℬ_Σ^*, this reduces to the construction of _ described above, so the inclusion _⊂_ is compatible with the induced analytic structures. Moreover, _∖_ is the intersection of 𝒮^-1(0) with the analytic subset ℬ_Σ^*∖ℬ_Σ^* given by the equation t = 0. We conclude that _∖_ is an analytic subset of _, and so _ is Zariski open.
⊂_ is compact. Furthermore, _ is compact if and only if = _.
consists of equivalence classes [A, α, β] for which β = 0; it is compact by Step 1 in <cit.>. If = _, then _ is compact. To prove the converse, observe that if _ is non-compact, then _∖_ is non-empty by Proposition <ref>. On the other hand, _∖_ consists of ^c(Σ) ×ℂ^*–orbits of the form [A, α, β, 0] with β≠ 0, so every element of _∖ gives rise to an element of _∖_.
§.§ A gauge-theoretic compactification
For an arbitrary three-manifold a good compactification of the moduli space is yet to be constructed; see <cit.> for a discussion of the analytical difficulties involved in such a construction.
For Y = S^1 ×Σ we can overcome these obstacles thanks to a refined compactness theorem <cit.>.
For the remaining part of the paper we make the assumption that E is an SU(2)–bundle. In this case the description of Fueter sections simplifies <cit.>, <cit.>. The discussion below should easily generalise to the higher rank case.
If (A_i,Ψ_i = (α_i, β_i)) is a sequence of solutions of (<ref>) with ‖Ψ_i ‖_L^2→∞, then after passing to a subsequence and applying gauge transformations (A_i, Ψ_i / ‖Ψ_i ‖_L^2) converges in C^∞_loc on the complement of a finite set D = { x_1, …, x_N }. The limiting configuration (A, Ψ = (α, β)) is defined on Σ∖ D and satisfies
* ‖Ψ‖_L^2 = 1 and | Ψ | > 0 on Σ∖ D,
* _ABα = 0, _ABβ = 0, αβ = 0, and | α | = | β| on Σ∖ D.
* A is flat on Σ∖ D and has holonomy contained in ℤ_2.
* There are non-zero integers a_1, …, a_N such that ∑_k=1^N a_k = 2d and
∗i/2π F_A_i⟶1/2∑_k=1^N a_k δ_x_k
in the sense of measures.
* For each k=1, … , N we have
| Ψ (x) | = O( dist(x_k,x)^|a_k|/2 ).
Let D ⊂Σ be a finite set. We say that a gauge transformation in (Σ∖ D) or ^c(Σ∖ D) is simple if it has degree zero around each point of D. Denote by _0(Σ∖ D) and ^c_0(Σ∖ D) the subgroups of simple gauge transformations.
Let D ⊂Σ be a finite subset. With any flat connection A ∈(Σ∖ D, L) we associate a measure i ∗ F_A on Σ as follows. For x ∈ D let B ⊂Σ be a small disc centred at x and not containing other points of D. In a unitary trivialisation of LB we have A = d + a for a one-form a ∈Ω^1( B ∖{ x }, i ). Denote
q_x = ∫_∂ B ia.
The measure i ∗ F_A is defined by
i ∗ F_A := ∑_x ∈ D q_x δ_x.
One easily checks that i∗ F_A is well-defined and invariant under simple gauge equivalences.
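For illustration, here is a model computation. Suppose that L is trivialised over B ∖{ x } and that, in polar coordinates centred at x, A = d - iq dθ for some q ∈ℝ. Then a = -iq dθ, so
q_x = ∫_∂ B ia = ∫_∂ B q dθ = 2π q and i ∗ F_A = 2π q δ_x.
The holonomy of A around x is e^2π i q, which lies in ℤ_2 = {± 1 } exactly when 2q ∈ℤ; this matches the half-integer weights appearing in the definition of limiting configurations below.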
A limiting configuration is a triple (A, Ψ, D) comprising of a finite subset D = { x_1, … , x_N }⊂Σ, a connection A ∈(Σ∖ D,L), and a pair Ψ = (α, β) of nowhere-vanishing sections α∈Γ( Σ∖ D, E^* ⊗ L ⊗ K^1/2) and β∈Γ(Σ∖ D, E ⊗ L^* ⊗ K^1/2) satisfying
* ‖Ψ‖_L^2 = 1 and |Ψ| > 0 on Σ∖ D
* _ABα = 0, _ABβ = 0, αβ = 0, and | α | = | β| on Σ∖ D.
* A is flat on Σ∖ D and has holonomy contained in ℤ_2.
* There are non-zero integers a_1, …, a_N such that ∑_k=1^N a_k = 2d and
∗i/2π F_A = 1/2∑_k=1^N a_k δ_x_k as measures.
* For each k=1, … , N we have
| Ψ (x) | = O( dist(x_k,x)^|a_k|/2 ).
(A,Ψ,D) and (A',Ψ',D') are simple gauge equivalent if D = D' and they differ by an element u ∈_0(Σ∖ D). Let ℱ be the set of simple gauge equivalence classes of limiting configurations.
The space ℱ can be equipped with a natural topology in which a sequence [A_i, Ψ_i, D_i] converges to [A,Ψ,D] if and only if i ∗ F_A_i→ i ∗ F_A weakly as measures and, after applying a sequence in _0(Σ∖ D), we have A_i → A and Ψ_i →Ψ in C^∞_loc on Σ∖ D.
Let [A,Ψ,D] be an equivalence class in ℱ. For ϵ > 0, δ > 0, an integer k ≥ 0, and a continuous function f Σ→ we define V_ϵ,δ,k,f(A,Ψ,D) ⊂ℱ to be the set of the elements of ℱ which have a representative (A',Ψ',D') satisfying
* D' ⊂ D_ϵ where D_ϵ := { x ∈Σ | dist(x, D) < ϵ},
* A' - A _C^k(Σ∖ D_ϵ) < δ,
* Ψ' - Ψ_C^k(Σ∖ D_ϵ) < δ.
* | ∫_Σ (i ∗ F_A') f - ∫_Σ ( i ∗ F_A) f | < δ.
The family of subsets
{ V_ϵ,δ,k,f(A,Ψ,D) }
forms a base of a Hausdorff topology on ℱ.
The proof is a simple application of <cit.>.
The next step is to combine the moduli space and ℱ into one topological space. For this purpose it is convenient to identify points of the moduli space with gauge equivalence classes of triples (A, Ψ, t), where t ∈ (0, ∞), ‖Ψ‖_L^2 = 1, and
{[ _ABΨ = 0,; t^2 F_A = μ(Ψ). ].
The map (A, Ψ, t) ↦ (A, t^-1Ψ) gives a homeomorphism between the space of such classes and the moduli space ^*. Recall that in our setting there are no reducibles, so there is no boundary at t →∞. The boundary at t → 0 is obtained by gluing in the space of limiting configurations.
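This is a direct check: μ is homogeneous of degree two, so t^2 F_A = μ(Ψ) is equivalent to F_A = μ(t^-1Ψ), while t is recovered from the pair (A, t^-1Ψ) by ‖ t^-1Ψ‖_L^2 = t^-1.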
Let [A,Ψ,D] be an equivalence class in ℱ. For ϵ > 0, δ > 0, an integer k ≥ 0, and a continuous function f Σ→ we define
W_ϵ, δ, k, f(A,Ψ,D) ⊂
to be the set of the elements of that have a representative (A', Ψ', t) satisfying
* t < δ,
* A' - A _C^k(Σ∖ D_ϵ) < δ,
* Ψ' - Ψ_C^k(Σ∖ D_ϵ) < δ,
* | ∫_Σ ( i ∗ F_A')f - ∫_Σ (i ∗ F_A)f | < δ.
The compactified moduli space is
∪ℱ
equipped with the topology whose basis are the subsets
W̄_ϵ,δ,k,f(A,Ψ,D) := W_ϵ, δ, k,f(A,Ψ,D) ∪ V_ϵ, δ, k,f(A,Ψ,D).
Let be a base of the topology on . The family of subsets
{W_ϵ,δ,k(A,Ψ,D) }∪
forms a base of a Hausdorff topology on .
§.§ A homeomorphism at infinity
The main ingredient in the proof of Theorem <ref> is
The spaces _∖_ and ℱ are homeomorphic.
The proof of Proposition <ref> is preceded by auxiliary results about limiting configurations. The first of them is a complex-geometric analogue of the statement that a limiting configuration induces a flat connection with ℤ_2 holonomy.
Let (A, α, β) be a solution of (<ref>) with α≠ 0 and β≠ 0. Denote by ℒ the holomorphic line bundle (L, _A). Let D_1 and D_2 be the zero divisors of α and β respectively, and 𝒪(D_1 - D_2) the holomorphic line bundle associated to D_1 - D_2. There is a canonical holomorphic isomorphism
φ_αβ: 𝒪(D_1 - D_2) ⟶ℒ^2.
Recall that α is a holomorphic section of ⊗ℒ⊗ K^1/2 and β is a holomorphic section of ^* ⊗ℒ^* ⊗ K^1/2. Since αβ = 0 and the rank of is two, we have the short exact sequence
0 ⟶ℒ^-1⊗ K^-1/2⊗𝒪(D_1) ⟶^α ⟶^β ℒ^-1⊗ K^1/2⊗𝒪(-D_2) ⟶ 0.
The associated isomorphism of the determinant line bundles is
ℒ^-2⊗𝒪(D_1 - D_2) ≅ det ≅𝒪,
where the last isomorphism follows from the fact that is a holomorphic SL(2,ℂ)–bundle. Tensoring both sides with ℒ^2, we obtain the desired isomorphism φ_αβ. It is canonically determined by α and β.
The lemma below provides an upper bound on the number of components, counted with multiplicities, of the singular set of a limiting configuration.
There exists M ≥ 0, depending only on the holomorphic bundle , with the following significance. If (ℒ, α, β) is a triple as in Lemma <ref>, then
deg(D_1 + D_2) ≤ M.
Tensoring exact sequence (<ref>) with K^1/2 we see that ℒ^-1⊗𝒪(D_1) is a holomorphic subbundle of ⊗ K^1/2. It is an elementary fact that the degrees of line subbundles of a given holomorphic bundle are bounded above <cit.>. In fact, if ℱ is a holomorphic vector bundle and A ⊂ℱ a line subbundle, then
deg A ≤ h^0(Σ, ℱ) + 2g(Σ)-2,
where g(Σ) is the genus of Σ. (We will use this bound later.) Thus, we have an upper bound on the degree of ℒ^-1⊗𝒪(D_1). On the other hand, ℒ^2 is isomorphic to 𝒪(D_1 - D_2), so
deg( ℒ^-1⊗𝒪(D_1) ) = - degℒ + deg D_1 = 1/2 deg( D_1 + D_2 ),
which proves the lemma.
The next result will be useful in proving the convergence of measures.
Let f Σ→ be a continuous function, γ > 0, and D ⊂Σ a finite subset. Then there exist ϵ> 0 and δ > 0 with the following property. Suppose that D' ⊂Σ is another finite subset, and A and A' are two flat connections over Σ∖ D and Σ∖ D' respectively, satisfying
* D' ⊂ D_ϵ,
* A' - A _C^0(Σ∖ D_ϵ) < δ,
* the measures ∗ i F_A and ∗ i F_A' have integer weights.
Then we have
| ∫_Σ ( i∗ F_A' )f - ∫_Σ (i ∗ F_A)f | ≤γ i ∗ F_A'.
where i ∗ F_A' is the total variation of the measure i ∗ F_A' given by
i ∗ F_A' = ∑_x ∈ D' | q_x |
for i ∗ F_A' = ∑_x ∈ D' q_x δ_x .
Let D = { x_1, …, x_N } and let a_1, …, a_N be the integer weights of the measure i ∗ F_A as in Definition <ref>. Suppose that ϵ is small enough so that the discs B_i of radius ϵ centred at x_i are pairwise disjoint. Partition the set D' into disjoint subsets D_1', …, D_N' consisting of points within ϵ–distance from x_1, …, x_N respectively. For each i choose small disjoint discs E_i1, E_i2, … centred at points of D_i' and contained in B_i. Let b_i1, b_i2, … be the weights of the points in D_i' in the measure i ∗ F_A'.
In a unitary trivialisation of L over each B_i we have
A = d + a and A' = d + a',
where one-form a is defined on B_i ∖{ x_i } and a' is defined on B_i ∖ D_i'. By the hypothesis of the lemma a - a' _C^0(∂ B_i) < δ. Thus, for sufficiently small δ,
| a_i - ∑_j b_ij| =| ∫_∂ B_i i a - ∑_j ∫_∂ E_ij ia' | =
| ∫_∂ B_i ia - ∫_∂ B_i ia' | < 1.
Since all the numbers a_i, b_ij are integers, we conclude that
a_i = ∑_j b_ij.
For each i denote the points of D_i' by { x_ij}. Then
| ∫_Σ ( i∗ F_A' )f - ∫_Σ (i ∗ F_A)f | = | ∑_i a_i f(x_i) - ∑_i j b_ij f(x_ij) |
≤∑_ij |b_ij| | f(x_i) - f(x_ij) |.
By the continuity of f we can choose ϵ sufficiently small so that
sup_x ∈ B_i | f(x_i) - f(x) | ≤γ
for all i = 1, …, N. Then
| ∫_Σ ( i∗ F_A' )f - ∫_Σ (i ∗ F_A)f | ≤γ i ∗ F_A'.
The last lemma allows us to extend a limiting configuration to a holomorphic section. The proof is a minor variation of <cit.>.
Let L be a Hermitian line bundle over the unit disc B ⊂ℂ. Suppose that A is a unitary connection on L over B ∖{ 0 } and φ a section of L over B ∖{ 0 } satisfying _Aφ = 0 and | φ | = 1. Denote by k the degree of φ restricted to ∂ B. Then
* F_A = 0 on B ∖{ 0 } and i ∗ F_A = ( 2π k )δ_0 as measures.
* There exists a complex gauge transformation h: B ∖{ 0 }→ℂ^* such that h has degree zero around zero and in some trivialisation of L around zero h(A) is the trivial connection and hφ = z^k.
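A model example, with the sign conventions of (<ref>): on the trivial bundle over B take φ = (z/|z|)^k and A = d - ik dθ, so that _Aφ = 0 and |φ| = 1 on B ∖{ 0 }, while i ∗ F_A = 2π k δ_0, in accordance with the first assertion. The real gauge transformation h = e^f with f = k log|z| has degree zero around the origin, and
h(A) = A + ∂ f - ∂̄ f = d, hφ = z^k,
illustrating the second assertion.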
Set 𝒳 = _∖_. We will construct a continuous bijection 𝒳→ℱ. Since the domain is compact by Proposition <ref> and the target space is Hausdorff by Lemma <ref>, such a map is necessarily a homeomorphism.
The construction of 𝒳→ℱ.
Let [A, α, β, t] ∈𝒳. By definition, t=0, α≠ 0, β≠ 0, and (<ref>) is satisfied. Let D_1, D_2 be the zero divisors of α, β, respectively. We will interchangeably consider them as divisors or as subsets of Σ. Set D = D_1 ∪ D_2. We claim that there is a simple complex gauge transformation h ∈^c_0(Σ∖ D) such that (hA, hα, h^-1β) is a limiting configuration. A necessary condition is
| h α | = | h^-1β | on Σ∖ D.
A transformation satisfying (<ref>) exists since α and β are both non-zero on Σ∖ D and we can set h = √( | β| / | α| ); any other choice of h will differ from that one by an element of _0(Σ∖ D).
The map 𝒳→ℱ is defined by [A,α,β,0] ↦ (A', α', β') := (h(A), hα, h^-1β). We need to show that (A', α', β') represents a class in ℱ. We clearly have
{[ _A'Bα' = 0,; _A'Bβ' = 0,; | α' | = | β' |. ].
Moreover, for Ψ' := (α', β')
| Ψ' | = √( | α' |^2 + | β' |^2 ) = √( | h|^2 |α|^2 + |h|^-2 | β |^2 ) = √( 2 | α | | β |).
Integrating over Σ yields
‖Ψ' ‖_L^2 = √(∫_Σ 2 | α | | β | ).
After rescaling β, which does not change the class of [A,α,β,0] ∈𝒳, we can assume ‖Ψ' ‖_L^2 = 1. We also see that | Ψ' | > 0 on Σ∖ D and in a neighbourhood of every x ∈ D
| Ψ'(y) | = O( dist(y,x)^(ord_x(α) + ord_x(β))/2),
where ord_x(α) and ord_x(β) denote the order of vanishing of α and β at x.
It remains to prove that A' is flat and the measure i ∗ F_A' is as in Definition <ref>. Let φ_αβ(D_1 - D_2) → L^2 be the A–holomorphic isomorphism from Lemma <ref>. The construction of Lemma <ref> is local, so we can also define an analogous map φ_α' β' corresponding to sections α' and β'. Since they are both nowhere vanishing, φ_α' β' is an A'–holomorphic isomorphism of the trivial bundle over Σ∖ D and L^2Σ∖ D; thus, φ_α' β' is a nowhere vanishing A'–holomorphic section of L^2 over Σ∖ D. Moreover, on Σ∖ D
| φ_α' β' | = | α' | / | β' | = 1 and φ_α' β' = h^2 φ_αβ
By Lemma <ref>, the tensor product connection A' ⊗ A' on L^2 is flat; so A' itself is flat and for every x ∈ D the weight of the measure i/2π∗ F_A' at x is equal to half of the degree of φ_α' β' around x. Since h has degree zero around each point x ∈ D, the degrees of φ_αβ and φ_α' β' around x agree. Denote this degree by q_x ∈ℤ. Since the zero divisor of φ_αβ is D_1 - D_2,
∑_x ∈ D q_x = deg(D_1 - D_2) = deg( L^2) = 2d.
Finally, observe that for every x ∈ D we have
q_x = ord_x( α ) - ord_x (β), hence |q_x| ≤ord_x (α) + ord_x (β).
Together with (<ref>) this shows that (A', α', β', D) is a limiting configuration. It is easy to check that if we replace (A,α,β,0) by a different quadruple in the same orbit of the ^c(Σ) ×^*–action, then the resulting limiting configurations are simple gauge-equivalent.
𝒳→ℱ is injective.
Suppose that (A_1, α_1, β_1, 0) and (A_2, α_2, β_2, 0) give rise to limiting configurations that are simple gauge equivalent. In particular, they have the same singular set, D say. Suppose that β_1 and β_2 are scaled so that
∫_Σ 2 | α_1 | | β_1 | = ∫_Σ 2 | α_2 | | β_2 | = 1.
Composing the simple gauge equivalence of the limiting configurations with complex gauge transformations satisfying (<ref>), we obtain t ∈^c_0(Σ∖ D) such that on Σ∖ D
A_2 = t(A_1), α_2 = t α_1, β_2 = t^-1β_1, φ_α_2β_2 = t^2 φ_α_1β_1.
Even though t is not defined at the points of D, the holomorphic data is. In particular, φ_α_1 β_1 and φ_α_2 β_2 have zeroes at every point of D. Moreover, the zeroes are of the same order—this is equivalent to the measures of the corresponding limiting configurations being equal. We conclude that t is bounded around D. Since it is also (A_1,A_2)–holomorphic it extends to a holomorphic isomorphism between (L, _ A_1) and (L, _A_2) and so (A_1, α_1, β_1, 0) and (A_2, α_2, β_2, 0) give rise to the same point in 𝒳 and the map 𝒳→ℱ is injective.
𝒳→ℱ is surjective.
Let (A', Ψ' = (α', β'), D) ∈ℱ. We need to find h ∈^c_0(Σ∖ D) such that (A, α, β) := (h(A'), hα', h^-1β') extends smoothly to the whole of Σ. Furthermore, we should have D = D_1 ∪ D_2 where D_1 and D_2 are the zero divisors of α and β respectively.
Let φ_α' β'∈Γ(Σ∖ D, L^2) be as before. Then _A'φ_α' β' = 0 and | φ_α' β'| = 1 on Σ∖ D. By Lemma <ref>, applied to the connection A' ⊗ A' and the section φ_α' β', there exists k ∈^c_0(Σ∖ D, L) such that C := k(A' ⊗ A') extends to a connection on a line bundle T →Σ and k φ_α' β' extends to a meromorphic section of (T, _C). We claim that T = L^2 as unitary bundles, that k = h^2 for some h ∈^c_0(Σ∖ D), and that C = A ⊗ A for A = h(A'). This follows from the assumption on the measure i ∗ F_A' induced by the limiting configuration (A', α', β', D); by Lemma <ref> for every point x ∈ D the meromorphic section h^2 φ_α' β' vanishes to the order q_x defined by
i/2π F_A' = 1/2∑_x ∈ D q_x δ_x.
(x is a pole of order | q_x| if q_x < 0.) The claim is then a consequence of the assumption ∑_x ∈ D q_x = 2d = ( L^2) and the fact that k has degree zero around the points of D. Thus, A = h(A') extends. We need to show that α = h α' and β = h^-1β' extend. Observe that
| α | | β | = | α' | | β' | = 1/2 | Ψ' |^2,
where we have used | α' | = | β'| and Ψ' = (α', β'). As a consequence, for every x ∈ D
|α(y) | | β(y) | = O( dist(x,y)^|q_x|).
On the other hand, we have
| α | / | β | = | h^2 | | α' | / | β' | = | h^2 φ_α' β'|,
so around x ∈ D
| α (y) | / | β(y) | = O ( dist(x,y)^q_x).
We conclude that
| α(y) | = O( dist(x,y)^(|q_x| + q_x)/2) and | β(y) | = O( dist(x,y)^(|q_x| - q_x)/2),
which shows that both α and β are bounded over Σ∖ D. Since they are also holomorphic, they extend to globally defined sections. Hence, (A, α, β, 0) represents a point in 𝒳 corresponding to (A', α', β', D) under 𝒳→ℱ.
𝒳→ℱ is continuous.
Suppose that [A_i, α_i, β_i] → [A, α, β] in 𝒳. Let (A_i', α_i', β_i', D_i) and (A', α', β', D) be the corresponding points in ℱ. We will prove that (A_i', α_i', β_i', D_i) converges to (A', α', β', D) as limiting configurations. We easily check that the points of D_i concentrate around D and modulo simple gauge transformations
A_i' → A', α_i' →α', β_i' →β'
in C^∞_loc on Σ∖ D. By Lemma <ref>, i ∗ F_A_i'→ i ∗ F_A' as measures provided that the sequence of total variations ‖ i ∗ F_A_i'‖ is bounded. ‖ i ∗ F_A_i'‖ is up to a constant equal to the degree of D_1^i + D_2^i, where D_1^i and D_2^i are the zero divisors of α_i and β_i. By Lemma <ref> this degree is bounded above. We conclude that [A_i', α_i', β_i', D_i] → [A', α', β', D] in ℱ.
§.§ Proof of Theorem <ref>
We construct a bijective map _→ from the homeomorphisms _→ from Theorem <ref> and _∖_→ℱ from Proposition <ref>. This map is continuous when restricted to and its complement. It remains to show that it is continuous; indeed, _ is compact by Proposition <ref> and is Hausdorff by Lemma <ref>, so a continuous bijection _→ is a homeomorphism.
Let (A_i, α_i, β_i, t_i) be a sequence representing points in _ and (A_i', Ψ_i', t_i') the corresponding sequence of solutions of (<ref>). Suppose that t_i → 0 and (A_i, α_i, β_i) converges in C^∞ to (A, α, β) with α≠ 0 and β≠ 0. This limit represents an element of _∖_ and thus corresponds to a limiting configuration (A',Ψ', D). We need to show that after applying gauge transformations the sequence of Seiberg–Witten solutions (A_i', Ψ_i', t_i') converges in the sense of Definition <ref> to a limiting configuration which is simple gauge-equivalent to (A', Ψ', D). By Theorem <ref>, the sequence converges and by Proposition <ref>, the limiting configuration is simple gauge-equivalent to (A',Ψ',D). This shows that the map _→ is continuous.
§ FUETER SECTIONS AND COMPLEX GEOMETRY
The main result of this section will ensure the compactness of (g,σ) for any product metric g on Y = S^1×Σ and σ generic among the parameters pulled-back from Σ.
* For every product Riemannian metric g on Y = S^1 ×Σ there is a residual subset (g) ⊂(Σ, E) with the property that if B ∈(g), then there exist no Fueter sections with respect to (g,B).
* Let (g_t)_t∈[0,1] be a path of product metrics and B_0 ∈(g_0), B_1 ∈(g_1). For a generic path (B_t)_t∈[0,1] in (Σ,E) connecting B_0 and B_1 there exist no Fueter sections with respect to (g_t,B_t) for all t ∈ [0,1].
Recall that _ contains a compact subspace consisting of holomorphic triples (ℒ, α, β) with β = 0. As a result of Theorems <ref>, <ref>, and Corollary <ref>, we obtain
For a generic choice of B ∈(Σ,E) we have
_ = _ = .
If d - τ = 0 then _ is empty for a generic choice of B ∈(Σ, E).
Here is an outline of the proof of Theorem <ref>: first, we describe Fueter sections in terms of complex-geometric data on Σ. Next, we show that this data is described by a Fredholm problem of non-positive index given by the Riemann–Roch theorem. As a result, we can apply the Sard–Smale theorem to exclude the existence of such data for a generic choice of B ∈(Σ,E).
§.§ Circle-invariance of Fueter sections
For the remaining part of the section we continue to assume that g is a product metric on Y = S^1×Σ and B ∈(Σ,E).
Let (A,Ψ,Z) be a Fueter section as in Definition <ref>. Suppose that Z = S^1 × D for D ⊂Σ, and that (A,Ψ) is pulled back from Σ. Then, as in subsection <ref>, we have Ψ = (α, β) where
α∈Γ(Σ∖ D, E^* ⊗ L ⊗ K^1/2),
β∈Γ(Σ∖ D, E ⊗ L^* ⊗ K^1/2).
The Fueter equations _ABΨ = 0 and μ(Ψ) = 0 are equivalent to
{[ _ABα = 0, _ABβ = 0,; αβ = 0,; | α | = | β |.; ].
Let (A,Ψ,Z) be a Fueter section as in Definition <ref>. Then Z = S^1 × D for a finite subset D ⊂Σ. Moreover, there is a gauge transformation u ∈(Y ∖ Z) such that u has degree zero around each component of Z and u(A, Ψ) is pulled-back from Σ∖ D.
The proof is similar to that of Theorem <ref>. We use the notation from subsection <ref> and ignore the background connection B; the general proof is the same.
A Weitzenböck formula.
Let t be the coordinate on the S^1 factor of S^1 ×Σ. Unlike in the proof of Theorem <ref>, we cannot put A in a temporal gauge, even after pulling-back to ℝ×Σ, because a priori the singular set Z could intersect the t–axis in a complicated way. However, we still have
0 = _AΨ = - σ∇_t Ψ + _AΨ,
where ∇_t = ∇_A( ∂ / ∂ t) and _A is the Dolbeault operator induced by A on the { t }×Σ slice. Let ∇_Σ be the part of the covariant derivative ∇_A in the Σ-direction. Since A is flat, we have
0 = ∇_A^2 = ∇_t ∇_Σ + ∇_Σ∇_t.
Applying σ and ∇_t^* = - ∇_t to (<ref>) and using the above commutation relation, we obtain
0 = ∇_t^* ∇_t Ψ + σ_A σ_A Ψ = ∇_t^* ∇_t Ψ + _A^* _A Ψ.
Integration by parts; the circle-invariance of Z.
We want to integrate (<ref>) by parts to conclude ∇_t Ψ = 0 and _A Ψ = 0. The equality holds only on Y ∖ Z, so we need to use a cut-off function. Let f: ℝ→ [0,1] be smooth and such that
{[ f = 0 on (-∞, 0],; f=1 on [1, ∞) ].
For every ϵ > 0 we define the cut-off function χ_ϵ Y→ [0,1] by
χ_ϵ(x) = f( | Ψ(x) | - ϵ/ϵ).
Let Z_ϵ be the subset of points in Y satisfying | Ψ(x) | < ϵ. We have
{[ χ_ϵ = 0 on Z_ϵ,; χ_ϵ =1 on Y ∖ Z_2ϵ ].
and χ_ϵ is smooth on Y. Take the inner product of (<ref>) with χ_ϵ^2 Ψ and integrate by parts:
0 = ∫_Y | ∇_t( χ_ϵΨ ) |^2 + ∫_Y | _A ( χ_ϵΨ ) |^2 - ∫_Y ( | ∂_t χ_ϵ |^2 + | χ_ϵ |^2 )| Ψ |^2
≥∫_Y | ∇_t( χ_ϵΨ ) |^2 + ∫_Y | _A ( χ_ϵΨ ) |^2 - 2 ∫_Y | d χ_ϵ |^2 | Ψ |^2.
We need to show that the last term becomes arbitrarily small as ϵ tends to zero. By definition, | Ψ | ≤ 2ϵ on Z_2 ϵ. Let P_ϵ = Z_2ϵ∖ Z_ϵ. By Kato's inequality
∫_Y | Ψ |^2 | d χ_ϵ |^2
≤∫_P_ϵ | Ψ |^2 | f' ( | Ψ(x) | - ϵ/ϵ) |^2/ϵ^2 | ∇_A Ψ |^2
≤ 4 f ^2_C^1∫_P_ϵ | ∇_A Ψ |^2
≤ C (P_ϵ) ∇_A Ψ_L^2(M∖ Z)^2.
The right-hand side converges to zero as ϵ→ 0 since | ∇_A Ψ |^2 is integrable, Z = ∩_ϵ >0 Z_ϵ, and (Z) = 0 by Taubes <cit.>. Taking ϵ→ 0 in (<ref>), we conclude that on Y ∖ Z
∇_t Ψ = 0 and _A Ψ = 0.
In particular,
∂_t | Ψ |^2 = 2 ⟨∇_t Ψ, Ψ⟩ = 0
so |Ψ| is invariant under the circle action on Y ∖ Z. It is also continuous on the whole of Y and | Ψ |^-1(0) = Z, so Z is necessarily of the form S^1 × D for a proper subset D ⊂Σ.
(A,Ψ) is pulled-back from Σ∖ D.
We put A in a temporal gauge over S^1 × (Σ∖ D) as in the proof of Theorem <ref>. The gauge transformation (<ref>) used to do that is the exponential of a smooth function Σ∖ D → i when restricted to each slice { t }× (Σ∖ D); thus, it has degree zero around the components of Z. The same argument as in the proof of Theorem <ref> shows that LY ∖ Z is pulled back from a bundle on Σ∖ D and (A,Ψ) is pulled-back from a configuration on Σ∖ D satisfying (<ref>).
D is a finite set of points.
It is enough to show that D is locally finite. Suppose that Σ is a unit disc and that L and E are trivial. The complement Σ∖ D is a non-compact Riemann surface and
(L, _A) defines a holomorphic line bundle over Σ∖ D which is necessarily trivial <cit.>. Thus, there is h ∈^c(Σ∖ D) such that h(A) agrees with the product connection on the trivial bundle, and h α and h^-1β correspond to holomorphic maps Σ∖ D →ℂ^2. Let γ = (h α) ⊗ (h^-1β); it is a holomorphic map Σ∖ D →ℂ^2 ⊗ℂ^2 = ℂ^4 satisfying
| γ | = | α | | β | = 1/2 | Ψ |^2,
so D is the zero set of |γ|. Thus, γ is continuous on Σ and holomorphic on Σ∖ D. By a theorem of Radó <cit.>, γ is holomorphic on Σ and so D = γ^-1(0) is locally finite.
§.§ A holomorphic description of Fueter sections
If (A,Ψ,Z) is a Fueter section, with Ψ = (α,β) and Z = S^1× D as in Proposition <ref>, then there exist h ∈^c_0(Σ∖ D) and divisors D_1, D_2 such that
* D = D_1 ∪ D_2 as sets and the divisor D_1 + D_2 is effective,
* Ã := h(A) extends to a unitary connection on a line bundle over Σ, not necessarily isomorphic to L, defining a holomorphic line bundle ℒ→Σ,
* sections α̃ = hα and β̃ = h^-1β extend to holomorphic sections that fit into the short exact sequence
0 r ℒ^-1⊗ K^-1/2⊗(D_1) rα̃ rβ̃ ℒ^-1⊗ K^1/2⊗(-D_2) r 0.
Conversely, every set of holomorphic data (Ã, α̃, β̃, D_1, D_2) satisfying conditions (1), (2), (3) can be obtained from a Fueter section (A,Ψ,Z) in this way.
This is similar to Step 3 in the proof of Proposition <ref>. Using Lemma <ref>, we find h ∈^c_0(Σ∖ D, L) such that à = h(A) extends yielding a holomorphic line bundle ℒ, say, and h^2 φ_αβ extends to a meromorphic section of ℒ^2. Let α̃ = h α and β̃ = h^-1β. Then
| α̃ |/ | β̃ | = | h^2 φ_αβ | and |α̃| | β̃ | = |α | | β | = 1/2 | Ψ |^2.
Since h^2 φ_αβ is meromorphic and |Ψ| extends to a continuous function on Σ, it follows that α̃ and β̃ extend to meromorphic sections. Let D_1 and D_2 be the associated divisors of zeroes and poles. We have D = D_1 ∪ D_2 as sets and the condition D = | Ψ |^-1(0) implies that D_1 + D_2 ≥ 0. The existence of the short exact sequence involving α̃ and β̃ was established in (<ref>).
The next lemma provides a restriction on the possible holomorphic bundles fitting into the short exact sequence (<ref>).
Under the assumptions of Proposition <ref> there exists a holomorphic line bundle M satisfying h^0( M^2 ) > 0 and h^0( ⊗ K^1/2⊗ M^-1 ) > 0.
Recall that by Lemma <ref> we have ℒ^2 = (D_1 - D_2). Set M = ℒ^-1⊗(D_1). Then
M^2 = ℒ^-2⊗(2D_1) = (D_2 - D_1 + 2D_1) = (D_1 + D_2).
We have h^0(M^2) > 0 because the divisor D_1 + D_2 is effective. On the other hand, multiplying exact sequence (<ref>) by M^-1⊗ K^1/2, we obtain an injective map →⊗ M^-1⊗ K^1/2, which is the same as a nowhere vanishing section of ⊗ K^1/2⊗ M^-1.
§.§ Proof of Theorem <ref>
Fix k ≥ 0. Let ⊂(Σ,E) be the subset consisting of those connections B for which there exists a degree k holomorphic line bundle M →Σ satisfying
h^0(M^2) > 0 and h^0( _B ⊗ K^1/2⊗ M^-1) > 0.
The complement (Σ,E) ∖ is residual. Furthermore, for all B_0,B_1 ∈(Σ,E) ∖, a generic path in (Σ,E) connecting B_0 and B_1 is disjoint from .
We use a transversality argument similar to the one used to show Proposition <ref>. As in that case we pass to suitable Sobolev completions of the spaces of connections and sections (for simplicity we keep the same notation). The statement for C^∞ topology will follow from Taubes' trick discussed in the proof of Proposition <ref>.
Let T →Σ be a unitary line bundle of degree k. Denote F = E^* ⊗ K^1/2⊗ T^-1 and consider
(Σ,E) ×(Σ,T) ×Γ(F) ×Γ(T^2) →Ω^0,1 (F) ×Ω^0,1( T^2 ),
(B, A, ψ, α) ↦ (_ABψ, _Aα).
This map is ^c(Σ)–equivariant. Let 𝒳 be the open subset of (Σ,T)×Γ(F)×Γ(T^2)/ ^c(Σ) given by { [A,ψ, α] | ψ≠ 0, α≠ 0 }. Let 𝒱→(Σ, E) ×𝒳 be the Banach vector bundle obtained from taking the ^c(Σ)–quotient of the trivial bundle with fibre Ω^0,1 (F) ×Ω^0,1( T^2 ). Then the map introduced above descends to a smooth section s : (Σ, E) ×𝒳→𝒱. For every B ∈(Σ,E) the restriction s_B = s(B, ·) is a Fredholm section whose index is the Euler characteristic of the elliptic complex
Ω^0() r Ω^0,1() ⊕Γ(F) ⊕Γ(T^2) r Ω^0,1(F) ⊕Ω^0,1(T^2).
The first arrow in the complex is the linearised action of ^c(Σ), whereas the second is the linearisation of the map (A, ψ, α) ↦ (_ABψ, _A α). This elliptic complex agrees up to terms of order zero with the direct sum of the complexes
Ω^0() r Ω^0,1() r 0
and
0 r Γ(F) ⊕Γ(T^2) rr_AB⊕_A Ω^0,1(F) ⊕Ω^0,1(T^2).
By the Riemann–Roch theorem, the Euler characteristic of this complex is
χ( ) - χ(F) - χ(T^2) = (1-g) - ((F)+2-2g) - (2 (T) + 1-g) = 0
because (F) = 2g-2 - 2(T). Thus, s_B is a Fredholm section of index zero.
The proof will be completed if we can show that s is transverse to the zero section at all points [B,A,ψ,α] ∈ s^-1(0) ⊂𝒳. Indeed, if this is the case, then by the Sard–Smale theorem, the same is true for s_B for B from a residual subset of (Σ,E). For every such B the set
{ [A, ψ, α] | _ABψ = 0, _A α = 0, ψ≠ 0, α≠ 0 }
is a zero-dimensional submanifold of 𝒳. This submanifold must be empty as otherwise it would contain a subset homeomorphic to ^* given by [A, t ψ, α] for t ∈^*. This proves that for a generic B there is no holomorphic line bundle M = (T, _A) together with non-zero α∈ H^0(M^2) and ψ∈ H^0( _B ⊗ K^1/2⊗ M^-1). The statement for paths is proved in the same way.
It remains to show that s is transverse to the zero section. At a point [B,A,ψ,α] ∈ s^-1(0) the first map in (<ref>) is injective. Thus, it is enough to prove the surjectivity of the operator combining the second map of (<ref>) and the linearisation of _AB with respect to B:
Ω^0,1(End(F)) ⊕Ω^0,1() ⊕Γ(F) ⊕Γ(T^2) ⟶Ω^0,1(F) ⊕Ω^0,1(T^2)
(b,a,u,v) ↦ ( (b+a)ψ + _AB u, a α + _A v).
If the map were not surjective, there would exist a non-zero (p,q) ∈Ω^0,1(F) ⊕Ω^0,1(T^2), L^2–orthogonal to the image, which in turn would imply _AB^* p = 0, _A^* q = 0, and
⟨ b ψ , p ⟩_L^2 = 0, ⟨ a α, q ⟩_L^2 = 0
for all b ∈Ω^0,1(End(F)) and a ∈Ω^0,1(). Note that ψ and α are both non-zero and holomorphic; p and q are anti-holomorphic and at least one of them is non-zero. Using a bump function as in the proof of Proposition <ref> it is easy to construct b and a such that
⟨ b ψ , p ⟩_L^2 + ⟨ a α, q ⟩_L^2 > 0.
Theorem <ref> follows immediately from the previous results.
Let B ∈ (Σ,E) and denote by _B the corresponding holomorphic bundle. Propositions <ref> and <ref> show that a Fueter section with respect to (g,B) corresponds to a holomorphic triple (ℒ, α, β) fitting into the short exact sequence (<ref>). On the other hand, by Lemmas <ref> and <ref>, for a generic choice of B the holomorphic bundle _B does not fit into any such sequence. The same is true when B varies in a generic one-parameter family, by the second part of Lemma <ref>.
§.§ Fueter sections and limiting configurations
Every limiting configuration, as in Definition <ref>, is an example of a Fueter section on Y = S^1 ×Σ. The results established in this section allow us to construct a counterexample to the converse statement.
Suppose that the genus of Σ is positive so that the canonical divisor K is effective. Let C_1 and C_2 be two divisors satisfying C_1 + C_2 = 1/2K ≥ 0. Set
ℒ := (C_1 - C_2), D_1 := 2C_1, D_2 := 2C_2.
Then
ℒ^-1⊗ K^-1/2⊗(D_1) = (C_1 + C_2 - 1/2K) = ,
ℒ^-1⊗ K^1/2⊗(-D_2) = (-C_1 - C_2 + 1/2K) = .
Set := ⊕. A Fueter section is given by maps α̃ and β̃ making the sequence (<ref>) exact. In the present setting, (<ref>) is equivalent to
0 r rα̃ ⊕rβ̃ r 0,
so there is an obvious choice of α̃ and β̃ making the sequence exact. According to Proposition <ref>, this gives rise to a Fueter section with singular set D = D_1 ∪ D_2. However, such a Fueter section is not a limiting configuration unless both divisors C_1 and C_2 are effective.
§ MODULI SPACES OF FRAMED VORTICES
By Theorem <ref>, for a generic choice of a circle-invariant parameter, the moduli space of Seiberg–Witten multi-monopoles is homeomorphic to the compact space introduced in Definition <ref>. In this section we prove that this space is a Kähler manifold and that the signed count of Seiberg–Witten multi-monopoles on Y = S^1 ×Σ is the signed Euler characteristic of this space, which proves Theorem <ref>. We then establish some of its general properties using methods of complex geometry.
§.§ Framed vortices
We continue to assume throughout this section that E is an SU(2)–bundle and that d - τ < 0, where d = deg L and τ = ∫_Σ iη/ 2π[Most of the results generalise easily to the other cases. We will later discuss the role of the sign of d - τ.]. The moduli space of framed vortices depends on the conformal structure on Σ, the holomorphic structure _B = (E^*, _B), and d. Its points can be interpreted in three ways.
* As isomorphism classes of pairs (ℒ, α), where ℒ→Σ is a degree d holomorphic line bundle and α is a non-zero holomorphic section of _B ⊗ℒ⊗ K^1/2.
* As ^c(Σ)–equivalence classes of pairs
(A, α) ∈(Σ,L) ×Γ(Σ, E^* ⊗ L ⊗ K^1/2)
satisfying _ABα = 0 and α≠ 0.
* As (Σ)–equivalence classes of pairs (A, α) as above satisfying
{[ _ABα = 0 and α≠ 0,; i ∗ F_A + | α |^2 - i ∗η = 0, ].
Following <cit.>, we will refer to this space as the moduli space of framed vortices.
§.§ Deformation theory
Here we relate the deformation theories of the moduli space of framed vortices and of _.
For every conformal class of a metric g on Σ there exists a residual subset ^reg(g) ⊂(Σ, E) such that for every B ∈^reg(g)
* = (g,_B) is a compact Kähler manifold of complex dimension g(Σ)-1+2d,
* _ = _(g,_B) is Zariski smooth and the inclusion ↪_ is a homeomorphism inducing an isomorphism of Zariski tangent spaces at every point,
* the relative orientation on the obstruction bundle →_ is compatible with the orientation of the cotangent bundle T^* → induced from the complex structure; equivalently, for every connected component C of _ we have (C) = (-1)^g(Σ)-1.
The following conditions are equivalent:
* is regular as the moduli space of framed vortices.
* _ is compact and equal to .
* There exist no triple (ℒ, α, β) consisting of a degree d holomorphic line bundle ℒ→Σ and non-zero holomorphic sections α∈ H^0( _B ⊗ℒ⊗ K^1/2) and β∈ H^0( _B^* ⊗ℒ^* ⊗ K^1/2) satisfying αβ = 0 ∈ H^0(K).
The equivalence of (1) and (2) follows from Corollary <ref> and the identification of the obstruction bundle ⋃_A,α H^2_A,α with _∖_, shown in the proof of Theorem <ref>. The equivalence of (2) and (3) is obvious from the definition of _.
The diffeomorphism type of (g, _B) does not depend on the metric on Σ and the connection B ∈(Σ,E), as long as B is generic.
Let g_0, g_1 be metrics on Σ and B_0 ∈(g_0), B_1 ∈(g_1) as in Theorem <ref>. For a generic path (B_t) in (Σ,E) connecting B_0 and B_1 there exist no Fueter sections with respect to (g_t,B_t) and so (g_t,B_t) is compact for all t∈[0,1]. By Corollary <ref>, (g_t,_B_t) = (g_t,B_t) is compact and regular as the moduli space of framed vortices. Thus, ⋃_t∈[0,1](g_t,_B_t) → [0,1] is a smooth fibre bundle and every fibre (g_t,_B_t) is diffeomorphic to (g_0,_B_0).
The construction of an analytic structure on the moduli space follows the general scheme that by now is familiar to the reader.
Consider the elliptic complex associated to a solution (A,α) of _ABα = 0:
Ω^0() rG^c_A,α Ω^0,1() ⊕Γ(E^* ⊗ L ⊗ K^1/2) rT_A, α Ω^0,1(E^* ⊗ L ⊗ K^1/2).
where G^c_A,α is the linearised action of ^c(Σ), G^c_A,α(f) = (- f, f α) for f ∈Ω^0(),
and T_A,α is the linearisation of the Dolbeault operator
T_A,α( a^0,1, ϕ) = ( _ABϕ + a^0,1α ) for (a^0,1, ϕ) ∈Ω^0,1() ⊕Γ(E^* ⊗ L ⊗ K^1/2).
Denote by H^0_A,α, H^1_A,α, and H^2_A,α the homology groups of this complex. By definition, the moduli space consists of solutions with α ≠ 0, so H^0_A,α = 0. On the other hand, the deformation complex is isomorphic modulo lower order terms to the sum of the Dolbeault complexes for on Ω^0() and _AB on E^* ⊗ L ⊗ K^1/2 (with a shift). By the Riemann–Roch theorem, the expected complex dimension of the moduli space is
_ H^1_A, α - _ H^2_A,α = χ(Σ, ⊗ℒ⊗ K^1/2 ) - χ(Σ, ) = g(Σ) -1 + 2d.
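For the reader's convenience, here is the Riemann–Roch bookkeeping behind this count (our unpacking; E has rank two and trivial determinant, so the twisted bundle has degree 2(d + g(Σ) - 1)):

χ(Σ, ) = 1 - g(Σ), χ(Σ, E^* ⊗ℒ⊗ K^1/2) = 2(d + g(Σ) - 1) + 2(1 - g(Σ)) = 2d,

and the difference of the two Euler characteristics is 2d - (1 - g(Σ)) = g(Σ) - 1 + 2d.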
The proof proceeds in three steps.
is a compact Kähler manifold.
We already know that the moduli space is compact. By Corollary <ref>, it agrees with _ for a generic B.
One can show that the moduli space is generically smooth in the same way as in Proposition <ref>.
Alternatively, we can interpret the elements of H^2_A,α as Fueter sections:
H^2_A,α = T_A,α^* = { q ∈Γ(E^* ⊗ L ⊗ K^-1/2) | α q = 0, _AB^* q = 0 }.
Every non-zero element of H^2_A,α gives rise to a non-zero β = q∈Γ(E ⊗ L^* ⊗ K^1/2) satisfying _ABβ = 0 and αβ = 0. Thus, the triple (A,α,β) is an element of _∖_ corresponding to a Fueter section as in Proposition <ref>. By Theorem <ref>, for a generic B there are no Fueter sections, so H^2_A,α = 0 for all [A,α]. This implies that the moduli space is a complex manifold of dimension g(Σ) - 1 + 2d whose holomorphic tangent space at [A,α] is H^1_A,α. It admits a natural Hermitian metric induced from the L^2–inner product on the space of connections and sections. This metric is Kähler because it comes from the moduli space of solutions of the framed vortex equations (<ref>), which is an infinite-dimensional Kähler quotient. For details, see <cit.>.
H^1_A,α is naturally isomorphic to the Zariski tangent space to _ at [A,α,0].
H^1_A,α consists of pairs (a^0,1,u) ∈Ω^0,1() ⊕Γ(E^* ⊗ L ⊗ K^1/2) satisfying the linearised equation
_AB u + a^0,1α = 0
together with the complex Coulomb gauge (G^c_A,α)^*(a^0,1,u) = 0. By (<ref>), the tangent space to _ at [A,α,0] consists of triples (a^0,1, u, v) where a^0,1, u are as above, v ∈Γ(E ⊗ L^* ⊗ K^1/2), and
{[ _AB u + a^0,1α = 0,; _AB v = 0,; α v = 0 ].
together with the complex Coulomb gauge for (a^0,1, u,v). Any non-zero v satisfying the conditions above would give an element (A, α, v) of _∖_. Since B has been chosen so that _∖_ is empty, v = 0 and the equations obeyed by (a^0,1, u, 0) are identical to the ones defining H^1_A, α. We conclude that the Zariski tangent spaces to and _ are equal.
Comparing the orientations.
Let (A, Ψ) be an irreducible solution of the Seiberg–Witten equations. We have Ψ = (α, 0) where (A,α) is a solution of the framed vortex equations. Consider the extended Hessian operator introduced in subsection <ref>:
L_A,ΨΩ^1(i ) ⊕Ω^0(i ) ⊕Γ(E^* ⊗ S ⊗ L ) ⟶Ω^1(i ) ⊕Ω^0(i ) ⊕Γ(E^* ⊗ S ⊗ L )
Write L_A,Ψ = L_A,0 + P, where
L_A,0 =
(
[ ∗ d -d 0; - d^* 0 0; 0 0 _AB ])
and
P(a,v, ϕ) = ( i ⟨Ψ, ϕ⟩ , - 2 ∗μ(ϕ, Ψ) , - a ·Ψ + v Ψ ).
The kernel and cokernel of L_A,0 are naturally identified with
H^1( Y, i ) ⊕ H^0 ( Y, i ) ⊕_AB.
The isomorphism between L_A,0 and L_A,Ψ, defining the relative orientation on the obstruction bundle, factors through the determinant space P of the finite dimensional map
P H^1( Y, i ) ⊕ H^0 ( Y, i ) ⊕_AB→ H^1( Y, i ) ⊕ H^0 ( Y, i ) ⊕_AB
induced from the zeroth order operator P defined above (for simplicity we use the same letter to denote the induced finite dimensional map). As in the proof of Proposition <ref>, we have H^1(Y, i ) = H^1( S^1 , i ) ⊕ H^0,1(Σ). Consider the complex structure on H^1(S^1, i) ⊕ H^0(Y, i) coming from the identification
H^1(S^1, i) ⊕ H^0(Y, i) = i⊕ i = .
Let dt be the one-form spanning H^1(S^1, ). Under the Clifford multiplication, dt acts as the multiplication by i on S, and so i dt acts as the multiplication by -1. Hence, under the isomorphism H^1(S^1, i) ⊕ H^0(Y, i) =, the map (a,v) ↦ (- a ·Ψ + v Ψ) is given by (x+iy) ↦ (x+iy) Ψ and so, in particular, it is complex linear. Next, consider the first two components of P, that is the map ϕ↦ (i (Ψ, ϕ), -2∗μ(ϕ, Ψ)). Decompose the moment map into μ = μ_⊕μ_ as in the proof of Proposition <ref>. The map ϕ↦μ_(ϕ, Ψ) is complex linear from _AB to H^0,1(Σ). We are left with the map from _AB to H^1(S^1, i ) ⊕ H^0(Y, i) given by
ϕ↦ (i (Ψ, ϕ) - 2 ∗μ_(ϕ, Ψ)).
We have ϕ = (u,v) under the splitting S = K^1/2⊕ K^-1/2. Following the identifications from the proof of Proposition <ref> we find that ∗μ_(ϕ, Ψ) = - 2 i ( α, u) and so our map is
ϕ = (u,v) ↦ (i ( α, u) , - 4 i (α, u) ).
Up to a constant, it coincides with the complex linear map
u ↦ - (α,u) + i(α, u) = - (α, u) = - (u, α).
We conclude that the isomorphism P ≅ L_A,0 agrees with the orientations induced from the complex structures on the cohomology groups. The same is true for P ≅ L_A,Ψ where the complex structures on H^1_A,Ψ = ker L_A,Ψ and H^2_A,Ψ = coker L_A,Ψ come from the isomorphism of analytic spaces ≅_ given by Theorem <ref>. The tangent and obstruction spaces to _ are canonically identified with the tangent space to the moduli space of framed vortices. Therefore, the relative orientation on the obstruction bundle agrees with the complex orientation on T^* →.
§.§ Dependence on the perturbing two-form
We have so far ignored the fact that for fixed d = deg L there are two definitions of _ depending on the sign of d - τ[The case d - τ = 0 is uninteresting as the moduli space is generically empty, see Corollary <ref>.], see Definition <ref>. Recall that τ := ∫_Σ i η/2π depends on the choice of the perturbing two-form η. In classical Seiberg–Witten theory, the moduli space of solutions on S^1 ×Σ is either ^d+g-1Σ or ^-d+g-1Σ, depending on the sign of d - τ. On the other hand, the Seiberg–Witten invariant does not depend on the choice of the perturbing two-form, which is reflected by the identity
χ(^d+g-1Σ) = χ(^-d+g-1Σ).
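This identity can also be checked directly: from the generating function ∑_n χ(^nΣ) t^n = (1-t)^2g-2 one gets χ(^nΣ) = (-1)^n \binom{2g-2}{n}, and the two exponents n_± = ± d + g - 1 satisfy n_+ + n_- = 2g-2 and n_+ - n_- = 2d, so the binomial coefficients agree by symmetry and the signs agree because 2d is even.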
Theorem <ref> gives us an analogous identity for the moduli spaces of Seiberg–Witten multi-monopoles. Fix (g,B), with B generic, and denote by _^+(d) and _^-(d) the moduli spaces corresponding to a given degree d and the two choices of the sign of d - τ. (In the previous subsections we have always assumed _ = _^+.) The map (ℒ, α, β) ↦ (ℒ^*, β, α) induces an isomorphism _^+(d) ⟶_^-(-d). This is in agreement with the standard involution in Seiberg–Witten theory <cit.>.
If g(Σ) ≥ 1, the count of Seiberg–Witten multi-monopoles (-1)^g(Σ)-1χ(_^+(d)) does not depend on a generic choice of (B,η) as long as d - τ < 0. The same is true for (-1)^g(Σ)-1χ(_^-(d)) when d - τ > 0. On the other hand, the choice of η is immaterial from the viewpoint of the three-dimensional theory—by Theorem <ref>, (g,B,η) does not depend on the choice of η as long as the moduli space is compact and Zariski smooth. We conclude that for a generic choice of B ∈(Σ,E) we have χ( _^+(d) ) = χ( _^-(d)). Combining this with the isomorphism _^+(d) ⟶_^-(-d), we obtain
If g(Σ) ≥ 1, then for a generic choice of B ∈(Σ,E) we have
χ( _^+(d) ) = χ( _^+(-d)).
Although the moduli spaces can be defined in terms of the complex geometry of Σ, it is far from obvious how to prove the above equality without a reference to the three-dimensional theory. We will see in the next section that _^+(d) and _^-(d) can be non-homeomorphic.
§ EXAMPLES AND COMPUTATIONS
In this section we study _ using methods of complex geometry. We prove some general properties of the moduli spaces and give their complete description when Σ is a Riemann surface of genus zero, one, or two.
Let g be a product metric on Y = S^1 ×Σ, B a generic connection pulled-back from Σ, and η∈Ω^2(Σ, i) a two-form satisfying τ(η) > 0.
Set =(g,B,η) and = (g,B, η).
* If d < (1-g(Σ))/2, then it is empty and the associated count is 0.
* If d ≥ 0, then it admits a holomorphic map to the Jacobian torus of Σ. Its fibres are projective spaces. If d > 0, this map is surjective. If d = 0, its image is a divisor in the linear system |2 Θ| where Θ is the theta divisor.
* If d ≥ g(Σ)-1, then it is biholomorphic to the projectivisation of a rank 2d holomorphic vector bundle over the Jacobian of Σ and the count is 0.
* If d = 0 and g(Σ) = 1, then it consists of two points and the count is 2.
* If d = 0 and g(Σ) = 2, then it is biholomorphic to a closed Riemann surface of genus five and the count is 8.
We will denote by _(d, ) the moduli space defined using a degree d line bundle L, a holomorphic structure on E^*, and any perturbing two-form η satisfying d - τ(η) < 0; similarly we define (d, ). A property will be said to hold for a generic holomorphic structure if it holds for all of the form _B = (E, _B) with B from a residual subset of (Σ,E).
§.§ Generalised theta divisors
For d = 0 the moduli spaces of framed vortices are related to generalised theta divisors <cit.>. Let →Σ be a rank two holomorphic stable bundle with trivial determinant. For any line bundle A ∈ J^g(Σ)-1 the Riemann–Roch theorem gives us
χ( ⊗ A) = 2 (A) + 2(1-g(Σ)) = 0,
so we expect H^0(⊗ A) = H^1(⊗ A) = 0 if A is generic.
The generalised theta divisor of is
θ() := { A ∈ J^g(Σ)-1 | h^0(⊗ A) > 0 };
One can show that θ() is a divisor[This is no longer true if is of higher rank as it can happen that θ()=J^g(Σ)-1.] in J^g(Σ)-1 in the linear system | 2 Θ| = ℂℙ^2g(Σ)-1, where Θ is the classical theta divisor
Θ := { A ∈ J^g(Σ)-1 | h^0(A) > 0 }.
If is a rank two holomorphic stable bundle with trivial determinant, then there is a surjective morphism (0, ) →θ() whose fibres are projective spaces.
A point in (0, ) is an equivalence class [ℒ, α] where ℒ∈ J^0 and α∈ H^0( ⊗ℒ⊗ K^1/2), with α≠ 0. Since (ℒ⊗ K^1/2) = g(Σ)-1, we have ℒ⊗ K^1/2∈θ(). The morphism (0, ) →θ() is given by [ ℒ, α ] ↦ℒ⊗ K^1/2. The preimage of ℒ⊗ K^1/2 is ℙ H^0( ⊗ℒ⊗ K^1/2).
The divisor θ() can be described geometrically as follows. By a theorem of Lefschetz, the linear system | 2 Θ | is base-point free and gives rise to a holomorphic map J^g(Σ)-1→ | 2 Θ |^*. It follows that θ(), as a subset of J^g(Σ)-1, is the preimage of a hyperplane in | 2 Θ |^* under the map J^g(Σ)-1→ |2 Θ|^*. This hyperplane is easy to identify—it is exactly θ() thought of as a point in | 2 Θ | or, equivalently, as a hyperplane in | 2 Θ |^*. Varying the background bundle , we vary the corresponding hyperplane and therefore the divisor θ() ⊂ J^g(Σ)-1.
§.§ General properties of the moduli spaces
For a generic choice of the following holds.
* If d < (1-g(Σ))/2, then _(d, ) is empty.
* If d ≥ 0, then _(d, ) is non-empty.
* If d ≥ g(Σ)-1, then _(d, ) is Zariski smooth with the underying complex manifold biholomorphic to the projectivisation of a rank 2d vector bundle over J^d.
Proposition <ref> shows that the most interesting case is (1-g(Σ))/2 ≤ d < 0. It is an interesting question whether _(d, ) is generically non-empty for d in this range.
The proof of Proposition <ref> relies on the following general result about holomorphic vector bundles on Riemann surfaces. Recall that is stable if, for any holomorphic line bundle A, the existence of a non-zero holomorphic map A → implies (A) < 0.
If g(Σ) ≥ 2, then there is an open dense subset of (Σ, E) such that for every connection B from this subset the corresponding holomorphic bundle _B is stable.
fails to be stable if and only if there is a holomorphic line bundle ℒ with ( ℒ) = d ≥ 0 and a non-zero map θℒ→_B. In other words, if L is a unitary bundle underlying ℒ and A is a connection inducing ℒ, then θ∈Γ(L^* ⊗ E) satisfies _ABθ = 0.
Consider
_d := { B∈(Σ,E) | _AB = { 0 } for all A ∈(Σ,L) }
for a fixed degree d unitary bundle L.
_d is open.
Let B∈_d. For every A∈(Σ,L) there is a neighbourhood of (A,B) in (Σ,L) ×(Σ, E) such that for all (A',B') from this neighbourhood _A'B' = 0. Since this condition is invariant under the action of ^c(Σ), and A(Σ, L) / ^c(Σ) is homeomorphic to a torus—in particular, compact—there is a neighbourhood of B in (Σ, E) such that for all B' from this neighbourhood _AB' = 0 for all A. All such B' belong to _d which proves that _d is open.
_d is dense.
We only outline the proof as it is similar to that of Proposition <ref>. In what follows we replace the spaces of connections and sections by their Sobolev completions. Cover (Σ, L) / ^c(Σ) by finitely many charts that can be lifted to ^c(Σ)–slices in (Σ,L). Let V be such a chart; it is an open subset of ^2g parametrising a smooth family of connections { A_x }_x ∈ V. The claim will follow if we can show that
_V := { B∈(Σ,E) | _A_x B = { 0 } for all x∈ V }
is dense in (Σ,E). To prove this, consider
S := {θ∈Γ(Σ, L^* ⊗ E) | θ_L^2 = 1 }
and the map
f (Σ, E) × V × S ⟶Ω^0,1(Σ, L^* ⊗ E)
f(B,x,θ) := _A_x Bθ.
For every B ∈(Σ,E) the restriction f_B := f(B, ·, ·) is a Fredholm map (between suitable Sobolev spaces) because its derivative is the sum of _A_x B and the derivative with respect to x, which is a finite-dimensional operator. By the Riemann–Roch theorem,
ind_ df_B = V + 2 ind _A_x B - 1
= 2g + 4(-d+1-g(Σ)) -1
= 2(-2d + 2 - g(Σ)) - 1 ≤ 0,
where we subtract 1 because _A_x B is restricted to the tangent space to S. A computation similar to that in the proof of Proposition <ref> shows that the derivative of the full map f is surjective at every point of f^-1(0). By the Sard–Smale theorem, the set f_B^-1(0) is empty for B from a dense subset of (Σ,E); all such B belong to _V.
The set := ⋂_d≥ 0_d is open and dense.
is dense by Baire's theorem; it is open by the following argument. By (<ref>), the existence of a destabilising map θℒ→_B implies
0 ≤ d ≤ h^0(Σ, _B) + 2g(Σ)-2.
The right-hand side can only decrease when B is replaced by a sufficiently close B'. (Indeed, if we split Γ(Σ,E) into _B and its L^2–orthogonal complement Q, then by the elliptic estimate _B' is non-degenerate when restricted to Q for all nearby connections B'; it follows that the projection _B'→_B is injective. See also
<cit.> for an algebro-geometric proof.) Therefore, to guarantee that a nearby connection B' belongs to it is enough to check that it belongs to _d for finitely many values of d. Thus, for every B ∈ there are finitely many open neighbourhoods of B whose intersection lies entirely in .
By Theorem <ref>, for a generic choice of the moduli space _ = is a compact complex manifold of dimension g(Σ) - 1 + 2d. If d < (1-g(Σ))/2, then this dimension is negative and _ must be empty.
The case d = 0 was discussed in the previous subsection. Let (ℒ) = d > 0; then
h^0( ⊗ℒ⊗ K^1/2 ) - h^1( ⊗ℒ⊗ K^1/2) = 2d > 0;
thus, ℒ is in the image of the projection π_→ J^d given by [ℒ, α, β] ↦ℒ. For a generic , by Theorem <ref>, _ = and so π^-1(ℒ) = H^0( ⊗ℒ).
For the proof of the third item, assume g(Σ) ≥ 2; the cases g(Σ) = 0, 1 will be considered separately in the next section. By Lemma <ref>, a generic is stable and by Serre duality,
h^1( ⊗ℒ⊗ K^1/2 ) = h^0( ^* ⊗ℒ^* ⊗ K^1/2 ).
Any element of H^0( ^* ⊗ℒ^* ⊗ K^1/2 ) gives a holomorphic map ℒ⊗ K^-1/2→^*. We have
(ℒ⊗ K^-1/2) = d - g + 1 ≥ 0.
Since ^* is stable, it follows that any holomorphic map ℒ⊗ K^-1/2→^* is trivial. Thus, h^1( ⊗ℒ⊗ K^1/2 ) = 0 and by the Riemann–Roch theorem h^0( ⊗ℒ⊗ K^1/2 ) = 2d for every ℒ∈ J^d. We conclude that π_→ J^d is the projectivisation of the rank 2d vector bundle whose fibre over ℒ is the cohomology group H^0( ⊗ℒ⊗ K^1/2 ) = ^2d.
§.§ Genus zero
Let Σ = ℂℙ^1 with its unique complex structure. For k ∈ℤ denote by (k) the unique holomorphic line bundle of degree k; K^1/2 = (-1) is the unique spin structure. By a theorem of Grothendieck, every holomorphic bundle over ℂℙ^1 is the direct sum of line bundles. In particular, every holomorphic (2,)–bundle is of the form = (k) ⊕(-k) for some k ≥ 0, with k = 0 being the generic case.
Let Σ = ℂℙ^1 and = (k) ⊕(-k) for k ≥ 0.
* If d ≤ 0 and k ≤ |d|, then _(d, ) is empty.
* If d > 0 and k ≤ d, then _(d, ) is Zariski smooth with the underlying complex manifold biholomorphic to ℂℙ^2d-1.
* If k > |d|, then _(d, ) is non-compact and its compactification _(d, ) is homeomorphic to a locally trivial ℂℙ^k-d–fibration over ℂℙ^k+d-1.
The fact that the Euler characteristic of the moduli space depends on the sign of d is consistent with the three-dimensional theory. Since b_1( S^1 ×ℂℙ^1) = 1 we do not expect the count of Seiberg–Witten multi-monopoles to be invariant, even when there are no Fueter sections; by Proposition <ref>, there are two chambers in the set of parameters which are separated by a codimension one wall where reducibles appear. As discussed in subsection <ref>, replacing d by -d can be seen as varying τ so that the sign of d - τ changes; any path of parameters joining such two choices of τ will pass through the wall of reducibles.
_(d, ) consists of the equivalences classes of pairs (α, β) such that
α∈ H^0( (k+d-1) ) ⊕ H^0((-k + d-1)),
β∈ H^0( (-k-d-1) ) ⊕ H^0( (k-d-1) ),
α≠ 0, and αβ = 0 ∈ H^0( (-2))—this is automatically satisfied since h^0((-2)) = 0.
If is generic, so that k = 0, then for d ≤ 0 we have α = 0, which implies that _(d,) is empty. If d > 0, then α∈ℂ^2d and β = 0; it follows that _(d, ) = (d, ) = ℂℙ^2d-1.
The same description of _(d, ) is valid in the non-generic case k ≠ 0 as long as k ≤ |d|. When k > |d| the moduli space is no longer compact and Fueter sections appear. If k > d > 0, then α∈^k+d, β∈^k-d and _(d, ) is the total space of the vector bundle (-1)^⊕ (k-d) over ℂℙ^k+d-1. The compactification _(d, ) is the ℂℙ^k-d-bundle over ℂℙ^k+d-1 obtained from the projectivisation of the vector bundle (-1)^⊕ (k-d)⊕.
§.§ Genus one
Let Σ = S^1 × S^1 be equipped with a complex structure making it into an elliptic curve. Isomorphism classes of line bundles of a given degree d on Σ form the Jacobian J^d which is isomorphic to the dual torus Σ^*. The canonical bundle of Σ is trivial and without loss of generality we can take K^1/2 also to be trivial. Holomorphic vector bundles over elliptic curves were classified by Atiyah <cit.>. A generic holomorphic SL(2,)–bundle is of the form = A ⊕ A^-1 for a degree zero line bundle A →Σ. We may moreover assume that A^2 ≠ since there are only four line bundles satisfying A^2 = .
Let Σ be an elliptic curve. Suppose that is generic, that is of the form = A ⊕ A^-1 for A ∈ J^0 satisfying A^2 ≠.
* If d < 0, then _(d, ) is empty.
* If d > 0, then _(d, ) is Zariski smooth with the underlying complex manifold biholomorphic to the projectivisation of a rank 2d vector bundle over J^d.
* If d = 0, then _(d, ) is regular and consists of two points.
If d >0, then the cohomology ring H^*( _(d, ), ) is isomorphic as H^*( J^d, )–modules to H^*( J^d, ) [ H ] / (H^2d) where (H) =2. In particular,
χ( _(d, ) ) = χ( J^d ) χ( ℂℙ^2d-1) = 0,
which is consistent with the fact that χ( _(d, ) ) is invariant under the change d ↦ -d.
_(d, ) consists of equivalence classes of triples (ℒ, α, β) where ℒ∈ J^d and
α∈ H^0( A ⊗ℒ) ⊕ H^0(A^-1⊗ℒ),
β∈ H^0(A^-1⊗ℒ^-1) ⊕ H^0( A ⊗ℒ^-1),
satisfying α≠ 0 and αβ = 0 in H^0( ) =.
For d < 0 we must have α = 0 and so the moduli space is empty. For d = 0 the only choices of ℒ for which α is possibly non-zero are ℒ = A^-1 and ℒ = A. If ℒ = A^-1, then
α∈ H^0( ) ⊕ H^0( A^-2),
β∈ H^0( ) ⊕ H^0( A^2).
Since A^2 and A^-2 are non-trivial, they have no non-zero sections. The only possible choice for α, up to scaling, is therefore α = (1,0) and the condition αβ = 0 forces β to be zero since the pairing H^0( ) × H^0( ) → H^0( ) is simply the multiplication ×→. We repeat the same argument for ℒ = A and conclude that _(d, ) consists of two isolated points. In particular it is compact and so Zariski smooth thanks to Corollary <ref>. Since it also has the correct dimension zero, we conclude that each of the points is regular.
Consider now the case d > 0. For every ℒ∈ J^d the Riemann–Roch theorem gives us
h^0( ℒ⊗ A ) - h^1( ℒ⊗ A) = d
and by Serre duality, h^1( ℒ⊗ A ) = h^0( ℒ^-1⊗ A) = 0 because (ℒ^-1⊗ A) < 0. Thus H^0 (ℒ⊗ A) = ^d and the same is true if A is replaced by A^-1. Therefore, α is identified with a non-zero vector in ^2d. On the other hand, β = 0 since A^± 1⊗ℒ^-1 has no non-trivial sections. Since the above discussion is valid for any ℒ∈ J^d, it follows that _(d, ) is a locally trivial ℂℙ^2d-1–fibration over J^d: the projectivisation of a rank 2d holomorphic vector bundle over J^d given by the push-forward of the Poincaré line bundle → J^d ×Σ to the first factor.
It is worthwhile discussing some non-generic examples. The cases when = A ⊕ A^-1 and either (A) ≠ 0 or A^2 = are similar to the ones already considered. Another possibility is that is indecomposable, in which case it is of the form = _0 ⊗ A where A ∈ J^0 satisfies A^2 = and _0 is the unique non-trivial bundle obtained as an extension
0 r r _0 r r 0.
The line bundle A is uniquely determined by .
Suppose without loss of generality that = _0. It is shown in <cit.> that if h^0( ⊗ℒ ) ≠ 0, then either (ℒ)>0 or ℒ = in which case we have h^0( ) = 1. We conclude that when d < 0 or d > 0 the moduli space _(d, ) is, respectively, empty or the projectivisation of a vector bundle over J^d. On the other hand, for d = 0 the only choice of ℒ for which h^0(⊗ℒ) > 0 is ℒ = and we look for holomorphic sections
α∈ H^0( ) = ,
β∈ H^0( ^* ) =
such that α≠ 0 and αβ = 0. Up to scaling, α = 1. We will show now that the pairing H^0( ) × H^0( ^* ) → is trivial and, as a consequence, β can be chosen arbitrarily. Let Ω∈ H^0( Λ^2 ) be a nowhere vanishing holomorphic volume form. It induces an isomorphism →^* given by v ↦Ω(v, ·). If α is a generator of H^0( ), then γ = Ω(α, ·) is a non-zero holomorphic section of H^0( ^*) and so it must be a generator. On the other hand, γ(α) = Ω(α, α) = 0 since Ω is skew-symmetric—this shows that αβ = 0 for every β∈ H^0( ^*). Therefore, _(0, ) is homeomorphic to . Its compactification _(0, ) is homeomorphic to ℂℙ^1.
§.§ Genus two
Let Σ be a genus two Riemann surface. By Lemma <ref>, a generic holomorphic bundle on Σ is stable. The proof of the next lemma can be found in <cit.>.
Let W be a stable rank two vector bundle with trivial determinant over a genus two closed Riemann surface Σ. If ξ→Σ is a degree 1 line bundle, then
* h^0(W ⊗ξ) ≤ 1.
* Any non-zero homomorphism ξ^* → W is everywhere injective.
Let Σ be a closed Riemann surface of genus two. For a generic holomorphic (2,)–bundle →Σ we have the following description of _(d, ).
* If d < 0, then _(d, ) is empty.
* If d = 0, then _(d, ) is Zariski smooth with the underlying complex manifold biholomorphic to a closed Riemann surface of genus five.
* If d > 0, then _(d, ) is Zariski smooth with the underlying complex manifold biholomorphic to the projectivisation of a rank 2d vector bundle over J^d.
Items (2) and (3) follow from Proposition <ref>. For d = 0 we use the relation between the moduli space of framed vortices and theta divisors described in subsection <ref>. Let →Σ be a stable (2,)–bundle. Let 𝒮𝒰(2) be the compactification of the moduli space of such bundles obtained by adding the S–equivalence classes of semi-stable bundles. As explained in <cit.>, we have | 2 Θ | = ℂℙ^3 and the map introduced in subsection <ref>
θ𝒮𝒰(2) →ℂℙ^3
↦θ()
is an isomorphism. Recall that θ() can be seen either as a subset of the Jacobian J^1
θ() = { A ∈ J^1 | h^0( ⊗ A) > 0 }
or as the corresponding point in | 2 Θ |.
J^1 is a two-dimensional complex torus and the map J^1 → |2 Θ|^* = (ℂℙ^3 )^* induces a degree four embedding of the Kummer surface J^1 / _2. Thus, as a subset θ() ⊂ J^1 is the preimage of the intersection of the Kummer surface in (ℂℙ^3 )^* with the hyperplane θ() ∈ℂℙ^3 under the quotient map J^1 → J^1 / _2. Since θ𝒮𝒰(2) →ℂℙ^3 is an isomorphism, by changing the background bundle we can obtain all hyperplanes in (ℂℙ^3)^*. In particular, for a generic choice of , the hyperplane θ() will avoid all the 16 singular points of J^1 / _2 and the intersection J^1/ _2 ∩θ() will be a smooth complex curve of degree four and genus three. Its preimage under J^1 → J^1 / _2 is a smooth curve C⊂ J^1 of genus five, by the Hurwitz formula.
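Explicitly, since the double cover C → J^1/_2 ∩θ() is unramified once the 16 fixed points of the involution are avoided, the Hurwitz formula reduces to χ(C) = 2 χ( J^1/_2 ∩θ() ) = 2(2 - 2 · 3) = -8, so that g(C) = 5.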
Let π_( 0 , ) → C be the composition of _(0, ) →(0, ) with the projection (0, ) →θ() = C from subsection <ref>. We claim that this map is an isomorphism for a generic choice of . In order to prove that, it is enough to check that the fibre over any line bundle ℒ⊗ K^1/2 in C consists of one point. This is equivalent to showing that H^0( ⊗ℒ⊗ K^1/2) is spanned by a single non-zero section α and if β∈ H^0( ^* ⊗ℒ^* ⊗ K^1/2) is any section satisfying αβ = 0, then β = 0. The first claim follows immediately from Lemma <ref>. As regards the second claim, assume that α and β are as above and β≠ 0. By Lemma <ref>, the homomorphisms
αℒ^* ⊗ K^-1/2⟶ and βℒ⊗ K^-1/2⟶^*
are everywhere injective. Since =2, αβ = 0 implies the exactness of the sequence
0 r ℒ^* ⊗ K^-1/2rα rβ^t ℒ^* ⊗ K^1/2r 0.
Since =, we conclude ℒ^2 =. There are 16 line bundles satisfying this condition: order two elements of J^0. For each of them, all possible non-trivial extensions as above are classified by the corresponding extension class in ℙH^1(K^-1) = ℂℙ^2. (Note that the extension is non-trivial because is stable.) Thus, all stable bundles that can be represented as such an extension form a proper subvariety of 𝒮𝒰(2) = ℂℙ^3 consisting of the images of 16 maps ℂℙ^2 →𝒮𝒰(2). A generic stable bundle will not belong to this subvariety. In this case, we conclude that each fibre of the map π consists of a single point and π is an isomorphism. In particular, _(0, ) is compact and therefore Zariski smooth by Corollary <ref>.
Note that the last part of the proof was unnecessary; we already know that generically _ = is compact and Zariski smooth which is enough to conclude = C. On the other hand, the argument presented above identifies the locus of those semi-stable bundles ∈𝒮𝒰(2) for which Fueter sections appear. It consists of strictly semi-stable bundles A ⊕ A^-1, for some A ∈ J^0, which form the Kummer surface J^1 / _2 in 𝒮𝒰(2) = ℂℙ^3, and stable bundles that arise from an extension of the form (<ref>) for some element ℒ∈ J^0 of order two. The latter form a subvariety covered by the images of 16 maps ℂℙ^2 →ℂℙ^3.
http://arxiv.org/abs/1701.07834v1 | 20170126190004 | Proper motions and structural parameters of the Galactic globular cluster M71 | M. Cadelano, E. Dalessandro, F. R. Ferraro, P. Miocchi, B. Lanzoni, C. Pallanca, D. Massari | astro-ph.SR | astro-ph.SR, astro-ph.GA
1 Dipartimento di Fisica e Astronomia,
Università di Bologna, Viale Berti Pichat 6/2, I-40127 Bologna,
Italy
2 INAF - Osservatorio Astronomico di Bologna,
Via Ranzani 1, I-40127 Bologna, Italy
3 Kapteyn Astronomical Institute, University of
Groningen, P.O. Box 800, 9700 AV Gröningen, The Netherlands
By exploiting two ACS/HST datasets separated by a temporal baseline of
∼7 years, we have determined the relative stellar proper motions
(providing membership) and the absolute proper motion of the Galactic
globular cluster M71. The absolute proper motion has been used to
reconstruct the cluster orbit within a Galactic, three-component,
axisymmetric potential. M71 turns out to be in a low latitude
disk-like orbit inside the Galactic disk, further supporting the
scenario in which it lost a significant fraction of its initial mass.
Since large differential reddening is known to affect this system, we
took advantage of near-infrared, ground-based observations to
re-determine the cluster center and density profile from direct star
counts. The new structural parameters turn out to be significantly
different from the ones quoted in the literature. In particular, M71
has a core and a half-mass radii almost 50% larger than previously
thought. Finally we estimate that the initial mass of M71 was likely
one order of magnitude larger than its current value, thus helping to
solve the discrepancy with the observed number of X-ray sources.
§ INTRODUCTION
Galactic globular clusters (GCs) are dense and old (t>10 Gyr)
stellar systems containing up to ∼10^6 stars, orbiting the Milky Way
halo and bulge. Their study is crucial to understand
the dynamical evolution of collisional systems
<cit.> and the interplay between dynamics
and stellar evolution <cit.>. Their high central densities provide
the ideal ground to the formation of exotic objects like blue
straggler stars, cataclysmic variables, low-mass X-ray binaries and
millisecond pulsars <cit.>.
In this respect, remarkable is the case of M71, which is a low-density
GC located at a distance of about 4 kpc from Earth. It has a quite
high metallicity ([Fe/H]=-0.73), a color excess E(B-V)=0.25
<cit.> and a total mass of about 2 ×
10^4 M_⊙ <cit.>. X-ray observations revealed that it
hosts a large population of X-ray sources, most likely consisting of
stellar exotica. Surprisingly, as discussed in <cit.>, the number of X-ray detections in M71 is significantly
larger than what is expected from its present-day mass and its
collisional parameter (which is a characteristic indicator of the
frequency of dynamical interactions and thus of the number of stellar
exotica in a GC; e.g. ). However, it is worth
noticing that M71 is located at a low Galactic latitude
(l=56.75^∘,b=-4.56^∘), likely on a disk-like orbit
<cit.>. Hence, it could have lost a substantial
fraction of its initial mass, due to heavy interactions with the
Galactic field and to shocks caused by encounters with molecular
clouds and/or spiral arms. Moreover the structural parameters
of this cluster have been estimated from shallow optical images <cit.>,
and therefore need to be re-determined more accurately.
Hence, the value of the collisional parameter,
which directly depends on the cluster structural parameters
<cit.>, could be biased.
By taking advantage of two epochs of observations obtained with the Hubble Space Telescope (HST) and wide-field
near-infrared and optical datasets for M71, here we present the
determination of: (i) the stellar proper motions (which allow
us to distinguish cluster members from Galactic contaminants),
(ii) the absolute PM of the system (from which we estimate its
orbit within the Galaxy during the last 3 Gyr), and (iii) the
cluster gravitational center and structural parameters.
In Section <ref> we describe the procedures adopted for the data
reduction and analysis. Sections <ref> and <ref> are devoted
to the determination of relative stellar proper motions (PMs), and of
the cluster absolute PM and orbit, respectively. In Section
<ref> we present the new determination of cluster gravity
center, density profile and structural parameters from near-infrared
data and we study how the latter change if optical observations are used
instead. We also provide an estimate of the initial mass of the
system. Finally, in Section <ref> we summarize the results and discuss the
X-ray source abundance discrepancy in light of the new values of the
cluster structural parameters and the initial mass estimate.
§ OBSERVATIONS AND DATA REDUCTION
The present work is based on two different datasets. Their
characteristics and the adopted data reduction procedures are
described in the following.
High-Resolution Dataset – This has been used to determine the
stellar PMs. It consists of two sets of images acquired with the Wide Field Channel (WFC) of the
Advanced Camera for Surveys (ACS) mounted on HST (see Figure <ref> for a map of the fields of view -
FOVs - covered by these observations). This camera provides a FOV of 202×202 with a pixel scale of 0.05 pixel^-1. The first epoch data have been
collected under GO10775 (P.I.: Sarajedini) on 2006 July 1, and consist
of a set of ten dithered images, five in the F606W filter (with
exposure times: 1 × 4 s; 4 × 75 s) and five in the F814W
filter (1 × 4 s; 4 × 80 s). The second epoch is
composed of proprietary data obtained under GO12932 (P.I.: Ferraro) on
2013, August 20. It consists of a set of ten deep images acquired
through the F606W filter (2 × 459 s; 3 × 466 s; 5
× 500 s) and nine images in the F814W filter (5 × 337 s;
3 × 357 s; 1 × 440 s). The photometric analysis has been
performed on the -flc images (which are corrected for flat
field, bias, dark counts and charge transfer efficiency) following the
procedures described in detail in <cit.>. Briefly, both
the epochs have been analyzed with the publicly available program
img2xym_WFC.09x10, which uses a pre-determined model of a
spatially varying point spread function (PSF) plus a single
time-dependent perturbation PSF (to account for focus changes or
spacecraft breathing). The final output of this process are two catalogs (one for each epoch)
with instrumental magnitudes and positions for all the sources above a given threshold.
Star positions were corrected in each catalog for geometric distortion by adopting the solution provided by <cit.>.
By using the stars in common with the public catalog of
<cit.>, instrumental
magnitudes have been calibrated on the VEGAMAG system and instrumental
positions have been reported on the absolute right ascension and declination coordinate
reference system (α and δ, respectively). The final
color-magnitude diagrams (CMDs) are shown in Figure <ref> for the
two different epochs.
Wide-field Dataset – To determine the cluster gravitational
center and structural parameters, we used ground-based near-infrared
images (Prop ID: 11AD90; PI: Thanjavur) obtained with the wide field
imagers WIRCam mounted at the Canada-France-Hawaii Telescope
(CFHT). To study the effect of differential reddening, we also made
use of optical wide-field images (Prop ID: 04AC03, 03AC16; PI: Clem)
acquired with MegaCam at the same telescope. The WIRCam camera
consists of a mosaic of four chips of 2040×2040 pixels each,
with a pixel scale of 0.31 pixel^-1, providing a
total FOV of ∼21.5× 21.5. We analyzed seven
images obtained with the J and K_s filters, with exposure times of 5 s
and 24 s, respectively. A dither pattern of few arcseconds was applied
to fill the gaps among the detector chips. The MegaCam camera
consists of a mosaic of 36 chips of 2048×4612 pixels each, with
a pixel scale of 0.185 pixel^-1 providing a FOV of
∼1^∘×1^∘. A total of 50 images have been
acquired, both in the g' and in the r' bands, with exposure times of
250 s each. A dither pattern of few arcseconds was adopted for each
pointing, thus allowing the filling of most of the interchip gaps,
with the exception of the most prominent, horizontal ones. Figure
<ref> shows the map of the Wide-field dataset.
For both these sets of observations, the images were pre-processed
(i.e. bias and flat-field corrected) by means of the Elixir pipeline
developed by the CFHT team and the photometric analysis has been
performed independently on each chip by following the procedures
described in <cit.>. Briefly, by means of an iterative
procedure, an adequate number (> 20) of isolated and bright stars
has been selected in each chip and filter to model the PSF. Hence, the
PSF model has been applied to all the stellar-like sources at about
4σ from the local background by using DAOPHOT and the
PSF-fitting algorithm ALLSTAR <cit.>. For each filter and
chip, we matched the single-frame catalogs to obtain a master
list. Each master list includes the instrumental magnitudes, defined
as the weighted mean of the single image measurements reported to the
reference frame of the transformation, and the error, which is the
standard deviation of the mean. Instrumental magnitudes have been
reported to the SDSS photometric system[See
<http://www.cfht.hawaii.edu/Science/CFHTLS-DATA/megaprimecalibration.html#P2.>]
for the MegaCam catalog, and to the 2MASS system for the WIRCam
catalog. Finally the instrumental positions have been reported to
the absolute coordinate reference frame by using the stars in common
with the 2MASS catalog[Publicly available at
<http://vizier.u-strasbg.fr>]. The CMDs for these datasets are
shown in Figure <ref> for stars located at less than 300
from the center.
As can be seen from both Fig. <ref> and Fig. <ref>, the
standard evolutionary sequences are well defined. However, they are
also heavily contaminated by foreground objects, as expected from the
location of M71 close to the Galactic disk.
§ RELATIVE PROPER MOTIONS
To study the PMs of M71, we used the high resolution datasets. These
are separated by a temporal baseline of 7.274 years and because of
their different orientation, pointing and magnitude limit, the PM analysis could be performed only on the ∼5000 stars in common, located in the overlapping FOV (see Figure <ref>) and
having magnitudes 18<m_ F814W<24 (corresponding to magnitudes 19<m_ F606W<25).
We adopted the procedure described in <cit.>. Briefly, we used six parameters linear transformations[To do this we applied six parameters linear transformations using CataXcorr, a code developed by P. Montegriffo at INAF- Osservatorio Astronomico di Bologna. This package is available at http://davide2.bo.astro.it/?paolo/Main/CataPack.html, and has been successfully used in a large number of papers by our group in the past 10 years. ] to report the coordinates of the stars in each exposure to the distortion-free reference catalog of
<cit.>. Since we are interested in the stellar PMs relative to the cluster frame, these transformations have been determined by using a sample of ∼6600 stars that, in the reference catalog, are likely cluster members on the basis of their CMD position (i.e. stars located along the main sequence). Moreover, the transformations have been determined independently on each detector chip in order to maximize the accuracy. At the end of the procedure, for each of the ∼5000 stars we have up to ten position measurements in the first epoch catalog and up to nineteen in the second epoch catalog. To determine the relative PMs, we computed the mean X and Y positions of each star in
each epoch, adopting a 3σ clipping algorithm. The star PMs are
thus the difference between the mean X,Y positions evaluated in the
two epochs, divided by Δ t=7.274 years. The resulting PMs
are in units of pixels years^-1. Since the master frame is
already oriented according to the equatorial coordinate system, the
X-component of the PM corresponds to a projected PM along the
(negative) RA and the Y-component corresponds to a PM along the
Dec. Therefore, we converted our PMs in units of mas years^-1
by multiplying the previous values for the ACS pixel scale (0.05 pixel^-1), and we named μ_αcos(δ) and μ_δ
the PMs along the RA and Dec directions, respectively. To maximize
the quality of our results, we built a final PM catalog by taking
into account only stars for which at least three position measurements
are available in each epoch. At the end of the procedure we counted
4938 stars with measured PMs. The errors in the position of the stars
in each epoch (σ_1,2^α,δ) have been
calculated as the standard deviation of the measured positions around
the mean value. Then the errors in each component of the PM have been
assumed as the sum in quadrature between the error in the first and
second epoch: σ_ PM^α,δ = √(
(σ_1^α,δ)^2 + (σ_2^α,δ)^2 )/
Δ t. The errors as a function of the star magnitudes are shown
in Figure <ref>. For both the PM directions, the typical
uncertainty for stars with m_F814W<21 is less of ∼
0.07 mas yr^-1, demonstrating the good quality of our
measurements. Following <cit.>, we also verified that our PM measurements are not affected by chromatic effects, i.e., there is no dependence of μ_αcos(δ) and μ_δ on the (F606W-F814W) color.
Finally, our PM measurements are not even affected by positional
effects, i.e., there is no dependence of the derived PMs on the
instrumental (X,Y) positions.
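To make the procedure concrete, the per-star computation described above can be summarized in a few lines of Python. This is a minimal sketch only: the function names, the NumPy implementation and the clipping details are ours and may differ from the actual pipeline.

import numpy as np

BASELINE_YR = 7.274   # temporal baseline between the two ACS epochs
PIX_TO_MAS = 50.0     # ACS/WFC pixel scale: 0.05 arcsec/pixel = 50 mas/pixel

def clipped_mean(vals, nsigma=3.0, max_iter=10):
    """Mean and standard deviation of repeated position measurements,
    with iterative 3-sigma clipping."""
    vals = np.asarray(vals, dtype=float)
    for _ in range(max_iter):
        mean, std = vals.mean(), vals.std()
        keep = np.abs(vals - mean) <= nsigma * std
        if keep.all():
            break
        vals = vals[keep]
    return vals.mean(), vals.std()

def relative_pm(pos_epoch1, pos_epoch2):
    """Relative PM (mas/yr) along one axis for one star; the error is the
    quadrature sum of the two positional errors divided by the baseline."""
    m1, s1 = clipped_mean(pos_epoch1)
    m2, s2 = clipped_mean(pos_epoch2)
    pm = (m2 - m1) / BASELINE_YR * PIX_TO_MAS
    err = np.hypot(s1, s2) / BASELINE_YR * PIX_TO_MAS
    return pm, err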
In Figure <ref> we show the PM distribution in the vector
points diagram (VPD). As can be seen, the VPD is dominated by two
prominent features: the clump in the center with zero relative PM is,
by definition, dominated by the cluster population, while the
elongated sparse distribution of points extending beyond this clump is
dominated by contaminating field stars, mostly from the Galactic
disk. At first inspection of the VPD we can see that only ≲
60% of the ∼5000 analyzed stars are likely cluster members. A high
percentage of field contamination is indeed expected in the case of
M71, since it has a quite low stellar density and is located in a
crowded field at low Galactic latitudes. By selecting in the VPD the
likely cluster members (i.e. the stars with relative PM around 0 both
in RA and in Dec) we find that the mean motion is 0.01 mas
yr^-1 with a standard deviation of 0.1 mas yr^-1 both
in α and in δ, thus, as expected, consistent with zero.
The effect of decontaminating the CMD from field stars is shown in
Figure <ref>, where we have separated the objects with PM≲
0.6 mas yr^-1 (likely cluster members), from those with larger
PMs. The selection of stars in the VPD is shown, per bins of one
magnitude, in the left-hand column of the figure, with the objects
having PM≲ 0.6 mas yr^-1 encircled in red. The effect on
the CMD is shown in the other three columns: from left to right, the
observed CMD, the CMD of cluster members only, and the CMD of field
stars. In the latter, it is well appreciable the main sequence of the
Galactic field. Instead, the decontaminated CMD clearly shows a sharp
and well defined main sequence, also revealing the binary sequence.
The few stars on the blue side of the main sequence could be cluster
exotic objects, such as cataclysmic variables, X-ray binaries or
millisecond pulsars <cit.>,
where a main sequence companion star is heated by a compact object. Nonetheless, we cannot completely rule out the
possibility that some of these stars are field objects with PMs
compatible with those of the cluster members.
§ ABSOLUTE PROPER MOTIONS
To transform the relative PMs into absolute ones, we used
background galaxies as reference, since they have negligible PMs due
to their large distances. This method has been successfully used in
several previous works <cit.>. Unfortunately, the NASA Extragalactic Database report no
sources in the FOV used for the PM estimate. Thus, we carefully
inspected our images in order to search for diffuse galaxy-like
objects. We found four galaxies with central point-like structure and
relative high brightness, which allowed us to precisely determine
their centroid position. Although many other galaxies are present in
the FOV, they have no point-like structure or are too faint to allow
the determination of a reasonable PM value. Moreover, as part of a
project aimed at searching for optical counterparts to X-ray sources,
we identified two promising active galactic nuclei (AGN) candidates. Two Chandra X-ray sources,
named s05 and s41 in <cit.>, have high energy and optical
properties that can be attributed either to AGNs or to cataclysmic
variables <cit.>. In order to
distinguish between these two possibilities, we analyzed their PMs. We
reported our relative PM reference frame to the absolute cluster PM
(μ_αcosδ, μ_δ=-3.0±1.4, -2.2±1.4
mas yr^-1) previously determined by <cit.> and found
that these two sources have an absolute PM significantly different
from the cluster motion and compatible with zero. We therefore
conclude that these two objects are likely background AGNs[Of
course, these sources could be foreground cataclysmic variables with
PMs almost perfectly aligned with our line of sight, but this
possibility seems to be quite unlikely.] and add them to the list of
objects used to determine our reference absolute zero point. The six
selected objects are located very close to each other in the VPD, as
expected for extragalactic objects, and their finding charts are shown
in Figure <ref>. We defined the absolute zero point as the
weighted mean of their relative PMs and assumed as error the
uncertainty on the calculated mean. By anchoring this mean position to
the (0,0) mas yr^-1 value, we find that the absolute PM of M71
is:
(μ_αcosδ, μ_δ)=(-2.7±0.5,
-2.2±0.4) mas yr^-1.
This value is in good agreement with (but more accurate than) the
previous determination <cit.>, and it remains unchanged
within the errors even if the two candidate AGNs are excluded from the
analysis: in that case we get: (μ_αcosδ,
μ_δ)=(-2.4±0.6, -1.9±0.1) mas yr^-1, still
in agreement with the previous results. The VPD in the absolute frame
is plotted in Figure <ref>, with the red and green crosses and
circles marking, respectively, the absolute PM and its uncertainty as
determined in this study and as quoted in <cit.>.
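Schematically, the zero-point determination is a weighted mean over the six extragalactic references. A sketch of the arithmetic follows; the input values below are illustrative placeholders, not the measured ones.

import numpy as np

# Relative PMs (mas/yr) of the six extragalactic references and their
# uncertainties; placeholder numbers for illustration only.
pm_ra   = np.array([2.65, 2.75, 2.70, 2.62, 2.78, 2.71])
pm_dec  = np.array([2.18, 2.25, 2.15, 2.22, 2.24, 2.16])
err_ra  = np.array([0.10, 0.12, 0.15, 0.09, 0.20, 0.11])
err_dec = np.array([0.11, 0.10, 0.14, 0.10, 0.18, 0.12])

def weighted_zero_point(pm, err):
    w = 1.0 / err**2
    return np.sum(w * pm) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

zp_ra, zp_ra_err = weighted_zero_point(pm_ra, err_ra)
zp_dec, zp_dec_err = weighted_zero_point(pm_dec, err_dec)

# Anchoring the mean extragalactic motion to (0,0) mas/yr, the cluster
# absolute PM is minus the zero point measured in the cluster rest frame.
mu_ra_abs, mu_dec_abs = -zp_ra, -zp_dec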
Since every absolute PM measurement is strictly dependent on the
accuracy of the absolute reference frame, we need to verify the
possible presence of systematic errors in its determination. One of
the possible source of systematic errors is the rotation of the GC on
the plane of the sky. Indeed, since we used only cluster stars to
define the relative reference frame, if the GC is rotating, then our
frame will be rotating too. This would introduce an artificial
rotation to background and foreground objects around the cluster
center. To quantify this possible effect we followed the procedure described in <cit.>. We selected a sample of field stars as those that in the VPD of Figure <ref> have relative PMs larger than 0.8 mas yr^-1. Then we decomposed their PM vectors into a
radial and tangential component with respect to the cluster center. If the GC is rotating,
we would expect to find a clear dependence of the PM tangential component on the distance from the cluster center. Such a dependence is however excluded by our results, showing that the internal regions of M71 are not rotating, in agreement with the recent findings by <cit.>.
We also compared the field star motion to that expected
from theoretical Galactic models in the analyzed FOV. To evaluate the
field mean motion, we followed the procedure described in
<cit.>. First, we excluded the stars within 0.8 mas
yr^-1 from the cluster mean motion. Then we iteratively removed
field stars in a symmetric position with respect to the GC exclusion
region and evaluated the weighted mean motion by applying a 3σ
algorithm. We found (μ_αcosδ,
μ_δ)=(-2.0±0.2, -4.3±0.2) mas yr^-1. We
compared these values with those predicted for the same region of the
sky in the Besançon Galactic model <cit.>, simulating a
sample of ∼2000 artificial stars distant up to 15 kpc from the
Galactic center, in a FOV centered on M71, covering a solid angle of
∼11, and having V magnitudes ranging from 12 to 25. The
predicted field mean motion is (μ_αcosδ,
μ_δ)=(-2.4, -4.7) mas yr^-1, in good agreement
with our results.
§.§ The cluster orbit
The GC absolute PM, combined with the radial velocity
v_r=-23.1±0.3 km s^-1 from <cit.>, can be used to
determine 3D space velocity of the cluster in a Cartesian
Galactocentric rest frame. Using the formalism described in
<cit.>, assuming the Local Standard of Rest velocity equal
to 256 km s^-1 <cit.> and using the value of the Sun velocity
with respect to it from <cit.>, we obtained (v_x,v_y,v_z )=(52±10,204±6,31±12) km
s^-1, where the major source of uncertainty is the GC absolute PM
error.
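This conversion can be reproduced with standard tools. Below is a sketch using astropy, where the cluster coordinates, the distance and the exact Galactocentric frame parameters are our reading of the assumptions quoted above, so the output should be checked against the published values rather than taken as exact.

import astropy.units as u
from astropy.coordinates import SkyCoord, Galactocentric, CartesianDifferential

# M71 observables: position (J2000), assumed distance, measured absolute
# PM and radial velocity. Coordinates and distance are indicative values.
m71 = SkyCoord(ra=298.444 * u.deg, dec=18.779 * u.deg,
               distance=4.0 * u.kpc,
               pm_ra_cosdec=-2.7 * u.mas / u.yr,
               pm_dec=-2.2 * u.mas / u.yr,
               radial_velocity=-23.1 * u.km / u.s)

# Frame with R_sun = 8.4 kpc and v_LSR = 256 km/s plus the solar
# peculiar motion (U, V, W) = (11.1, 12.24, 7.25) km/s.
frame = Galactocentric(
    galcen_distance=8.4 * u.kpc,
    galcen_v_sun=CartesianDifferential([11.1, 256.0 + 12.24, 7.25] * u.km / u.s))

gc = m71.transform_to(frame)
print(gc.x, gc.y, gc.z)        # expected close to (-6.2, 3.4, -0.32) kpc
print(gc.v_x, gc.v_y, gc.v_z)  # expected close to (52, 204, 31) km/s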
We then used the 3D velocity of the cluster and its current
Galactocentric position[We adopted the Galactic coordinates
quoted in <cit.> and the convention in which the X axis
points opposite to the Sun, i.e., the Sun position is (-8.4,0,0), where the distance of the Sun from the Galactic center is from <cit.>.]
(x,y,z )=(-6.2±0.6,3.4±0.3,-0.32±0.03
) kpc, to reconstruct its orbit in the axisymmetric potential
discussed in <cit.>, which has been extensively used to study
the kinematics of Galactic stellar systems
<cit.>. The orbit was
time-integrated backwards, starting from the current conditions and
using a second-order leapfrog integrator <cit.>
with a small time step of ∼100 kyr. Since the adopted Galactic
potential is static, we chose to back-integrate the orbit only for 3
Gyr, since longer backward integrations become uncertain due to their
dependence on the Galactic potential variations as a function of
time. This numerical integration required about 32000 steps and
reproduced ∼20 complete cluster orbits. The errors on the
conservation of the energy and the Z-component of the angular momentum
never exceeded one part over 10^9 and 10^16, respectively. We
generated a set of 1000 clusters starting from the phase-space initial
conditions normally distributed within the uncertainties. For all of
these clusters we repeated the backward time
integration. Figure <ref> shows the resulting cluster orbits in
the equatorial and meridional Galactocentric plane. As can be seen,
the cluster has a low-latitude disk-like orbit within the Galactic
disk. Indeed in the equatorial plane it reaches a maximum distance of
∼8 kpc from the Galactic center and a minimum distance of ∼5
kpc. Thus, it orbits around the assumed spheroidal bulge, never
crossing it. Moreover, it persists on a low-latitude orbit, with a
typical height from the Galactic plane of about ±0.4 kpc, thus
again confined within the disk. The estimated orbits indicate that, at
least during the last 3 Gyr, M71 tightly interacted with the inner
Galactic disk. At variance with the large majority of Galactic GCs, which move on wide orbits across the (low-density) halo, these interactions likely induced heavy mass-loss <cit.> in M71, thus supporting the possibility that it lost a significant fraction of its initial mass, as already suggested by its flat mass function
<cit.>. Moreover, such a heavy mass-loss could finally
explain why M71 harbors a large population of X-ray sources, in spite
of its present low mass <cit.>.
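As an aside for reproducibility, the back-integration scheme described above can be sketched in a few lines of Python; the kick-drift-kick leapfrog below is a minimal stand-in (the acceleration of the actual three-component Allen & Santillan potential is replaced here by a placeholder point mass, and the units are schematic).
import numpy as np
def accel(pos, GM=1.0):
    # placeholder point-mass acceleration; the real model is the
    # axisymmetric potential of Allen & Santillan (1991)
    r = np.linalg.norm(pos)
    return -GM * pos / r**3
def leapfrog(pos, vel, dt, n_steps):
    for _ in range(n_steps):
        vel = vel + 0.5 * dt * accel(pos)   # half kick
        pos = pos + dt * vel                # drift
        vel = vel + 0.5 * dt * accel(pos)   # half kick
    return pos, vel
# backward integration uses a negative time step; ~3 Gyr in ~100 kyr steps
# corresponds to the ~32000 steps quoted above
pos, vel = leapfrog(np.array([-6.2, 3.4, -0.32]),   # kpc
                    np.array([52.0, 204.0, 31.0]),  # km/s
                    dt=-1e-4, n_steps=32000)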
§ GRAVITATIONAL CENTER, STRUCTURAL PARAMETERS AND INITIAL MASS
In this section we present the determination of the gravitational
center and of the new structural parameters of M71.
§.§ Gravitational center
To avoid biases due to the strong differential reddening affecting the
system <cit.>, for the determination of the
cluster center of gravity C_ grav we used the near-infrared
WIRCam catalog, which has the same level of completeness of the ACS
one in the magnitude range 14 < K_s < 16.8. C_ grav has been determined following an iterative procedure that, starting from a first-guess center, selects a sample of stars within a circle of radius r and re-determines the center as the average of the star coordinates (α and δ). The procedure stops when convergence is reached, i.e., when the newly-determined center coincides with the previous one within an adopted tolerance limit <cit.>.
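A minimal sketch of this iterative procedure (assuming the stellar positions have already been projected onto a tangent plane; the radius r and the tolerance are free parameters, chosen as described next) could read:
import numpy as np
def find_center(x, y, x0, y0, r, tol=1e-4, max_iter=100):
    # x, y: star coordinates; (x0, y0): first-guess center
    for _ in range(max_iter):
        inside = (x - x0)**2 + (y - y0)**2 < r**2    # stars within radius r
        x_new, y_new = x[inside].mean(), y[inside].mean()
        if np.hypot(x_new - x0, y_new - y0) < tol:   # convergence reached
            return x_new, y_new
        x0, y0 = x_new, y_new
    return x0, y0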
For M71,
which is a relatively loose GC
<cit.>, we repeated the procedure eighteen times, using
different values of r and selecting stars in different magnitude
ranges, chosen as a compromise between having high enough statistics
and avoiding spurious effects due to incompleteness and saturation.
In particular, the radius r has been chosen in the range
140-160 with a step of 10, thus guaranteeing
that it is always larger than the literature core radius
r_c=37.8 <cit.>. For each radius r, we have
explored six magnitude ranges, from K_s>14 (in order to exclude stars
close to the saturation limit), down to K_s=16.3-16.8, in steps of 0.1
magnitudes. As first-guess center we used that quoted by
<cit.>. The final value adopted as C_ grav is the
mean of the different values of RA and Dec obtained in the eighteen
explorations, and its uncertainty is their standard deviation. We
found RA=19^h53^m46.106^s and
Dec=+18^∘4643.38, with an uncertainty of about
1.7. The newly determined center of M71 is ∼ 5.7
west and ∼ 0.3 north of the one measured from optical
ACS data by <cit.>. Such a discrepancy is likely
ascribable to an effect of differential reddening impacting the
optical determination.
§.§ Stellar density profile
Since the surface brightness profile can suffer from strong biases and
fluctuations due to the presence of a few bright stars <cit.>, in order to re-evaluate the
structural parameters of M71 we used direct star counts. The
determination of the stellar density profile (number of stars per unit
area, in a series of concentric annuli around C_ grav) has been
performed following the procedure fully described in
<cit.>. Also in this case, in order to minimize the
differential reddening effect we used the near-infrared WIRCam data,
which covers distances out to ∼ 1000 from C_ grav
in the south-west portion of the cluster (see Fig. <ref>).
To build the density profile we considered 13 concentric annuli around
C_ grav, each one divided into four sub-sectors. We then
counted the number of stars with 14<K_s<16.5 in each sub-sector and
divided it by the sub-sector area. The projected stellar density in
each annulus is the mean of the values measured in each sub-sector and
the uncertainty has been estimated from the variance among the
sub-sectors. The stellar background has been estimated by averaging
the outermost values, where the profile flattens, and it has been
subtracted from the observed distribution to obtain the decontaminated
density profiles. The result is shown in Figure <ref>.
The cluster structural parameters have been derived by fitting the
observed density profiles with a spherical, isotropic, single-mass
<cit.> model.[These models can be generated and
freely downloaded from the Cosmic-Lab web site:
<http://www.cosmic-lab.eu/Cosmic-Lab/Products.html>. The fitting
procedure is fully described in <cit.>]. The single-mass
approximation is justified by the fact that the magnitude range chosen
to build the profile includes cluster stars with negligible mass
differences. The best-fit model results in a cluster with a King
dimensionless potential W_0=5.55± 0.35, a core radius
r_c=56.2^+4.5_-4.0 arcsec, a half-mass radius r_h=
146.2^+11.5_-10.0 arcsec, a truncation radius r_t=
871.8^+247_-164 arcsec and, thus, a concentration parameter,
defined as the logarithm of the truncation to the core radius,
c=log r_t/r_c=1.19.
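As an illustration of profile fitting of this kind, the short Python sketch below fits the simpler empirical King (1962) formula to synthetic data; this is only a stand-in for the dynamical <cit.> models actually used, which must be generated numerically with the Cosmic-Lab tools.
import numpy as np
from scipy.optimize import curve_fit
def king62(r, k, rc, rt):
    # empirical King (1962) profile, an illustrative stand-in only
    term = lambda rr: (1.0 + (rr / rc)**2)**-0.5
    return k * (term(r) - term(rt))**2
radii = np.logspace(0.5, 3.0, 13)            # arcsec, as the 13 annuli above
density = king62(radii, 10.0, 56.2, 871.8)   # synthetic stand-in data
popt, pcov = curve_fit(king62, radii, density, p0=[10.0, 50.0, 800.0])
print(popt)                                  # recovers (k, r_c, r_t)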
There is a significant difference between these parameters and those quoted in the <cit.> catalog, originally estimated by <cit.> from a surface brightness profile obtained from shallow optical images: r_c = 37.8, r_h = 100.2 and r_t = 533.9 (the latter being derived from the quoted value of the concentration parameter: c = 1.15). To further investigate this discrepancy, we built the cluster surface brightness profile using a K-band 2MASS image, and we found that it is in agreement with the number density profile shown in Figure <ref>, thus further reinforcing the reliability of the newly-determined parameters. On the other hand, if we take into account only the brightest pixels of the K-band image, we find a surface brightness profile consistent with the literature one. This implies that the structural parameters quoted in the literature (which are determined from the light of the most luminous giants only) are not representative of the overall cluster profile.
The availability of a very wide (∼1^∘×1^∘)
sample at optical wavelengths (the MegaCam dataset) with an analogous
level of completeness (comparable to the ACS one for 13<g'<19)
allowed us to investigate how the derivation of the cluster stellar number
density profile from optical observations can be affected by the
presence of large differential extinction. Figure <ref>
compares the extinction map and the 2D density map of the
1500× 1500 region of the sky centered on M71. The
former is obtained from <cit.> and shows that the color
excess E(B-V) varies from ∼ 0.24 to ∼ 0.54, with several “spots” and a clear gradient across
the field. The density map in the right-hand panel shows the number of
stars with 13<g'<19, per unit area, detected in the MegaCam
sample. As expected, at large scales it reveals a direct
correspondence with the extinction map: in particular, the stellar
density manifestly drops in the north-west sector, where the color
excess is the highest, while the opposite is true in the south-east
part of the cluster. Obviously, this is expected to significantly
impact the density profile obtained from star counts in the optical
bands.
To quantitatively test this effect, we determined the cluster density
profile by using the MegaCam (optical) data. The result is plotted in
Figure <ref> and shows that, indeed, the structural parameters
of the best-fit King model turn out to be very different from those
obtained from the near-infrared (almost reddening-unaffected) dataset
(compare with Fig. <ref>). In particular, the concentration
parameter is much larger (c=1.6), as a consequence of a comparable
core radius (r_c=58 versus 56.2), but a more
than doubled truncation radius (r_t=2347.8 versus
871.8). Such a severe over-estimate of r_t is due to the
high extinction affecting the external portions of the MegaCam sample,
where the Galactic field background is evaluated, and it clearly
demonstrates how important it is to properly account for differential
reddening in the determination of a cluster density profile.
§.§ Cluster initial mass
In Sect. <ref> we have argued that M71 likely lost a
significant fraction of its original mass, mostly due to environmental
effects. In this section, we attempt to estimate the total cluster
initial mass. Although many recipes can be used to this aim
<cit.>, we adopted the simple analytical approach
described in <cit.>. It describes the way a cluster
loses its mass due to the effects of both stellar and dynamical
evolution (including processes such as interactions with the Galactic
tidal field and shocks due to encounters with giant molecular clouds
or spiral arms). Although this method has been developed specifically
for open clusters, it can be used also in the case of M71, since its
current mass (M=2.0^+1.6_-0.9×10^4 M_⊙; from
<cit.>) and orbit are consistent with those typical of
open clusters <cit.>. The initial mass M_ ini of the
cluster can be expressed as follows:
M_ini ≃ [ (M/M_⊙)^γ + γ t/t_0 ]^1/γ [ 1 - q_ev(t) ]^-1 ,
where M is the cluster current mass, t=12±1 Gyr is the cluster
age <cit.>, t_0 is the dissolution time-scale parameter,
γ is a dimensionless index and q_ev (t) is a function
describing the mass-loss due to stellar evolution. The dissolution
time-scale parameter is a constant describing the mass-loss process,
which depends on the strength of the tidal field. Small values of
t_0 are typically associated with encounters with molecular clouds
and spiral arms, while larger values are used to describe the effect
of the Galactic tidal field <cit.>. Since M71 has an
orbit and a structure quite similar to those of open clusters, we
assumed t_0 in the same range of values (2.3<t_0<4.7 Myr)
constrained in <cit.>. The parameter γ depends on
the cluster initial density distribution and is usually constrained by
the value of the King dimensionless potential W_0. We adopted γ = 0.62,
corresponding to W_0 = 5, a typical value for an averagely concentrated cluster.
The function q_ev (t), which describes the
mass-loss process due to stellar evolution, can be approximated by the
following analytical expression:
log q_ev(t) = (log t - a)^b + c , for t > 12.5 Myr,
where a, b and c are coefficients that depend on the cluster
metallicity. The iron abundance ratio of M71 is [Fe/H]=-0.73
<cit.>, which corresponds to a=7.03, b=0.26 and
c=-1.80 <cit.>.
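A direct transcription of the two expressions above into Python (with the parameter values quoted in the text for M71; the numbers it returns are indicative only, since the published estimate also folds in the mass uncertainties) is:
import numpy as np
def q_ev(t_yr, a=7.03, b=0.26, c=-1.80):
    # mass-loss fraction due to stellar evolution, valid for t > 12.5 Myr
    return 10.0**((np.log10(t_yr) - a)**b + c)
def m_ini(M, t_myr, t0_myr, gamma=0.62):
    return ((M**gamma + gamma * t_myr / t0_myr)**(1.0 / gamma)
            / (1.0 - q_ev(t_myr * 1e6)))
for t0 in (2.3, 4.7):                    # dissolution time-scale range (Myr)
    print(t0, m_ini(M=2.0e4, t_myr=12e3, t0_myr=t0))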
The resulting initial mass of the cluster is shown in Figure
<ref> as a function of the explored range of values of t_0. It
varies between 1.8 and 6.8×10^5 M_⊙, which are all values
typical of the mass of Galactic halo GCs, and is one order of
magnitude (or more) larger than the current mass. Also considering
the largest possible value of t_0 (∼30 Myr; see
<cit.>), we find that the cluster initial mass is at least
twice its current value. Clearly, this estimate is based on a
simplified approach and on parameters derived by the average behaviors
of open clusters, and different assumptions may lead to different
results. However, it is interesting to note that, while such a high mass loss would be unlikely for a halo GC, it can be reasonable for a system moving along an orbit confined within the disk (see Sect. <ref>).
§ SUMMARY AND CONCLUSIONS
By using two high-resolution ACS datasets separated by a temporal baseline of ∼7 years, we determined the relative PMs of ∼5000 individual stars in the direction of the low-mass GC M71, finding that only ∼60% of them have PMs consistent with being members of the cluster. The identification of four galaxies and two AGNs within the
sampled FOV, allowed us to also constrain the absolute PM of M71. This
has been used to infer the orbit of the cluster within the Galactic
potential well, which has been modeled by using a three-component
axisymmetric analytic model. It turned out that, at least during the last 3 Gyr, M71 has been on a disk-like orbit confined within the Galactic disk. It therefore seems reasonable to suppose that M71 suffered a number of dynamical interactions (e.g., with the dense surrounding environment, the Milky Way spiral arms, and various molecular clouds) that made it lose an amount of mass significantly larger than what is expected for the majority of Galactic GCs, which are on halo-like orbits.
We re-determined the gravitational center
and density profile of M71 by using resolved star counts from a
wide-field near-infrared catalog obtained with WIRCam at the
CFHT. This allowed us to minimize the impact of the large and
differential reddening affecting the system. With respect to the
values quoted in the literature (which have been determined from
optical data), we found the cluster centre to be located almost
6 to the west, and ∼ 50% larger core and half-mass radii.
Finally, we used a simplified analytical approach to take into account
mass-loss processes due to stellar and dynamical evolution, and thus
estimate the initial cluster mass, finding that the system likely was
one order of magnitude more massive than its current value.
As discussed in Sect. <ref>, M71 is known to harbor a rich
population of X-ray sources <cit.>, in a number that exceeds
the predictions based on the values of its mass and its collision
parameter Γ <cit.>. Since this latter depends on the
cluster central luminosity density and core radius (Γ∝ρ_0^1.5 r_c^2; <cit.>), we have
re-evaluated it by using the newly determined structural parameters.
By adopting the central surface brightness quoted in <cit.>
and equation (4) in <cit.>, we found logρ_0=2.60 (in
units of L_⊙/pc^-3). From this quantity and the value of
r_c here determined, the resulting value of Γ is about half
the one quoted in <cit.>, and the discrepancy in terms of the
expected number of X-ray sources becomes even more severe. Instead, the much larger
initial mass here estimated for the system would be able to naturally
account for the currently observed X-ray population, thus reinforcing
the hypothesis that M71 lost a large fraction of stars during its
orbit. An accurate investigation of the possible presence of tidal
tails around the cluster would be important to confirm such a
significant mass-loss from the system. However, this is currently
hampered by the large differential reddening affecting this region of
the sky, and wide-field infrared observations are needed to shed
light on this issue.
§ ACKNOWLEDGEMENT
We warmly thank the referee, whose useful comments improved the quality of the manuscript.
[Allen & Santillan(1991)]allen91 Allen, C., & Santillan, A. 1991, , 22, 255
[Anderson & King(2006)]anderson06 Anderson, J., & King,
I. R. 2006, Instrument Science Report ACS 2006-01, 34 pages, 1
[Anderson et al.(2008)]anderson08 Anderson, J., Sarajedini, A., Bedin, L. R., et al. 2008, , 135, 2055
[Anderson & van der Marel(2010)]anderson10 Anderson, J., & van der Marel, R. P. 2010, , 710, 1032
[Bahramian et al.(2013)]bahramian13 Bahramian, A., Heinke, C. O., Sivakoff, G. R., & Gladstone, J. C. 2013, , 766, 136
[Bellini et al.(2009)]bellini09 Bellini, A., Piotto, G., Bedin, L. R., et al. 2009, , 493, 959
[Bellini et al.(2010)]bellini10 Bellini, A., Bedin, L. R., Pichardo, B., et al. 2010, , 513, AA51
[Bellini et al.(2014)]bellini14 Bellini, A., Anderson, J., van der Marel, R. P., et al. 2014, , 797, 115
[Cadelano et al.(2015)]cadelano15 Cadelano, M., Pallanca, C., Ferraro, F. R., et al. 2015, , 807, 91
[Cohn et al.(2010)]cohn10 Cohn, H. N., Lugger, P. M., Couch, S. M., et al. 2010, , 722, 20
[Cudworth(1985)]cudworth85 Cudworth, K. M. 1985, , 90, 65
[Cudworth & Hanson(1993)]cudworth93 Cudworth, K. M., & Hanson, R. B. 1993, , 105, 168
[Dalessandro et al.(2009)]dalessandro09 Dalessandro, E., Beccari, G., Lanzoni, B., et al. 2009, , 182, 509
[Dalessandro et al.(2013)]dalessandro13 Dalessandro, E., Ferraro, F. R., Massari, D., et al. 2013, , 778, 135
[Dalessandro et al.(2015)]dalessandro15 Dalessandro, E., Miocchi, P., Carraro, G., Jílková, L., & Moitinho, A. 2015, , 449, 1811
[Djorgovski(1993)]djorg93 Djorgovski, S. 1993, Structure and Dynamics of Globular Clusters, 50, 373
[De Marchi et al.(2007)]demarchi07 De Marchi, G., Paresce, F., & Pulone, L. 2007, , 656, L65
[Di Cecco et al.(2015)]dicecco15 Di Cecco, A., Bono, G., Prada Moroni, P. G., et al. 2015, , 150, 51
[Dinescu et al.(1999)]dinescu99 Dinescu, D. I., van Altena, W. F., Girard, T. M., & López, C. E. 1999, , 117, 277
[Elsner et al.(2008)]elsner08 Elsner, R. F., Heinke, C. O., Cohn, H. N., et al. 2008, , 687, 1019
[Ferraro et al.(1997)]ferraro97 Ferraro, F. R., Paltrinieri, B., Fusi Pecci, F., et al. 1997, , 324, 915
[Ferraro et al.(2001)]ferraro01 Ferraro, F. R., D'Amico, N., Possenti, A., Mignani, R. P., & Paltrinieri, B. 2001, , 561, 337
[Ferraro et al.(2003)]ferraro03 Ferraro, F. R., Sills, A., Rood, R. T., Paltrinieri, B., & Buonanno, R. 2003, , 588, 464
[Ferraro et al.(2009)]ferraro09b Ferraro, F. R., Beccari, G., Dalessandro, E., et al. 2009, , 462, 1028
[Ferraro et al.(2012)]ferraro12 Ferraro, F. R., Lanzoni, B., Dalessandro, E., et al. 2012, , 492, 393
[Ferraro et al.(2015)]ferraro15 Ferraro, F. R., Lanzoni, B., Dalessandro, E., Mucciarelli, A., & Lovisi, L. 2015, Ecology of Blue Straggler Stars, 99
[Geffert & Maintz(2000)]geffert00 Geffert, M., & Maintz, G. 2000, , 144, 227
[Goldsbury et al.(2010)]goldsbury10 Goldsbury, R., Richer, H. B., Anderson, J., et al. 2010, , 140, 1830
[Goodman & Hut(1989)]goodman89 Goodman, J., & Hut, P. 1989, , 339, 40
[Harris(1996)]harris96 Harris, W. E. 1996, , 112, 1487
[Heinke et al.(2005)]heinke05 Heinke, C. O., Grindlay, J. E., Edmonds, P. D., et al. 2005, , 625, 796
[Hockney & Eastwood(1988)]hockney88 Hockney, R. W., & Eastwood, J. W. 1988, Bristol: Hilger, 1988,
[Huang et al.(2010)]huang10 Huang, R. H. H., Becker, W., Edmonds, P. D., et al. 2010, , 513, A16
[Kimmig et al.(2015)]kimmig15 Kimmig, B., Seth, A., Ivans, I. I., et al. 2015, , 149, 53
[King(1966)]king66 King, I. R. 1966, , 71, 64
[Johnson & Soderblom(1987)]johnson87 Johnson, D. R. H., & Soderblom, D. R. 1987, , 93, 864
[Lamers et al.(2005)]lamers05 Lamers, H. J. G. L. M., Gieles, M., Bastian, N., et al. 2005, , 441, 117
[Lamers & Gieles(2006)]lamers06 Lamers, H. J. G. L. M., & Gieles, M. 2006, , 455, L17
[Lanzoni et al.(2007)]lanzoni07 Lanzoni, B., Dalessandro, E., Ferraro, F. R., et al. 2007, , 668, L139
[Lanzoni et al.(2010)]lanzoni10 Lanzoni, B., Ferraro, F. R., Dalessandro, E., et al. 2010, , 717, 653
[Massari et al.(2013)]massari13 Massari, D., Bellini, A., Ferraro, F. R., et al. 2013, , 779, 81
[Massari et al.(2015)]massari15 Massari, D., Dalessandro, E., Ferraro, F. R., et al. 2015, , 810, 69
[Meylan & Heggie(1997)]meylan97 Meylan, G., & Heggie, D. C. 1997, , 8, 1
[Miocchi et al.(2013)]miocchi13 Miocchi, P., Lanzoni, B., Ferraro, F. R., et al. 2013, , 774, 151
[Montegriffo et al.(1995)]montegriffo95 Montegriffo, P., Ferraro, F. R., Fusi Pecci, F., & Origlia, L. 1995, , 276, 739
[Moreno et al.(2014)]moreno14 Moreno, E., Pichardo, B., & Velázquez, H. 2014, , 793, 110
[Ortolani et al.(2011)]ortolani11 Ortolani, S., Barbuy, B., Momany, Y., et al. 2011, , 737, 31
[Pallanca et al.(2010)]pallanca10 Pallanca, C., Dalessandro, E., Ferraro, F. R., et al. 2010, , 725, 1165
[Peterson & Reed(1987)]peterson97 Peterson, C. J., & Reed, B. C. 1987, , 99, 20
[Phinney(1993)]phinney93 Phinney, E. S. 1993, Structure and Dynamics of Globular Clusters, 50, 141
[Pooley et al.(2003)]pooley03 Pooley, D., Lewin, W. H. G., Anderson, S. F., et al. 2003, , 591, L131
[Ransom et al.(2005)]ransom05 Ransom, S. M., Hessels, J. W. T., Stairs, I. H., et al. 2005, Science, 307, 892
[Rasio et al.(2007)]rasio07 Rasio, F. A., Baumgardt, H., Corongiu, A., et al. 2007, Highlights of Astronomy, 14, 215
[Reid et al.(2009)]reid09 Reid, M. J., Menten, K. M., Zheng, X. W., et al. 2009, , 700, 137-148
[Robin et al.(2003)]robin03 Robin, A. C., Reylé, C., Derrière, S., & Picaud, S. 2003, , 409, 523
[Sarajedini et al.(2007)]sarajedini07 Sarajedini, A., Bedin, L. R., Chaboyer, B., et al. 2007, , 133, 1658
[Schlegel et al.(1998)]schlegel98 Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, , 500, 525
[Schönrich et al.(2010)]schonrich10 Schönrich, R., Binney, J., & Dehnen, W. 2010, , 403, 1829
[Stetson(1987)]stetson87 Stetson, P. B. 1987, , 99, 191
[Verbunt & Hut(1987)]verbunt87 Verbunt, F., & Hut, P. 1987, The Origin and Evolution of Neutron Stars, 125, 187
[Vesperini & Heggie(1997)]vesperini97 Vesperini, E., & Heggie, D. C. 1997, , 289, 898
[Vesperini et al.(2013)]vesperini13 Vesperini, E., McMillan, S. L. W., D'Antona, F., & D'Ercole, A. 2013, , 429, 1913
[Watkins et al.(2015)]watkins15 Watkins, L. L., van der Marel, R. P., Bellini, A., & Anderson, J. 2015, , 803, 29
|
http://arxiv.org/abs/1701.07519v1 | 20170125233458 | SO*(2N) coherent states for loop quantum gravity | [
"Florian Girelli",
"Giuseppe Sellaroli"
] | math-ph | [
"math-ph",
"gr-qc",
"hep-th",
"math.MP"
] |
1
|
http://arxiv.org/abs/1701.08108v1 | 20170127163745 | Existence of Evolutionarily Stable Strategies Remains Hard to Decide for a Wide Range of Payoff Values | [
"Themistoklis Melissourgos",
"Paul Spirakis"
] | cs.CC | [
"cs.CC",
"cs.GT",
"68Q01"
] |
Existence of ESS remains hard to decide for a wide range of payoff values
Department of Computer Science, University of Liverpool,
Ashton Street, Liverpool L69 3BX, United Kingdom
Existence of Evolutionarily Stable Strategies Remains Hard to Decide for a Wide Range of Payoff Values
Themistoklis Melissourgos
Paul Spirakis
December 30, 2023
======================================================================================================
The concept of an evolutionarily stable strategy (ESS), introduced by Smith and Price <cit.>, is a refinement of Nash equilibrium in 2-player symmetric games in order to explain counter-intuitive natural phenomena, whose existence is not guaranteed in every game. The problem of deciding whether a game possesses an ESS has been shown to be Σ_2^P-complete by Conitzer <cit.> using the preceding important work by Etessami and Lochbihler <cit.>. The latter, among other results, proved that deciding the existence of ESS is both NP-hard and coNP-hard. In this paper we introduce a reduction robustness notion and we show that deciding the existence of an ESS remains coNP-hard for a wide range of games even if we arbitrarily perturb within some intervals the payoff values of the game under consideration. In contrast, ESS exist almost surely for large games with random and independent payoffs chosen from the same distribution <cit.>.
§ INTRODUCTION
§.§ Concepts of Evolutionary Games and Stable Strategies
Evolutionary game theory has proven itself to be invaluable when it comes to analysing complex natural phenomena. A first attempt to apply game theoretic tools to evolution was made by Lewontin <cit.> who saw the evolution of genetic mechanisms as a game played between a species and nature. He argued that a species would adopt the “maximin” strategy, i.e. the strategy which gives it the best chance of survival if nature does its worst. Subsequently, his ideas where improved by the seminal work of Smith and Price in <cit.> and Smith in <cit.> where the study of natural selection's processes through game theory was triggered. They proposed a model in order to decide the outcome of groups consisting of living individuals, conflicting in a specific environment.
The key insight of evolutionary game theory is that a set of behaviours depends on the interaction among multiple individuals in a population, and the prosperity of any one of these individuals depends on the interaction of its own behaviour with that of the others. An evolutionarily stable strategy (ESS) is defined as follows: An infinite population consists of two types of infinite groups with the same set of pure strategies; the incumbents, that play the (mixed) strategy s and the mutants, that play the (mixed) strategy t ≠ s. The ratio of mutants over the total population is ϵ. A pair of members of the total population is picked uniformly at random to play a finite symmetric bimatrix game Γ with payoff matrix A_Γ. Strategy s is an ESS if for every t ≠ s there exists a constant ratio ϵ_t of mutants over the total population, such that, if ϵ < ϵ_t the expected payoff of an incumbent versus a mutant is strictly greater than the expected payoff of a mutant versus a mutant. For convenience, we say that “s is an ESS of the game Γ”.
The concept of ESS tries to capture resistance of a population against invaders. This concept has been studied in two main categories: infinite population groups and finite population groups. The former was the one where this Nash equilibrium refinement was first defined and presented by <cit.>. The latter was studied by Schaffer <cit.> who shows that the finite population case is a generalization of the infinite population one. The current paper deals with the infinite population case which can be mathematically modelled in an easier way and in addition, its results may provide useful insight for the finite population case.
An example. In order for the reader to conceive the notion of the evolutionarily stable strategy, we give a most explanatory example of the infinite population case. Let us consider a particular species of crab and suppose that each crab's fitness in a specific environment is mainly decided by its capability to find food and use the nutrients from the food in an efficient way. In our crab population a particular mutation makes its appearance, so the crabs born with the mutation grow a significantly larger body size. We can picture the population now, consisting of two distinct kinds of crabs; ϵ fraction of the population being the large ones and 1-ϵ being the small ones. The large crabs, in fact, have difficulty maintaining the metabolic requirements of their larger body structure, meaning that they need to divert more nutrients from the food they eat and as a consequence, they experience a negative effect on fitness. However, the large crabs have an advantage when it comes to conflicting with the small ones, so they claim an above-average share of the food. To make our framework simple, we will assume that food competition involves pairs of crabs, drawn at random, interacting with each other once, but the reasoning of the analysis is equivalent to interactions that occur (simultaneously or not) between every possible pair, with each individual receiving the mean of the total fitness. When two crabs compete for food, we have the following “rules” that apply: (1) When crabs of the same body size compete, they get equal shares of the food. (2) When a large crab competes with a small crab, the large one gets the majority of the food. (3) In all cases, large crabs experience less of a fitness benefit from a given quantity of food, since some of it is diverted into maintaining their expensive metabolism. (4) When two large crabs compete they experience even less of a fitness benefit as they put considerable effort in fighting. The following bimatrix encloses the rules above in the context of a game.
                     Crab 2
              Small       Large
Crab 1 Small   7,7         1,9
       Large   9,1         4,4
In this setting, we call a given strategy evolutionarily stable if, when the whole population is using this strategy, any small enough group of invaders using a different strategy will eventually die off over multiple generations. This idea is captured in terms of numerical payoffs by saying that, when the entire population is using a strategy s, then an arbitrarily small ratio of invaders over the new (blended) population will have strictly lower fitness than the initial population has in the new population. Since fitness translates into reproductive success, and consequently into transmitting one's genes to future generations at higher frequencies, strictly lower fitness is assumed, by evolutionary principles <cit.>, to be the reason for a subpopulation (like the users of strategy t) to shrink over time through multiple generations and eventually become extinct.
Let us see if any of the two pure strategies is evolutionarily stable. Suppose a population of small crabs gets invaded by a group of large ones (of ratio ϵ over the whole population). The expected payoff (fitness) of a small crab is:
7(1-ϵ) + 1ϵ = 7 - 6ϵ because it meets a small crab with probability
1-ϵ and a large one with probability ϵ.
The expected payoff of a large crab is:
9(1-ϵ) + 4ϵ = 9 - 5ϵ because it meets a small crab with probability
1-ϵ and a large one with probability ϵ.
Clearly, no ϵ can make the payoff of the small crabs greater than that of the large ones. So, the pure strategy Small is not an ESS. Now suppose a population of large crabs gets invaded by a group of small ones (of ratio ϵ over the whole population). The expected payoff (fitness) of a large crab is:
4(1-ϵ) + 9ϵ = 4 + 5ϵ because it meets a large crab with probability
1-ϵ and a small one with probability ϵ.
The expected payoff of a small crab is:
1(1-ϵ) + 7ϵ = 1 + 6ϵ because it meets a large crab with probability
1-ϵ and a small one with probability ϵ.
In this case, for every ϵ∈ (0,1) the payoff of the large crabs is greater than that of the small ones. So, the pure strategy Large is an ESS.
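The two invasion arguments above can be verified mechanically; the following short Python sketch (payoffs taken from the bimatrix above) checks both directions for several invader ratios:
payoff = {("S", "S"): 7, ("S", "L"): 1, ("L", "S"): 9, ("L", "L"): 4}
def fitness(x, resident, mutant, eps):
    # expected payoff of a crab of type x in a population with a (1-eps)
    # fraction of residents and an eps fraction of mutants
    return (1 - eps) * payoff[(x, resident)] + eps * payoff[(x, mutant)]
for eps in (0.01, 0.1, 0.5):
    # Large residents vs Small mutants: residents do strictly better -> ESS
    assert fitness("L", "L", "S", eps) > fitness("S", "L", "S", eps)
    # Small residents vs Large mutants: mutants do strictly better -> no ESS
    assert fitness("L", "S", "L", eps) > fitness("S", "S", "L", eps)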
The concept of ESSs can also be extended to mixed strategies. We can think of three natural ways to interpret the notion of probability assignment on the pure strategies of a population. One is, each individual is preprogrammed (through its DNA) to play just a specific pure strategy from a set of strategies and we say that individuals with the same pure strategy are of the same type. The group of individuals can be considered to behave as a player with a mixed strategy, defined as a probability vector over the pure strategies used by the group. Each pure strategy's probability equals the ratio of its type's members over the total population (type's frequency), because of the simple assumption made, that when two groups conflict one individual from each group is drawn equiprobably to play a bimatrix game. Another one is, each individual is preprogrammed to play a particular mixed strategy. Thus, whoever is drawn will play the specific mixed strategy. The last one is the most general way to think of it, as a blend of the former cases. A group's mixed strategy is defined by its probabilities over the available pure strategies. As soon as one individual is equiprobably picked from each group, the probability over a pure strategy of a group is determined by the sum of the probability each type is picked times the probability this type plays the specific pure strategy. Referring to our previous example, the following three infinite populations of crabs are equivalent: (i) One with 2/3 of type Small and 1/3 of type Large. (ii) One with every crab playing the mixed strategy [2/3: Small, 1/3: Large]. (iii) One with 1/4 of type Small, 1/4 playing the mixed strategy [1/6: Small, 5/6: Large] and 1/2 playing the mixed strategy [3/4: Small, 1/4: Large]. Of course in the particular example the individuals cannot have mixed strategies, each one is committed to have a body size for life, but the reasoning holds for other games with strategies that do not exclude each other such as in the Stag-Hunt game. We should mention here, that some games such as Hawk-Dove do not have a pure ESS, but they have a mixed ESS. Other games do not have either.
§.§ Previous Work
Searching for the exact complexity of deciding if a bimatrix game possesses an ESS, Etessami and Lochbihler <cit.> invent a nice reduction from the complement of the clique problem to a specific game with an appointed ESS, showing that the ess problem is coNP-hard. They also accomplish a reduction from the sat problem to ess, thus proving that ess is NP-hard too. This makes impossible for the ess to be NP-complete, unless NP=coNP. Furthermore, they provide a proof for the general ess being contained in Σ_2^P, the second level of the polynomial-time hierarchy, leaving open the question of what is the complexity class in which the problem is complete.
A further improvement of those results was made by Nisan <cit.>, showing that, given a payoff matrix, the existence of a mixed ESS is coDP-hard. DP is the complexity class, introduced by Papadimitriou and Yannakakis <cit.>, consisting of all languages L where L = L_1∩ L_2 and L_1 is in NP and L_2 is in coNP. Therefore, coDP is the complexity class consisting of all the complement languages of L, denoted by L̅, where L̅ = L̅_̅1̅∪L̅_̅2̅ and L̅_̅1̅ is in coNP and L̅_̅2̅ is in NP. Clearly, NP ⊆ coDP , coNP ⊆ coDP and coDP ⊆ Σ_2^P. The hardness result is due to a relatively simple reduction from the coDP-complete problem co-exact-clique (for the definition see <cit.>) to ess. A notable consequence of both <cit.> and <cit.> is that the problem of recognizing a mixed ESS, once given along with the payoff matrix, is coNP-complete. However, the question of the exact complexity of ESS existence, given the payoff matrix, remained open. A few years later, Conitzer finally settles this question in <cit.>, showing that ess is actually Σ_2^P-complete.
On the contrary, Hart et al. <cit.> showed that if a symmetric bimatrix game is defined by an n × n payoff matrix with elements independently chosen at random according to a distribution F with exponential or faster decreasing tail, such as exponential, normal or uniform, then the probability of having an ESS with just 2 pure strategies in the support tends to 1 as n tends to infinity. In view of this result, and since the basic reduction of <cit.> used only 3 payoff values, it is interesting to consider whether ESS existence remains hard for arbitrary payoffs in some intervals.
§.§ Our Results
In the reduction of Etessami and Lochbihler that proves coNP-hardness of ess the values of the payoffs used, are 0, k-1/k and 1, for k ∈ℕ. A natural question is if the hardness results hold when we arbitrarily perturb the payoff values within respective intervals (in the spirit of smoothed analysis <cit.>). In our work we extend the aforementioned reduction and show that the specific reduction remains valid even after significant changes of the payoff values.
We can easily prove that the evolutionarily stable strategies of a symmetric bimatrix game remain the exact same if we add, subtract or multiply (or do all of them) with a positive value its payoff matrix. However, that kind of value modification forces the entries of the payoff matrix to change in an entirely correlated manner, hence it does not provide an answer to our question. In this work, we prove that if we have partitions of entries of the payoff matrix with the same value for each partition, independent arbitrary perturbations of those values within certain intervals do not affect the validity of our reduction. In other words, we prove that determining ESS existence remains hard even if we perturb the payoff values associated with the reduction. En route we give a definition of “reduction robustness under arbitrary perturbations” and show how the reduction under examination adheres to this definition.
In contrast, <cit.> show that if the payoffs of a symmetric game are random and independently chosen from the same distribution F with “exponential or faster decreasing tail” (e.g. exponential, normal or uniform), then an ESS (with support of size 2) exists with probability that tends to 1 when n tends to infinity.
One could superficially get a non-tight version of our result by saying that (under supposed continuity assumptions in the ESS definition) any small perturbation of the payoff values will not destroy the reduction. However, in such a case (a) the continuity assumptions have to be precisely stated and (b) this does not explain why the ESS problem becomes easy when the payoffs are random <cit.>.
In fact, the value of our technique is, firstly, to get as tight as possible ranges of the perturbation that preserve the reduction (and the ESS hardness) without any continuity assumptions, secondly, to indicate the basic difference from random payoff values (which is exactly the notion of partition of payoffs into groups in our definition of robustness, and the allowance of arbitrary perturbation within some interval in each group), and finally, the ranges of the allowed perturbations that we determine are quite tight. For the reduction to be preserved when we independently perturb the values (in each of our partitions arbitrarily), one must show that a system of inequalities has always a feasible solution, and we manage to show this in our final theorem. Our result seems to indicate that existence of an ESS remains hard despite a smoothed analysis <cit.>.
An outline of the paper is as follows: In Section <ref> we define the robust reduction notion and we provide a first extension of the aforementioned reduction by <cit.>. In Section <ref> we provide another extended reduction, based on the one from<cit.>, that is essentially modified in order to be robust. In Section <ref> we give our main result and Section <ref> refers to further work and conclusions.
§.§ Definitions and Notation
§.§.§ Background from game theory.
A finite two-player strategic form game Γ = (S_1, S_2, u_1, u_2) is given by finite sets of pure strategies S_1 and S_2 and utility, or payoff, functions u_1 : S_1× S_2↦ℝ and u_2 : S_1× S_2↦ℝ for the row-player and the column-player, respectively. Such a game is called symmetric if S_1 = S_2 =: S and u_1(i,j) = u_2(j,i) for all i,j ∈ S.
In what follows, we are only concerned with finite symmetric two-player strategic form games, so we write (S, u_1) as shorthand for (S, S, u_1, u_2), with u_2(j,i) = u_1(i,j) for all i,j ∈ S. For simplicity assume S = 1,...,n, i.e., pure strategies are identified with integers i, 1 ≤ i ≤ n. The row-player's payoff matrix A_Γ = (a_i,j) of Γ = (S, u_1) is given by a_i,j = u_1(i,j) for i,j ∈ S, so B_Γ=A_Γ^T is the payoff matrix of the column-player. Note that A_Γ is not necessarily symmetric, even if Γ is a symmetric game.
A mixed strategy s = (s(1),...,s(n))^T for Γ = (S, u_1) is a vector that defines a probability distribution on S and, in the sequel, we will denote by s(i) the probability assigned by strategy s on the pure strategy i ∈ S. Thus, s ∈ X, where X = {s ∈ℝ_≥ 0^n : ∑_i=1^ns(i) = 1 } denotes the set of mixed strategies in Γ, with ℝ_≥ 0^n denoting the set of non-negative real number vectors (x_1,x_2,...,x_n). s is called pure iff s(i) = 1 for some i ∈ S. In that case we identify s with i. For brevity, we generally use “strategy” to refer to a mixed strategy s, and indicate otherwise when the strategy is pure. In our notation, we alternatively view a mixed strategy s as either a vector (s_1,...,s_n)^T, or as a function s : S ↦ℝ, depending on which is more convenient in the context.
The expected payoff function, U_k : X × X ↦ℝ for player k ∈ {1,2}, is given by U_k(s,t) = ∑_i,j ∈ S s(i)t(j)u_k(i,j), for all s,t ∈ X. Note that U_1(s,t) = s^TA_Γt and U_2(s,t) = s^TA_Γ^Tt. Let s be a strategy for Γ = (S, u_1). A strategy t ∈ X is a best response to s if U_1(t,s) = max_t' ∈ XU_1(t',s). The support supp(s) of s is the set { i ∈ S : s(i) > 0 } of pure strategies which are played with non-zero probability. The extended support ext-supp(s) of s is the set { i ∈ S : U_1(i,s) = max_x ∈ XU_1(x,s) } of all pure best responses to s.
A pair of strategies (s,t) is a Nash equilibrium (NE) for Γ if s is a best response to t and t is a best response to s. Note that (s,t) is a NE if and only if supp(s)⊆ ext-supp(t) and supp(t)⊆ ext-supp(s). A NE (s,t) is symmetric if s = t.
A strategy profile (s,s) is a symmetric NE for the symmetric bimatrix game Γ = (S, u_1) if s^TA_Γs ≥ t^TA_Γs for every t ∈ X.
A definition of ESS equivalent to that presented in Subsection 1.1 is:
A (mixed) strategy s ∈ X is an evolutionarily stable strategy (ESS) of a two-player symmetric game Γ if:
* (s,s) is a symmetric NE of Γ, and
* if t ∈ X is any best response to s and t ≠ s, then U_1(s,t) > U_1(t,t).
Due to <cit.>, we know that every symmetric game has a symmetric Nash equilibrium. The same does not hold for evolutionarily stable strategies (for example “rock-paper-scissors” does not have any pure or mixed ESS).
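In computational terms, the two conditions of the definition translate directly into linear-algebraic checks; note, however, that condition 2 quantifies over all best responses t, which is exactly what makes ESS recognition coNP-complete, so the Python sketch below (names are ours) only tests a single candidate pair (s,t):
import numpy as np
def is_symmetric_ne(A, s, tol=1e-9):
    # condition 1: no pure strategy improves on s against s
    payoffs = A @ s                      # U_1(i, s) for every pure strategy i
    return payoffs.max() <= s @ payoffs + tol
def stability_holds(A, s, t, tol=1e-9):
    # condition 2, for one best response t != s: U_1(s,t) > U_1(t,t)
    return s @ A @ t > t @ A @ t + tol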
Given a symmetric two-player normal-form game Γ, we are asked whether there exists an evolutionarily stable strategy of Γ.
§.§.§ Background from graph theory.
An undirected graph G is an ordered pair (V,E) consisting of a set V of vertices and a set E, disjoint from V, of edges, together with an incidence function ψ_G that associates with each edge of G an unordered pair of distinct vertices of G. If e is an edge and u and υ are vertices such that ψ_G(e) = {u, υ}, then e is said to join u and υ, and the vertices u and υ are called the ends of e. We denote the numbers of vertices and edges in G by υ(G) and e(G); these two basic parameters are called the order and size of G, respectively.
The adjacency matrix of the above undirected graph G is the n × n matrix A_G := (a_u υ), where a_u υ is the number of edges joining vertices u and υ and n = υ(G).
A clique of an undirected graph G is a complete subgraph of G, i.e. one whose vertices are joined with each other by edges.
Given an undirected graph G and a number k, we are asked whether there is a clique of size k.
As mentioned earlier, in what follows, ℝ_≥ 0^n denotes the set of non-negative real number vectors (x_1,x_2,...,x_n) and n=|V|.
Let G = (V,E) be an undirected graph with maximum clique size d. Let Δ_1 = { x ∈ℝ_≥ 0^n : ∑_i=1^n x_i = 1 }. Then max_x ∈Δ_1 x^TA_Gx = d-1/d.
Let G = (V,E) be an undirected graph with maximum clique size d. Let A_G^τ,ρ be a modified adjacency matrix of graph G where its entries with value 0 are replaced by τ∈ℝ and its entries with value 1 are replaced by ρ∈ℝ. Let Δ_1 = { x ∈ℝ_≥ 0^n : ∑_i=1^n x_i = 1 }. Then max_x ∈Δ_1 x^TA_G^τ,ρx = τ + (ρ-τ)d-1/d.
x^TA_G^τ,ρx = x^T[τ·1 + (ρ-τ) · A_G]x , where 1 is the n × n matrix with value 1
in every entry.
= τ + (ρ-τ) · x^TA_Gx , and by Theorem <ref> the result follows.
Let G = (V,E) be an undirected graph with maximum clique size d and let l ∈ℝ_≥ 0. Let Δ_l = { x ∈ℝ_≥ 0^n : ∑_i=1^n x_i = l }. Then max_x ∈Δ_lx^TA_Gx = d-1/dl^2.
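A quick numerical illustration of Theorem <ref> (a sketch of our own): for the path graph on three vertices the maximum clique size is d = 2, so the optimum over the simplex is (d-1)/d = 1/2, attained by the uniform distribution on a maximum clique.
import numpy as np
from itertools import product
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])           # path graph: maximum clique size d = 2
x = np.array([0.5, 0.5, 0.0])       # uniform distribution on the clique {1,2}
print(x @ A @ x)                    # 0.5 = (d-1)/d
# a crude grid search over the simplex confirms that 0.5 is the maximum
grid = (np.array(q) / sum(q) for q in product(range(11), repeat=3) if sum(q))
print(max(p @ A @ p for p in grid)) # 0.5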
§ ROBUST REDUCTIONS
Let v ∈ℝ. An interval I(v)=[a,b] (or an open interval I(v)=(a,b)) with a<b and a ≤ v ≤ b is called a neighbourhood of v of range |b-a|.
We are given a valid reduction of a problem to a strategic game that involves a real matrix A of payoffs as entries a_ij. A consists of m partitions, with each partition's entries having the same value v(t), for t ∈{1,2,...m}. Let I(v(t)) ≠∅ be a neighbourhood of v(t) and w(t) ∈ I(v(t)) be an arbitrary value in that neighbourhood. The reduction is called robust under arbitrary perturbations of values if it is valid for all the possible matrices W with entries w(t).
§.§ A First Extension of the Reduction from the Complement of the clique Problem to ess
In the sequel we extend the idea of K. Etessami and A. Lochbihler <cit.> by giving sufficient conditions in order for the reduction to hold. We replace the zeros and ones of their reduction with τ>0 and ρ<1 respectively.
Given an undirected graph G=(V, E) we construct the following game Γ_k,τ,ρ(G) = (S, u_1) for λ(k) = k-1/k, where k ∈ℕ, and suitable 0<τ<ρ<1 to be determined later. Note that from now on we will only consider rational τ and ρ so that every payoff value of the game is rational.
S=V ∪{a,b,c} are the strategies for the players where a,b,c ∉ V.
n= |V| is the number of nodes.
* u_1(i,j)=ρ for all i,j ∈ V with (i,j) ∈ E .
* u_1(i,j)=τ for all i,j ∈ V with (i,j) ∉ E .
* u_1(z,a)=ρ for all z ∈ S-{b,c} .
* u_1(a,i)=λ(k) for all i ∈ V .
* u_1(y,i)=ρ for all y ∈{b,c} and i ∈ V .
* u_1(y,a)=τ for all y ∈{b,c} .
* u_1(z,y)=τ for all z ∈ S and y ∈{b,c} .
Here is an example of the payoff matrix of the strategic game derived from a graph with 3 nodes.
Then the payoff matrix of the row-player is:
      a   b   c   1         2         3
a     ρ   τ   τ   (k-1)/k   (k-1)/k   (k-1)/k
b     τ   τ   τ   ρ         ρ         ρ
c     τ   τ   τ   ρ         ρ         ρ
1     ρ   τ   τ   τ         ρ         τ
2     ρ   τ   τ   ρ         τ         ρ
3     ρ   τ   τ   τ         ρ         τ
The transpose of it is the payoff matrix of the column-player. In the sequel we shall use two corollaries of the Motzkin–Straus theorem, namely, Corollary <ref> and Corollary <ref>.
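For concreteness, the construction of the payoff matrix of Γ_k,τ,ρ(G) from an adjacency matrix can be sketched as follows (strategy order a, b, c, then the vertices; the function name is ours):
import numpy as np
def build_game(A_G, k, tau, rho):
    n = A_G.shape[0]
    lam = (k - 1) / k
    M = np.full((n + 3, n + 3), float(tau))      # default payoff is tau
    M[3:, 3:] = np.where(A_G == 1, rho, tau)     # rho on edges, tau otherwise
    M[0, 3:] = lam                               # u1(a, i) = (k-1)/k
    M[1:3, 3:] = rho                             # u1(b, i) = u1(c, i) = rho
    M[3:, 0] = rho                               # u1(i, a) = rho
    M[0, 0] = rho                                # u1(a, a) = rho
    return M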
Let G=(V, E) be an undirected graph. The game Γ_k,τ,ρ(G) with λ(k) = k-1/k and
* ρ∈(1 - 4/(n+1)^2, 1 - 1/(n+1)^2] and τ∈[(1 - ρ)(n-1), ρ - (1 - √(1-ρ))^2)
or
* ρ∈(1 - 1/(n+1)^2, 1 ) and τ∈[(1 - ρ)(n-1), (1 - ρ)(n-1) + 1/n+1)
has an ESS if and only if G has no clique of size k.
Let G=(V,E) be an undirected graph with maximum clique size d. We consider the game Γ_k,τ,ρ(G) above. Suppose s is an ESS of Γ_k,τ,ρ(G).
For the reduction we will prove three claims by using contradiction, that taken together show that the only possible ESS s of Γ_k,τ,ρ(G) is the pure strategy a. Here we should note that these three claims hold not only for the aforementioned intervals of τ and ρ, but for any τ,ρ∈ℝ for which τ < ρ.
Claim 1: The support of any possible ESS of Γ_k,τ,ρ(G) does not contain b or c (supp(s) ∩{b,c} = ∅).
Suppose supp(s) ∩{b,c}≠∅ .
Let t ≠ s be a strategy with t(i)=s(i) for i ∈ V, t(y)=s(b)+s(c) and t(y')=0 where y,y' ∈{b,c} such that y ≠ y' and s(y)=min{s(b),s(c)}. Since u_1(b,z)=u_1(c,z) for all z ∈ S,
U_1(t,s) = ∑_i ∈ Vt(i)U_1(i,s) + (t(b) + t(c))U_1(b,s) + t(a)U_1(a,s) ,
U_1(s,s) = ∑_i ∈ Vs(i)U_1(i,s) + (s(b) + s(c))U_1(b,s) + s(a)U_1(a,s) ,
which yields U_1(t,s) = U_1(s,s) and so t is a best response to s. Also,
U_1(s,t) = ∑_i ∈ Vs(i)U_1(i,t) + (s(b) + s(c))U_1(b,t) + s(a)U_1(a,t) ,
U_1(t,t) = ∑_i ∈ Vt(i)U_1(i,t) + (t(b) + t(c))U_1(b,t) + t(a)U_1(a,t) ,
which yields U_1(s,t) = U_1(t,t). But this is a contradiction since it should be U_1(s,t) > U_1(t,t) as s is an ESS.
Claim 2: The support of any possible ESS of Γ_k,τ,ρ(G) contains a (supp(s) ⊈ V).
Suppose supp(s) ⊆ V .
Then, we denote by A_G the adjacency matrix of the graph G.
U_1(s,s) = ∑_i,j ∈ V s(i)s(j)u_1(i,j) = s^T A_G^τ,ρ s
≤τ + (ρ - τ)d-1/d (by Corollary <ref>)
< ρ = U_1(b,s) for every ρ > τ .
But this is a contradiction since s is an ESS and therefore a NE. From Claim (1) and Claim (2), it follows that a ∈ supp(s), i.e. s(a)>0 .
Claim 3: s(a)=1 .
Suppose s(a)<1 .
Since (s,s) is a NE, a is a best response to s and a ≠ s. Then U_1(s,a) = ∑_z ∈ supp(s)s(z)u_1(s,a)=ρ=U_1(a,a). But this is also a contradiction since it should be U_1(s,a) > U_1(a,a) as s is an ESS. Therefore, the only possible ESS of Γ_k,τ,ρ(G) is the pure strategy a.
Now we show the following lemma, which concludes also the proof of Theorem <ref>.
The game Γ_k,τ,ρ(G) with the requirements of Theorem <ref> has an ESS (strategy a) if and only if there is no clique of size k in graph G.
We consider two cases for k:
§.§.§ Case 1: d<k
Let t ≠ a be a best response to a. Then supp(t) ⊆ V ∪{a} .
Let r = ∑_i ∈ V t(i). So r>0 (since t ≠ a) and t(a)=1-r . Combining Corollary <ref> and <ref> we get,
U_1(t,t) - U_1(a,t) = ∑_i,j ∈ Vt(i)t(j)u_1(i,j) + r· t(a) ·ρ +
+ t(a) · r ·k-1/k + t(a)^2 ·ρ - [r ·k-1/k + t(a) ·ρ]
≤ [ τ + (ρ - τ) d-1/d]r^2 + r(1-r) ·ρ +
+ (1-r)r k-1/k + (1-r)^2 ·ρ - r k-1/k - (1-r) ·ρ
= [ τ + (ρ - τ) d-1/d]r^2 - k-1/k r^2
= r^2/d[ τ + ρ (d-1) - d k-1/k]
= r^2/dE , where E=τ + ρ (d-1) - d k-1/k .
If we can show that E<0 then strategy a is an ESS. We now show why E<0:
Let us define the following function,
f(k,d,ρ) = d k-1/k - ρ (d-1) , with the restrictions: k ≥ d+1 , 1 ≤ d ≤ n
and ρ∈ (0,1) .
Then we define the function g(d,ρ):
g(d,ρ) = min_k f(k,d,ρ) = d · d/d+1 - ρ (d-1) = (1 - ρ)(d-1) + 1/d+1 .
By examining the first and second partial derivative with respect to variable d, we find the minimum of function g(d,ρ):
h(ρ) = min_dg(d,ρ) = ρ - (1 - √(1-ρ))^2 , for d^* = 1/√(1 - ρ) - 1 .
Now there are two subcases: the maximum clique size may be unable to reach the value of d^*, or it may reach it, depending on the size of n=|V| .
Subcase i) n < 1/√(1 - ρ) - 1 or equivalently: ρ > 1 - 1/(n+1)^2 .
From the partial derivatives of function g(d,ρ) with respect to variable d we know that it is a strictly decreasing function for d<d^*.
And given that d ≤ n, from (1) we get:
h(ρ) = (1 - ρ)(n-1) + 1/n+1 , for 1 - 1/(n+1)^2 < ρ < 1 .
Subcase ii) n ≥1/√(1 - ρ) - 1 or equivalently: ρ≤ 1 - 1/(n+1)^2 .
By examining the first and second partial derivative with respect to variable ρ, we find that h(ρ) is increasing on (0, 3/4) and decreasing on (3/4, 1). Hence, the maximum of h(ρ) is 1/2 and it is achieved when ρ = 3/4 .
Interval a) 3/4 < ρ≤ 1 - 1/(n+1)^2 .
The monotonicity of h(ρ) in this interval implies that its minimum is achieved for ρ^*=1 - 1/(n+1)^2 . Thus if we want a minimum independent of ρ, from (2) we get:
min_ρh(ρ) = 1 - 1/(n+1)^2 - (1 - √(1 - (1 - 1/(n+1)^2)))^2 = 2n/(n+1)^2 .
Interval b) 0 < ρ≤3/4 .
The monotonicity of h(ρ) in this interval implies that there is no minimum point, but when ρ gets arbitrarily close to zero then h(ρ) goes arbitrarily close to zero as well, i.e. lim_ρ→ 0^+h(ρ) = 0 .
To sum up:
τ^* = min_k,df(k,d,ρ) =
ρ - (1 - √(1-ρ))^2 , if 0 < ρ≤ 1 - 1/(n+1)^2 , from (2)
(1 - ρ)(n-1) + 1/n+1 , if 1 - 1/(n+1)^2 < ρ < 1 , from (3)
or if we want the minima to be independent of ρ when possible:
τ^* = min_k,d,ρf(k,d,ρ) =
ρ - (1 - √(1-ρ))^2 , if 0 < ρ≤3/4
2n/(n+1)^2 , if 3/4 < ρ≤ 1 - 1/(n+1)^2 , from (4)
1/n+1 , if 1 - 1/(n+1)^2 < ρ < 1 , from (3).
Therefore, depending on the interval that ρ belongs to, we can demand τ to be strictly less than τ^* , making U_1(t,t) - U_1(a,t) negative. We conclude that when d<k then strategy a is an ESS.
§.§.§ Case 2: d ≥ k
Let C ⊆ V be a clique of G of size k. Then t with t(i)=1/k for i ∈ C and t(j)=0 for j ∈ S ∖ C is a best response to a and t ≠ a, and
U_1(t,t) = ∑_i,j ∈ Ct(i)t(j)u_1(i,j)= 1/k^2· (k-1)k ·ρ + 1/k^2k ·τ = (k-1) ρ + τ/k ,
U_1(a,t) = k-1/k .
Then,
U_1(t,t) - U_1(a,t) = 1/k[τ - (1 - ρ)(k-1) ]
= 1/kE' , where E'=τ - (1-ρ)(k-1) .
If E' ≥ 0 then a cannot be an ESS. We explain why E' ≥ 0:
Let's define the following function:
y(k,ρ) =(1 - ρ)(k-1) , with the restrictions: k ≤ d and ρ∈ (0,1) .
Then we define the function z(d,ρ):
z(d,ρ) = max_ky(k,ρ) = (1 - ρ)(d-1) ,
so,
τ^** = max_dz(d,ρ) = (1 - ρ)(n-1) .
Now, given that τ needs to be at least τ^** but strictly less than τ^* the following should hold:
(1 - ρ)(n-1) < ρ - (1 - √(1-ρ))^2 , or equivalently, ρ > 1 - 4/(n+1)^2 .
So we conclude that when d ≥ k then strategy a is not an ESS. This completes the proof of Lemma <ref> and Theorem <ref>.
§.§ An Interesting Consequence
An interesting consequence of the analysis above is the fact that if we possessed an algorithm, called DecESS, which decides in polynomial time whether a game Γ_k,τ,ρ(G) has an ESS, then the maximum clique size of graph G could also be found in polynomial time using the following binary search algorithm.
In this algorithm we supposed there is an algorithm, called DecESS, that uses as input the game: Γ_mid,τ',ρ'(G), where the mid value depends on the current min and max values of the algorithm, and τ' and ρ' are picked in the intervals:
* ρ' ∈(1 - 4/(max+1)^2, 1 - 1/(max+1)^2] and τ' ∈[(1 - ρ)(max-1), ρ - (1 - √(1-ρ))^2)
or
* ρ' ∈(1 - 1/(max+1)^2, 1 ) and τ' ∈[(1 - ρ)(max-1), (1 - ρ)(max-1) + 1/max+1) ,
for the current value of max in each loop of the algorithm. The output of DecESS is: “yes”, if there exists an ESS in Γ_mid,τ',ρ'(G), and “no” otherwise. So we construct a new game Γ_mid,τ,ρ(G) every time min or max (and therefore mid) are changed. Note that while the binary search runs, the maximum possible clique size of the graph (max) changes, so we can modify the intervals of our τ' and ρ' as if we had a new graph with |V|=max instead of n.
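A sketch of the search itself (a transcription of the description above; dec_ess stands for the assumed DecESS oracle, and eps is any small constant keeping ρ' strictly inside its open interval) is:
def binary_clique_search(G, dec_ess, n, eps=1e-12):
    lo, hi = 1, n                       # current min / max possible clique size
    while lo < hi:
        mid = (lo + hi + 1) // 2
        rho = 1 - 1 / (hi + 1)**2 + eps # admissible rho' for |V| = max = hi
        tau = (1 - rho) * (hi - 1)      # admissible tau' for this rho'
        if dec_ess(G, mid, tau, rho):   # ESS exists <=> no clique of size mid
            hi = mid - 1
        else:
            lo = mid
    return lo                           # the maximum clique size of G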
As the clique problem has been proved to be NP-complete, to find the maximum clique size of a given graph (max-clique problem) is NP-hard, thus, possession of the above mentioned algorithm would yield that P=NP.
To determine the time complexity of the Binary clique search algorithm let's suppose that the steps which DecESS needs are R(n) ∈ O(n^w) for some constant w. From the algorithm we can derive the recurrent relation for the steps needed:
T(m) = 4 + R(n) + T(⌈m/2⌉)
= 4 + R(n) + 4 + R(n) + T(⌈m/4⌉)
= ...
(the steps of DecESS are not dependent on the size m of the search list)
and in general
T(m) = (4 + R(n))i + T(⌈m/2^i⌉) .
The base case is:
T(⌈m/2^i⌉) = T(1)
⇒⌈m/2^i⌉ = 1 ⇒ i = ⌈log_2 m ⌉ ,
so, the latter equation is:
T(m) = (4 + R(n)) ⌈log_2 m ⌉ + T(1) .
Our initial condition is T(1)=1 .
Also, m=n. (We wrote m instead of n above because the steps R(n) of DecESS do not depend on the size m of the search list of each Binary clique search's loop, they only depend on the number of G's vertices n=|V|.) So, if we count the steps for the initialization of the variables along with the return command, the steps needed by Binary clique search are:
T(n) = (4 + R(n)) ⌈log_2 n ⌉ + 4 ,
which yields:
T(n) ∈ O(n^wlog_2 n) .
To sum up, if we have a polynomial time algorithm DecESS which decides if the game Γ_k,τ,ρ(G) has an ESS, then the max-clique problem is solvable in polynomial time, as we can always reduce an undirected graph G=(V, E) to ⌈log_2 n ⌉ number of Γ_k,τ,ρ(G) games, each of them in polynomial time and eventually find the maximum clique size of G in polynomial time using Binary clique search.
All in all, supposing the reduction from the graph to the game requires O(n^r) time for some constant r (as shown by <cit.>), the assumption of a DecESS in P yields that the max-clique problem requires O((n^r + n^w)log_2 n) time and that P=NP.
The ess problem with payoff values in the domains given in Theorem <ref> is coNP-hard.
§ EXTENDING THE REDUCTION WITH RESPECT TO Λ(K)
We now prove a generalization of the latter reduction for λ(k) = 1- 1/k^x, with x ≥ 3:
Let G=(V, E) be an undirected graph. The game Γ_k,τ,ρ^x(G) with λ(k) = 1- 1/k^x, for x ≥ 3 and
* ρ∈(1 + n^x-1-2^x/2^xn^x-1(n-1), 1 + (n+1)^x - n2^x/2^x(n+1)^x(n-1)] and
τ∈[(1 - ρ)(n-1) + 1 - 1/n^x-1, 1 - 1/2^x)
or
* ρ∈(1 + (n+1)^x - n2^x/2^x(n+1)^x(n-1), + ∞) and
τ∈[(1 - ρ)(n-1) + 1 - 1/n^x-1, (1-ρ)(n-1) + 1 - n/(n+1)^x)
has an ESS if and only if G has no clique of size k.
Let G=(V,E) be an undirected graph with maximum clique size d. We consider the game Γ_k,τ,ρ(G) defined in Subsection <ref>, with the only difference that now we substitute payoffs of value k-1/k with new payoffs k^x-1/k^x, meaning we make the change k ← k^x. Suppose s is an ESS of Γ_k,τ,ρ^x(G).
In this case, the same analysis as in Subsection <ref> is similarly applied up to the point where we prove that the only possible ESS of Γ_k,τ,ρ^x(G) is the pure strategy a.
Now we proceed to show the following lemma, which concludes also the proof of Theorem <ref>.
The game Γ_k,τ,ρ^x(G) with the requirements of Theorem <ref> has an ESS (strategy a) if and only if there is no clique of size k in graph G.
We consider again two cases for k:
§.§.§ Case 1: d<k
Let t ≠ a be a best response to a. Then supp(t) ⊆ V ∪{a}.
Let r = ∑_i ∈ V t(i). So r>0 (since t ≠ a) and t(a)=1-r. Combining Corollary <ref> and <ref> we get,
U_1(t,t) - U_1(a,t) = ∑_i,j ∈ Vt(i)t(j)u_1(i,j) + r· t(a) ·ρ +
+ t(a) · r ·k^x-1/k^x + t(a)^2 ·ρ - [r ·k^x-1/k^x + t(a) ·ρ]
≤ [ τ + (ρ - τ) d-1/d]r^2 + r(1-r) ·ρ +
+ (1-r)r k^x-1/k^x + (1-r)^2 ·ρ - r k^x-1/k^x - (1-r) ·ρ
= [ τ + (ρ - τ) d-1/d]r^2 - k^x-1/k^x r^2
= r^2/d[ τ - (1-ρ) (d-1) - (1- d/k^x) ]
= r^2/dE , where E=τ - (1-ρ) (d-1) - (1- d/k^x) .
If we can show that E<0 then strategy a is an ESS. We show why E<0:
Let's define the following function:
f(k,d,ρ) = (1-ρ)(d-1) + 1- d/k^x , with the restrictions: k ≥ d+1 , 1 ≤ d ≤ n , x ≥ 3 .
Then we define the function g(d,ρ):
g(d,ρ) = min_k f(k,d,ρ) = (1-ρ)(d-1) + 1- d/(d+1)^x .
Now, the first two partial derivatives of g(d,ρ) with respect to variable d, are:
∂ g(d,ρ)/∂ d = (1 - ρ) + (x-1)d-1/(d+1)^x+1
∂^2 g(d,ρ)/∂ d^2 = -x[(x-1)d-2]/(d+1)^x+2 , which is non-positive for d ≥ 1, x ≥ 3 .
This means that g is concave in d, so its minimum is attained at an endpoint, either d=1 or d=n:
g(1,ρ) = 1 - 1/2^x
g(n,ρ) = (1-ρ)(n-1) + 1 - n/(n+1)^x
If the minimum is g(1,ρ):
g(1,ρ) ≤ g(n,ρ) , or equivalently, ρ ≤ 1 + ((n+1)^x - n2^x)/(2^x(n+1)^x(n-1)) .
Then,
h(ρ) = min_dg(d,ρ) = 1 - 1/2^x .
If the minimum is g(n,ρ):
g(n,ρ) < g(1,ρ) , or equivalently, ρ > 1 + ((n+1)^x - n2^x)/(2^x(n+1)^x(n-1)) .
Then,
h(ρ) = min_dg(d,ρ) = (1-ρ)(n-1) + 1 - n/(n+1)^x .
So, following the notation we used in Subsection <ref>:
τ^* = min_{k,d} f(k,d,ρ) =
1 - 1/2^x , if ρ ≤ 1 + ((n+1)^x - n2^x)/(2^x(n+1)^x(n-1)) ;
(1-ρ)(n-1) + 1 - n/(n+1)^x , if ρ > 1 + ((n+1)^x - n2^x)/(2^x(n+1)^x(n-1)) .
Therefore, we can demand τ to be strictly less than τ^*, making U_1(t,t) - U_1(a,t) negative. We conclude that when d<k then strategy a is an ESS.
§.§.§ Case 2: d ≥ k
Let C ⊆ V be a clique of G of size k. Then t with t(i)=1/k for i ∈ C and t(j)=0 for j ∈ S ∖ C is a best response to a and t ≠ a, and
U_1(t,t) = ∑_i,j ∈ C t(i)t(j)u_1(i,j) = (1/k^2)·(k-1)k·ρ + (1/k^2)·k·τ = ((k-1)ρ + τ)/k ,
U_1(a,t) = (k^x-1)/k^x .
Then,
U_1(t,t) - U_1(a,t) = (1/k)[τ - (1 - ρ)(k-1) - (1 - 1/k^{x-1}) ]
= (1/k)E' , where E' = τ - (1 - ρ)(k-1) - (1 - 1/k^{x-1}) .
If E' ≥ 0 then a cannot be an ESS. We explain why E' ≥ 0:
Let's define the following function:
y(k,ρ) = (1 - ρ)(k-1) + 1 - 1/k^{x-1} , with the restriction k ≤ d .
Then we define the function z(d,ρ):
z(d,ρ) = max_k y(k,ρ) = (1 - ρ)(d-1) + 1 - 1/d^{x-1} ,
so,
τ^** = max_d z(d,ρ) = (1 - ρ)(n-1) + 1 - 1/n^{x-1} .
Now, given that τ needs to be at least τ^** but strictly less than τ^*, the following should hold:
(1 - ρ)(n-1) + 1 - 1/n^{x-1} < 1 - 1/2^x , or equivalently, ρ > 1 + (n^{x-1}-2^x)/(2^x n^{x-1}(n-1)) .
So we conclude that when d ≥ k then strategy a is not an ESS. This completes the proof of Lemma <ref> and Theorem <ref>.
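As a numerical sanity check of the two thresholds, the short sketch below (with illustrative values of n and x, not taken from the reduction itself) evaluates τ^* and τ^** and confirms that τ^** < τ^* whenever ρ exceeds the lower bound just derived, so a valid τ can always be chosen.

    def tau_star(n, rho, x):
        # tau* = min over 1 <= d <= n of g(d, rho), with g(d, rho) = f(d+1, d, rho)
        return min((1 - rho) * (d - 1) + 1 - d / (d + 1)**x for d in range(1, n + 1))

    def tau_star_star(n, rho, x):
        # tau** = max over 1 <= k <= n of y(k, rho)
        return max((1 - rho) * (k - 1) + 1 - 1 / k**(x - 1) for k in range(1, n + 1))

    n, x = 10, 3
    rho_min = 1 + (n**(x - 1) - 2**x) / (2**x * n**(x - 1) * (n - 1))
    for eps in (1e-6, 1e-2):
        assert tau_star_star(n, rho_min + eps, x) < tau_star(n, rho_min + eps, x)
    print("a valid tau exists for every rho >", rho_min)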
The ess problem with payoff values in the domains given in Theorem <ref> is coNP-hard.
§ OUR MAIN RESULT
Now we can prove our main theorem:
Any reduction as in Theorem <ref> for x=x_0≥ 3 from the complement of the clique problem to the ess problem is robust under arbitrary perturbations of values in the intervals:
τ ∈ [1 - 1/2^{x_0} - D, 1 - 1/2^{x_0} - D + B ),
ρ ∈ (1 + ((n+1)^{x_0} - n2^{x_0})/(2^{x_0}(n+1)^{x_0}(n-1)), 1 + ((n+1)^{x_0} - n2^{x_0})/(2^{x_0}(n+1)^{x_0}(n-1)) + A ),
λ ∈ [1-1/k^{x_0}, 1-1/k^{x_1}],
where x_1 ∈ (x_0, x_0 log_n(n+1)), C = ((n+1)^{x_0}-n^{x_1})/(n^{x_1-1}(n+1)^{x_0}(n-1)), D = C(n-1), any A ∈ (0,C) and B = (C-A)(n-1).
We partition the game's payoff matrix U into three disjoint sets U_τ, U_ρ, U_λ, with U_τ ∪ U_ρ ∪ U_λ = U, whose entries take the values τ, ρ and λ, respectively; all entries of a given set share the same value. For every λ ∈ [1-1/k^{x_0}, 1-1/k^{x_1}] there is an x = -log_k(1-λ) in the interval [x_0, x_1] such that λ = 1-1/k^x, where x_0 ≥ 3 and x_1 ∈ (x_0, x_0 log_n(n+1)). We will show that, for this x, any reduction with the values of τ, ρ in the respective intervals stated in Theorem <ref> is valid.
In Figure <ref>, we show the validity area of τ depending on ρ with parameter x, due to Theorem <ref>. The thin and thick plots bound the validity area (shaded) for x=x_0 and x=x_1 respectively.
As x increases, the parallel lines of the lower and upper bounds of τ move to the right, the horizontal line of the upper bound of τ moves up, and the left acute angle as well as the top obtuse angle of the plot move to the left (by examination of the monotonicity of those bounds with respect to x).
The lower bound of τ for an x=x' > x_0 equals the upper bound of τ for x=x_0 when x'=x_0 log_n(n+1). Thus, for all x ∈ (x_0, x_0 log_n(n+1)) there is a non-empty intersection between the validity areas. We have picked an x=x_1 ∈ (x_0, x_0 log_n(n+1)).
In Figure <ref>, we show a zoom-in of the intersection of the validity areas of Figure <ref>. Let the intersection of the lines τ = 1 - 1/2^{x_0} and τ = (1-ρ)(n-1) + 1 - 1/n^{x_1-1} be at the point ρ=ρ_C.
Then,
(1-ρ_C)(n-1) + 1 - 1/n^{x_1-1} = 1 - 1/2^{x_0} ,
or equivalently, ρ_C = 1 + 1/(2^{x_0}(n-1)) - 1/(n^{x_1-1}(n-1)) .
So,
C = 1 + ((n+1)^{x_0} - n2^{x_0})/(2^{x_0}(n+1)^{x_0}(n-1)) - ρ_C , or equivalently, C = ((n+1)^{x_0}-n^{x_1})/(n^{x_1-1}(n+1)^{x_0}(n-1)) .
From the upper bound of τ as a function of ρ we can see that tanφ = n-1. Thus,
D = C tanφ , or equivalently, D = ((n+1)^{x_0}-n^{x_1})/(n^{x_1-1}(n+1)^{x_0}) .
Now we can pick any A ∈ (0,C). So, it must be
B = (C-A) tanφ, or equivalently, B = (n-1)(C-A).
For the rectangle with sides A,B shown in Figure <ref>, the reduction is valid for all x ∈ [x_0,x_1], thus for all λ∈[1-1/k^x_0, 1-1/k^x_1]. This completes the proof of Theorem <ref>.
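To give a feel for the sizes of the perturbation windows, the sketch below evaluates C, D and one admissible choice of A and B; the values n=10 and x_0=3 are illustrative, not prescribed by the theorem.

    import math

    n, x0 = 10, 3
    x1 = x0 * (1 + math.log(n + 1, n)) / 2    # any x1 in (x0, x0*log_n(n+1))

    C = ((n + 1)**x0 - n**x1) / (n**(x1 - 1) * (n + 1)**x0 * (n - 1))
    D = C * (n - 1)                 # offset of the tau-interval
    A = C / 2                       # any A in (0, C) may be picked
    B = (C - A) * (n - 1)           # width of the tau-interval

    print(f"C = {C:.3e}, D = {D:.3e}, A = {A:.3e}, B = {B:.3e}")

For any admissible x_0 and x_1 the windows have positive length, which is all the robustness argument requires.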
§ CONCLUSIONS AND FURTHER WORK
In this work we introduce the notion of reduction robustness under arbitrary perturbations within an interval, and we provide a generalized reduction, based on the one in <cit.>, that proves coNP-hardness of ess. We demonstrate that our generalized reduction is robust, thus showing that the hardness of the problem is preserved even after certain arbitrary perturbations of the payoff values of the derived game. As future work, we would like to examine the robustness of reductions for other hard problems, especially game-theoretic ones.
|
http://arxiv.org/abs/1701.07558v4 | 20170126030329 | Time evolution of the Kondo resonance in response to a quench | [
"H. T. M. Nghiem",
"T. A. Costi"
] | cond-mat.str-el | [
"cond-mat.str-el"
] |
Peter Grünberg Institut and Institute for Advanced Simulation,
Research Centre Jülich, 52425 Jülich, Germany
Advanced Institute for Science and Technology, Hanoi University of
Science and Technology, 10000 Hanoi, Vietnam
Peter Grünberg Institut and Institute for Advanced Simulation,
Research Centre Jülich, 52425 Jülich, Germany
We investigate the time evolution of the Kondo resonance in response to a quench by applying the
time-dependent numerical renormalization group (TDNRG) approach to the Anderson impurity model in the strong correlation limit. For this purpose, we derive within TDNRG a numerically tractable expression for the retarded two-time nonequilibrium Green function G(t+t',t), and its associated time-dependent spectral function, A(ω,t), for times t both before and after the quench. Quenches from both mixed valence and Kondo correlated initial states to Kondo correlated final states are considered. For both cases, we find that the Kondo resonance in the zero temperature spectral function, a preformed version of which is evident at very short times t→ 0^+, only fully develops at very long times t≳ 1/T_ K, where T_ K is the Kondo temperature of the final state. In contrast, the final state satellite peaks develop on a fast time scale 1/Γ during the time interval -1/Γ≲ t ≲ +1/Γ, where Γ is the hybridization strength. Initial and final state spectral functions are recovered in the limits t→ -∞ and t→ +∞, respectively.
Our formulation of two-time nonequilibrium Green functions within TDNRG provides a first step towards using this method as an impurity solver within nonequilibrium dynamical mean field theory.
75.20.Hr, 71.27.+a, 72.15.Qm, 73.63.Kv
Time evolution of the Kondo resonance in response to a quench
H. T. M. Nghiem and T. A. Costi
December 30, 2023
=============================================================
Introduction.—
The nonequilibrium properties of strongly correlated quantum impurity models continue to pose a major theoretical challenge. This contrasts with their equilibrium properties, which are largely well understood <cit.>, or can be investigated within a number of highly accurate methods, such as the numerical renormalization group method (NRG) <cit.>, the continuous time quantum Monte Carlo (CTQMC) approach <cit.>, the density matrix renormalization group <cit.>, or the Bethe ansatz method <cit.>. Quantum impurity models far from equilibrium are of direct relevance to several fields of research, including charge transfer effects
in low-energy ion-surface scattering <cit.>,
transient and steady state effects in molecular and semiconductor quantum dots <cit.>, and also in the context of dynamical mean field theory (DMFT) of strongly correlated lattice models <cit.>, as generalized to nonequilibrium <cit.>. In the latter, further progress hinges on
an accurate non-perturbative solution for the nonequilibrium Green functions of an effective quantum impurity model. Such a solution, beyond allowing time-resolved spectroscopies of correlated lattice systems within DMFT to be addressed <cit.>, would
also be useful in understanding time-resolved scanning tunnelling microscopy of nanoscale systems <cit.> and proposed cold atom realizations of Kondo correlated states <cit.>, which could be probed with real-time radio-frequency spectroscopy <cit.>.
In this Letter, we use the time-dependent numerical renormalization group (TDNRG) approach <cit.> to calculate the retarded two-time Green function, G(t_1=t+t',t_2=t), and associated spectral function,
A(ω,t), of the Anderson impurity model in response to a quench at time t=0, and apply this to investigate in detail
the time evolution of the Kondo resonance.
This topic has been addressed before within several approaches, including the non-crossing approximation <cit.>, conserving approximations <cit.> and within CTQMC for quantum dots out of equilibrium <cit.>.
Related work on the temporal evolution of the spin-spin correlation function in the Kondo model and thermalization in the Anderson impurity model following initial state preparations has also been carried out <cit.>. Formulations of the time-dependent spectral function within TDNRG are also available <cit.>, but only for positive times.
Here, we derive expressions for the two-time Green function and spectral function A(ω,t) which are numerically tractable at arbitrary times, including negative times. The main advantages of the TDNRG over other approaches for calculating time-dependent spectral functions
is that it can access arbitrary long times (t→±∞) and arbitrary low temperatures and frequencies, is non-perturbative
and calculates spectral functions directly on the real frequency axis. It is therefore well suited for investigating the formation in time of the exponentially narrow and low temperature Kondo resonance [The use of a discretized Wilson chain within TDNRG results, to a small degree, in incomplete
thermalization at t→∞ in thermodynamic observables <cit.>,
and in spectral functions <cit.>, but it suffices for a consistent nonperturbative picture of the overall time evolution of A(ω,t).]
Model and quenches.—We consider the time-dependent Anderson impurity model,
H = ∑_σε_d(t)n_dσ+U(t)n_d↑n_d↓ + ∑_kσϵ_kc^†_kσc_kσ
+∑_kσ V(c^†_kσd_σ+d^†_σc_kσ),
where ε_d(t)=θ(-t)ε_i+θ(t)ε_f is the energy of the local level,
U(t) =θ(-t)U_i +θ(t) U_f is the local Coulomb interaction, σ labels the spin, n_dσ=d_σ^†d_σ is the number operator for local electrons with spin σ, and ε_k is the kinetic energy of the conduction electrons with constant density of states ρ(ω)=∑_kδ(ω-ε_k)=1/2D with D=1 the half-bandwidth. We take
Γ≡πρ(0)V^2=0.001
throughout and consider two types of quench [referred to subsequently as quench (A) or quench (B)]: (A), from a symmetric Kondo regime with ε_i=-15Γ, U_i=30Γ and a vanishingly small Kondo scale T^i_ K=3× 10^-8 [We use the Bethe ansatz expression
T_ K=√(Γ U/2)e^-π U/8Γ + πΓ/2U valid in the symmetric Kondo limit U/πΓ≫ 1 <cit.>.] to a symmetric Kondo regime with ε_f=-6Γ,U_f=12Γ and a larger Kondo scale T_ K=2.5× 10^-5, and, (B), from a mixed valence regime with ε_i=-Γ, U_i=8Γ to a symmetric Kondo regime with ε_f=-4Γ, U_f=8Γ and a Kondo scale T_ K=1.0× 10^-4.
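The quoted Kondo scales follow directly from the Bethe ansatz expression in the footnote; as a minimal check (using only the parameters defined above):

    import math

    def t_kondo(gamma, u):
        # T_K = sqrt(Gamma*U/2) * exp(-pi*U/(8*Gamma) + pi*Gamma/(2*U)),
        # valid in the symmetric Kondo limit U/(pi*Gamma) >> 1
        return math.sqrt(gamma * u / 2) * math.exp(
            -math.pi * u / (8 * gamma) + math.pi * gamma / (2 * u))

    gamma = 0.001                        # hybridization strength (D = 1)
    print(t_kondo(gamma, 30 * gamma))    # quench (A), initial state: ~3e-8
    print(t_kondo(gamma, 12 * gamma))    # quench (A), final state:  ~2.5e-5
    print(t_kondo(gamma, 8 * gamma))     # quench (B), final state:  ~1.0e-4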
Spectral function A(ω,t).—
We obtain the time-dependent spectral function via A(ω,t)=-1/π Im[G(ω+iη,t)], where G(ω+iη,t), with infinitesimal η>0, is the Fourier transform of G(t+t',t)≡ -iθ(t')⟨[d_σ(t+t'), d_σ^†(t)]_+⟩_ρ̂ with respect to the relative time t', and ρ̂ denotes the full density matrix of the initial state <cit.>.
In the notation of Ref. Nghiem2014a, we find for the case of positive times [See Supplementary Material [URL] for derivations and additional results, including Ref. Bulla2001.]
G(ω+iη,t)
=∑_m=m_0^N∑_rsq^∉ KK'K”ρ_sr^i→ f(m)e^-i(E_s^m-E_r^m)t
×(B^m_rqC^m_qs/(ω+E^m_r-E^m_q+iη)
+C^m_rqB^m_qs/(ω+E^m_q-E^m_s+iη)),
where B=d_σ, C=d_σ^†, and ρ_sr^i→ f(m)=∑_e_f⟨ sem|ρ̂|rem⟩_f is the full reduced density matrix projected onto the final states <cit.>. A somewhat more complicated expression can be derived for negative times <cit.>. From Eq. (<ref>), we see that the spectral function can be calculated highly efficiently at
all times and frequencies from a knowledge of ρ_sr^i→ f(m), the final state matrix elements,
and excitations
at each shell m.
Our expressions for A(ω,t) in the two time domains t<0 and t>0 recover the initial and final state spectral functions for t→ -∞ and t→ +∞, respectively, and satisfy the spectral sum rule ∫_-∞^+∞ dω A(ω,t)=1 exactly <cit.>.
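To make the structure of Eq. (<ref>) concrete, a schematic evaluation is sketched below. Here E, B, C and rho are placeholder arrays standing for the stored NRG eigenenergies, the d_σ and d_σ^† matrix elements, and the projected density matrices of each shell m, and the Lorentzian width eta replaces the infinitesimal η; the restriction of (r,s,q) to "not all kept" states is assumed to be encoded in the stored arrays.

    import numpy as np

    def spectral_function(omega, t, E, B, C, rho, eta=1e-3):
        # A(omega, t) = -Im G(omega + i*eta, t) / pi from the expression above;
        # only time-independent inputs are needed, t enters via a phase factor.
        G = 0.0 + 0.0j
        for m in range(len(E)):
            for r in range(len(E[m])):
                for s in range(len(E[m])):
                    w = rho[m][s, r] * np.exp(-1j * (E[m][s] - E[m][r]) * t)
                    for q in range(len(E[m])):
                        G += w * (B[m][r, q] * C[m][q, s]
                                  / (omega + E[m][r] - E[m][q] + 1j * eta)
                                  + C[m][r, q] * B[m][q, s]
                                  / (omega + E[m][q] - E[m][s] + 1j * eta))
        return -G.imag / np.pi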
Below, we shall first focus on positive times, where the main time evolution of the Kondo resonance occurs, then on negative to positive times, showing how the high energy final state features in A(ω,t) evolve from their initial state counterparts already at negative times.
Results for positive times.—
Consider quench (A), i.e., switching between symmetric Kondo regimes with T_ K^i≪ T_ K.
Figure <ref>(a) shows the overall time-dependence of the spectral function A(ω>0,t>0)=A(-ω,t>0). Two structures, associated with two energy scales, are visible at all times t>0 : the satellite peak at ω=ε_f+U_f≈ 240 T_ K and a structure on the scale of T_ K around the Fermi level. The former has negligible time-dependence, indicating that the satellite peak in the spectral function has already formed by time t=0 (its evolution at negative times from the initial state satellite peak at ω=ε_i+U_i>ε_f+U_f is discussed below). In contrast to this, the structure around the Fermi level has significant time-dependence at t>0 and evolves into the fully formed final state Kondo resonance only on time scales t≳ 1/T_ K [Figs. <ref>(c) and <ref>(d)] in agreement with Ref. Nordlander1999 for the U=∞ Anderson model.
For tT_ K≫ 1, the height of the Kondo resonance at the Fermi level approaches its unitary value given by the Friedel sum rule πΓ A(ω=0,t→∞)=1 to within 15% [Fig. <ref>(d)]. The small deviation from the expected value is a result
of incomplete thermalization due to the discretized Wilson chain used within TDNRG <cit.>. Consequently,
evaluating A(ω,t→∞) via the self-energy <cit.> does not improve the Friedel sum rule further in this limit <cit.>. In the opposite limit,
t→ -∞, where thermalization is not an issue, we recover the Friedel sum rule to within 3% (discussed below). The use of a discrete Wilson chain is also the
origin of the small substructures at |ω|≲ T_ K in Figs. <ref>(b)-<ref>(d), effects seen in the time evolution of other quantities, such as the
local occupation, and explained in terms of the discrete Wilson chain <cit.>. On shorter time scales, tT_ K≲ 1, states in the region T_ K^i≪ |ω|<T_ K, initially missing [Fig. <ref>(b)], are gradually filled in by a transfer of spectral weight from higher energies [Fig. <ref>(c)] to form the final state Kondo resonance at long times [Fig. <ref>(d)]. The presence of a structure on the final state Kondo scale T_ K at short times t→ 0^+ is understood as follows:
the Fourier transform with respect to t'=t_1-t_2 necessarily convolutes information about the final state at large t_1,t_2 into the spectral function at short-times t <cit.>. Hence, the gross features of the spectral function, even at short times t→ 0^+, are close to those of the final state spectral function A(ω,t→∞), and far from those of the initial state spectral function. Clear signatures of the latter, such as the much narrower initial state Kondo peak, only appear at negative times.
Consider now quench (B), in which the system is switched from the mixed valence to the symmetric Kondo regime. Figures <ref>(a)-(b) show the overall time-dependence of the spectral function for ω<0 [Figure <ref>(a)]
and ω>0 [Figure <ref>(b)]. As for quench (A), two structures associated with two energy scales are again visible at all times t>0: the satellite peaks at ω=ε_f≈ -40T_ K [Figure <ref>(a)] and ω=ε_f+U≈ +40 T_ K [Figure <ref>(b)] and a structure on the scale of T_ K around the Fermi level [Figs. <ref>(a) and <ref>(b)]. In contrast to quench (A), the former have some non-negligible time-dependence at short positive times as can be seen in Fig. <ref>(c) for tT_ K=10^-4 (tΓ=10^-3), where the weight of the satellite peaks has still not equalized.
This asymmetry vanishes on time scales exceeding 1/Γ [Figs. <ref>(d) and <ref>(e) for tT_ K=1 (tΓ=10) and tT_ K=10^4 (tΓ=10^3), respectively]. The low energy structure of width T_ K, initially asymmetric and exceeding the unitary height 1/πΓ,
has significant time-dependence for t>0 and evolves into the fully developed Kondo resonance at t≳ 1/T_ K [Figs. <ref>(d) and <ref>(e)]. The deviation from the Friedel sum rule πΓ A(ω=0,t→∞)=1 is comparable to that for quench (A) and reflects the incomplete thermalization due to the discrete Wilson chain used within TDNRG. The discrete Wilson chain also results in the substructures at |ω|≲ T_ K in Figs. <ref>(c) and <ref>(d) and in the small remaining asymmetry of the fully developed Kondo resonance in Fig. <ref>(e).
From negative to positive times.—
Figures <ref>(a) and <ref>(b) show the overall time-dependence
of the spectral function for negative and positive times, respectively, for quench (A), on a linear frequency scale.
As for positive times [Fig. <ref>(a) and Fig. <ref>(b)], low and high energy structures are visible also for negative times [Fig. <ref>(a)].
Moreover, it is clear from Figs. <ref>(a) and <ref>(b) that the transition from the initial to the final state spectral function occurs on different time scales for the different structures. Consider
first the high energy structures, which carry essentially all the spectral weight. Initially, these are located at ω=±ε_i≈± 600 T_ K as is clearly visible in
Fig. <ref>(a) or in Fig. <ref>(c) for tT_ K=-10^3 (tΓ=-4× 10^4≪ -1). They cross over to their final state positions at ω=±ε_f=± 240T_ K when tT_ K≳ -10^-2 (tΓ≳ -0.4) [Figs. <ref>(a) and <ref>(e)], i.e., on the charge fluctuation time scale 1/Γ. This can also be seen in Figs. <ref>(d) and <ref>(e). This large shift in spectral weight
from ω=±ε_i to ω=±ε_f in the time-range -10^-2≲ tT_ K≲ -10^-3 (-0.4≲ tΓ≲ -0.04), clearly
seen in Fig. <ref>(a), is accompanied by small regions of negative spectral weight in this transient time range <cit.>.
This does not violate any exact results for time-dependent, as opposed to steady-state, spectral functions, and is observed in other systems <cit.>.
The spectral sum rule is satisfied analytically exactly at all times and numerically within 1% at all negative times and to higher accuracy
at positive times for all quench protocols <cit.>. Turning now to the low energy structure, i.e., the Kondo resonance, the use of a linear frequency scale now allows the initial state Kondo resonance at ω=0 to be clearly seen in Fig. <ref>(a) [see also Fig. <ref>(c)].
This structure, of width T_ K^i≪ T_ K at t→ -∞ and satisfying the Friedel sum rule πΓ A(ω=0,t→-∞)=1, gradually broadens and acquires a width of T_ K at short negative times <cit.>, and then evolves into the fully developed Kondo resonance on positive time scales tT_ K≳ 1 [Fig. <ref>(e)].
Even more interesting is the negative [Fig. <ref>(a)] to positive [Fig. <ref>(b)] time evolution of the spectral function upon quenching from the mixed valence
to the symmetric Kondo regime [quench (B)]. At large negative times [Fig. <ref>(c)], one recovers the initial state spectral function of the mixed valence regime (with ε_i=-Γ) showing a mixed valence resonance, renormalized by many-body effects to lie close to, but just above the Fermi level ε_i→ε̃_i≳ 0 and satisfying the Friedel sum rule A(0,t→ -∞)=sin^2(π n_d/2)/πΓ to within 3% <cit.>
[Figs. <ref>(a) and <ref>(c), n_d=0.675]. The upper satellite peak at ω = ε_i+U_i=7Γ≈ 70T_ K is more clearly visible in Fig. <ref>(c). These peaks give rise to the final state satellite peaks at ω=±ε_f=± 4Γ≈± 40 T_ K which start to form already at negative times tT_ K≳ -10^-1 (tΓ≳ -1), i.e., on the charge fluctuation time scale 1/Γ, as for quench (A). While the positions of these peaks start to shift to their final state values at negative times tT_ K≳ -10^-1 (tΓ≳ -1), their weights
remain disparate [see Fig. <ref>(e)] and only equalize at tT_ K≳ +10^-1 (tΓ≳ +1) as clearly seen in Fig. <ref>(b), i.e., the formation of the high energy
final state satellite peaks occurs on a fast time scale t≈ 1/Γ in the interval -1/Γ≲ t ≲ +1/Γ (dashed lines in Fig. <ref>).
Going into more details, we see in Figs. <ref>(a) and <ref>(c)-(e) the deconstruction of the mixed valence resonance in the time range -1/Γ < t <0. While this resonance carries essentially all the spectral weight at t≪ -1/Γ,
weight is gradually transferred to ω <0, with precursor oscillations starting at tT_ K=-1 (tΓ=-10) [Fig. <ref>(d)], to form the lower final state satellite peak at ω=ε_f for -1/Γ < t <0 [Fig. <ref>(e)] . Simultaneously, the mixed valence resonance narrows from its original width Γ≈ 10T_ K and shifts towards the Fermi level to form a low energy structure on the scale of T_ K [Fig. <ref>(e)]. The latter eventually evolves into the final state Kondo resonance at tT_ K≳ 1. The final state spectral function is recovered in the long-time limit tT_ K≫ 1 [Fig. <ref>(f)].
Conclusions.—
In summary, we investigated within the TDNRG the time evolution of the spectral function of the Anderson impurity model in the strong correlation limit.
Quenching into a Kondo correlated final state, we showed that the Kondo resonance in the zero temperature spectral function only fully develops at very long times t≳ 1/T_ K, although a preformed version of it is evident even at very short times t→ 0^+. The latter can be used as a smoking gun signature of the transient build up of the Kondo resonance in future cold atom realizations of the Anderson impurity model <cit.>. The satellite peaks evolve from their initial state values at negative times on a much faster time scale t≈ 1/Γ in the time-interval -1/Γ≲ t ≲ 1/Γ. Our formulation of sum rule conserving two-time nonequilibrium Green functions within TDNRG, including lesser Green functions, and their explicit dependence on both times <cit.>, yields the basic information required for applications to time-dependent quantum transport <cit.> and constitutes a first step towards using TDNRG as an impurity solver within nonequilibrium DMFT <cit.>.
H. T. M. N. thanks Hung T. Dang for fruitful discussions. We acknowledge support from the Deutsche Forschungsgemeinschaft via RTG 1995 and supercomputer support by the John von Neumann institute for Computing (Jülich). One of the authors (T. A. C.) acknowledges useful discussions with A. Rosch, J. K. Freericks and the hospitality of the Aspen Center for Physics, supported by the National Science Foundation under grant PHY-1607611, during completion of this work.
References

[Hewson1993] A. C. Hewson, Phys. Rev. Lett. 70, 4007 (1993).
[Wilson1975] K. G. Wilson, Rev. Mod. Phys. 47, 773 (1975).
[KWW1980a] H. R. Krishna-murthy, J. W. Wilkins, and K. G. Wilson, Phys. Rev. B 21, 1003 (1980).
[Bulla2008] R. Bulla, T. A. Costi, and T. Pruschke, Rev. Mod. Phys. 80, 395 (2008).
[Gonzalez-Buxton1998] C. Gonzalez-Buxton and K. Ingersent, Phys. Rev. B 57, 14254 (1998).
[Gull2011b] E. Gull, A. J. Millis, A. I. Lichtenstein, A. N. Rubtsov, M. Troyer, and P. Werner, Rev. Mod. Phys. 83, 349 (2011).
[White1992] S. R. White, Phys. Rev. Lett. 69, 2863 (1992).
[Tsvelick1983b] A. M. Tsvelick and P. B. Wiegmann, Adv. Phys. 32, 453 (1983).
[Andrei2013] N. Andrei, "Integrable models in condensed matter physics," in Low-Dimensional Quantum Field Theories for Condensed Matter Physicists (World Scientific, 2013), pp. 457-551.
[Brako1981] R. Brako and D. Newns, Surf. Sci. 108, 253 (1981).
[Kasai1987] H. Kasai and A. Okiji, Surf. Sci. 183, 147 (1987).
[Merino1998] J. Merino and J. B. Marston, Phys. Rev. B 58, 6982 (1998).
[Langreth1991] D. C. Langreth and P. Nordlander, Phys. Rev. B 43, 2541 (1991).
[Shao1994a] H. Shao, D. C. Langreth, and P. Nordlander, Phys. Rev. B 49, 13929 (1994).
[Shao1994b] H. Shao, D. C. Langreth, and P. Nordlander, Phys. Rev. B 49, 13948 (1994).
[Pamperin2015] M. Pamperin, F. X. Bronold, and H. Fehske, Phys. Rev. B 91, 035440 (2015).
[He2010] X. He and J. A. Yarmoff, Phys. Rev. Lett. 105, 176806 (2010).
[Hershfield1992] S. Hershfield, J. H. Davies, and J. W. Wilkins, Phys. Rev. B 46, 7046 (1992).
[Hershfield1993] S. Hershfield, Phys. Rev. Lett. 70, 2134 (1993).
[Meir1993] Y. Meir, N. S. Wingreen, and P. A. Lee, Phys. Rev. Lett. 70, 2601 (1993).
[Bruder1994] C. Bruder and H. Schoeller, Phys. Rev. Lett. 72, 1076 (1994).
[Kretinin2011] A. V. Kretinin, H. Shtrikman, D. Goldhaber-Gordon, M. Hanl, A. Weichselbaum, J. von Delft, T. Costi, and D. Mahalu, Phys. Rev. B 84, 245316 (2011).
[Kretinin2012] A. V. Kretinin, H. Shtrikman, and D. Mahalu, Phys. Rev. B 85, 201301 (2012).
[Pletyukhov2012] M. Pletyukhov and H. Schoeller, Phys. Rev. Lett. 108, 260601 (2012).
[Scott2013] G. D. Scott, D. Natelson, S. Kirchner, and E. Muñoz, Phys. Rev. B 87, 241104 (2013).
[Nordlander1999] P. Nordlander, M. Pustilnik, Y. Meir, N. S. Wingreen, and D. C. Langreth, Phys. Rev. Lett. 83, 808 (1999).
[Park2002] J. Park, A. Pasupathy, J. Goldsmith, C. Chang, Y. Yaish, J. Petta, M. Rinkoski, J. Sethna, H. Abruna, P. McEuen, and D. Ralph, Nature 417, 722 (2002).
[Kogan2004] A. Kogan, S. Amasha, and M. A. Kastner, Science 304, 1293 (2004).
[Hemingway2014] B. Hemingway, S. Herbert, M. Melloch, and A. Kogan, Phys. Rev. B 90, 125151 (2014).
[Jauho1994] A.-P. Jauho, N. S. Wingreen, and Y. Meir, Phys. Rev. B 50, 5528 (1994).
[Kennes2012a] D. M. Kennes, S. G. Jakobs, C. Karrasch, and V. Meden, Phys. Rev. B 85, 085113 (2012).
[Cohen2014a] G. Cohen, E. Gull, D. R. Reichman, and A. J. Millis, Phys. Rev. Lett. 112, 146802 (2014).
[Schmidt2008] T. L. Schmidt, P. Werner, L. Mühlbacher, and A. Komnik, Phys. Rev. B 78, 235110 (2008).
[Dorda2014] A. Dorda, M. Nuss, W. von der Linden, and E. Arrigoni, Phys. Rev. B 89, 165105 (2014).
[Rosch2003a] A. Rosch, J. Paaske, J. Kroha, and P. Wölfle, Phys. Rev. Lett. 90, 076804 (2003).
[Antipov2016] A. E. Antipov, Q. Dong, and E. Gull, Phys. Rev. Lett. 116, 036801 (2016).
[Metzner1989] W. Metzner and D. Vollhardt, Phys. Rev. Lett. 62, 324 (1989).
[Georges1996] A. Georges, G. Kotliar, W. Krauth, and M. J. Rozenberg, Rev. Mod. Phys. 68, 13 (1996).
[Kotliar2004] G. Kotliar and D. Vollhardt, Phys. Today 57, 53 (2004).
[Schmidt2002] P. Schmidt and H. Monien, arXiv:cond-mat/0202046 (2002).
[Freericks2006] J. K. Freericks, V. M. Turkowski, and V. Zlatić, Phys. Rev. Lett. 97, 266408 (2006).
[Aoki2014] H. Aoki, N. Tsuji, M. Eckstein, M. Kollar, T. Oka, and P. Werner, Rev. Mod. Phys. 86, 779 (2014).
[Eckstein2008] M. Eckstein and M. Kollar, Phys. Rev. B 78, 205119 (2008).
[Freericks2009] J. K. Freericks, H. R. Krishnamurthy, and T. Pruschke, Phys. Rev. Lett. 102, 136401 (2009).
[Perfetti2006] L. Perfetti, P. A. Loukakos, M. Lisowski, U. Bovensiepen, H. Berger, S. Biermann, P. S. Cornaglia, A. Georges, and M. Wolf, Phys. Rev. Lett. 97, 067402 (2006).
[Loukakos2007] P. A. Loukakos, M. Lisowski, G. Bihlmayer, S. Blügel, M. Wolf, and U. Bovensiepen, Phys. Rev. Lett. 98, 097401 (2007).
[Iyoda2014] E. Iyoda and S. Ishihara, Phys. Rev. B 89, 125126 (2014).
[Loth2010] S. Loth, M. Etzkorn, C. P. Lutz, D. M. Eigler, and A. J. Heinrich, Science 329, 1628 (2010).
[Nishida2013] Y. Nishida, Phys. Rev. Lett. 111, 135301 (2013).
[Bauer2013] J. Bauer, C. Salomon, and E. Demler, Phys. Rev. Lett. 111, 215304 (2013).
[Nishida2016] Y. Nishida, Phys. Rev. A 93, 011606 (2016).
[Riegger2017] L. Riegger, N. Darkwah Oppong, M. Höfer, D. Rio Fernandes, I. Bloch, and S. Fölling, arXiv:1708.03810 (2017).
[Goold2011] J. Goold, T. Fogarty, N. Lo Gullo, M. Paternostro, and T. Busch, Phys. Rev. A 84, 063632 (2011).
[Knap2012] M. Knap, A. Shashi, Y. Nishida, A. Imambekov, D. A. Abanin, and E. Demler, Phys. Rev. X 2, 041020 (2012).
[Cetina2016] M. Cetina, M. Jag, R. S. Lous, I. Fritsche, J. T. M. Walraven, R. Grimm, J. Levinsen, M. M. Parish, R. Schmidt, M. Knap, and E. Demler, Science 354, 96 (2016).
[Anders2005] F. B. Anders and A. Schiller, Phys. Rev. Lett. 95, 196801 (2005).
[Anders2006] F. B. Anders and A. Schiller, Phys. Rev. B 74, 245113 (2006).
[Anders2008a] F. B. Anders, Phys. Rev. Lett. 101, 066804 (2008).
[Anders2008b] F. B. Anders, J. Phys.: Condens. Matter 20, 195216 (2008).
[Guettge2013] F. Güttge, F. B. Anders, U. Schollwöck, E. Eidelstein, and A. Schiller, Phys. Rev. B 87, 115115 (2013).
[Nghiem2014a] H. T. M. Nghiem and T. A. Costi, Phys. Rev. B 89, 075118 (2014).
[Nghiem2014b] H. T. M. Nghiem and T. A. Costi, Phys. Rev. B 90, 035129 (2014).
[Randi2017] F. Randi, D. Fausti, and M. Eckstein, Phys. Rev. B 95, 115132 (2017).
[Bock2016] S. Bock, A. Liluashvili, and T. Gasenzer, Phys. Rev. B 94, 045108 (2016).
[Lobaskin2005] D. Lobaskin and S. Kehrein, Phys. Rev. B 71, 193303 (2005).
[Heyl2010] M. Heyl and S. Kehrein, Phys. Rev. B 81, 144301 (2010).
[Weymann2015] I. Weymann, J. von Delft, and A. Weichselbaum, Phys. Rev. B 92, 155435 (2015).
[Note1] The use of a discretized Wilson chain within TDNRG results, to a small degree, in incomplete thermalization at t→∞ in thermodynamic observables <cit.>, and in spectral functions <cit.>, but it suffices for a consistent nonperturbative picture of the overall time evolution of A(ω,t).
[Note2] We use the Bethe ansatz expression T_K=√(ΓU/2) e^{-πU/(8Γ) + πΓ/(2U)}, valid in the symmetric Kondo limit U/πΓ ≫ 1 <cit.>.
[Weichselbaum2007] A. Weichselbaum and J. von Delft, Phys. Rev. Lett. 99, 076402 (2007).
[Peters2006] R. Peters, T. Pruschke, and F. B. Anders, Phys. Rev. B 74, 245114 (2006).
[Costi2010] T. A. Costi and V. Zlatić, Phys. Rev. B 81, 235127 (2010).
[Note3] See Supplementary Material [URL] for derivations and additional results, including Ref. Bulla2001.
[Oliveira1994] W. C. Oliveira and L. N. Oliveira, Phys. Rev. B 49, 11986 (1994).
[Campo2005] V. L. Campo and L. N. Oliveira, Phys. Rev. B 72, 104432 (2005).
[Rosch2012] A. Rosch, Eur. Phys. J. B 85, 1 (2012).
[Bulla1998] R. Bulla, A. C. Hewson, and T. Pruschke, J. Phys.: Condens. Matter 10, 8365 (1998).
[Eidelstein2012] E. Eidelstein, A. Schiller, F. Güttge, and F. B. Anders, Phys. Rev. B 85, 075118 (2012).
[Turkowski2005] V. Turkowski and J. K. Freericks, Phys. Rev. B 71, 085104 (2005).
[Dirks2013] A. Dirks, M. Eckstein, T. Pruschke, and P. Werner, Phys. Rev. E 87, 023305 (2013).
[Freericks2009b] J. K. Freericks and V. Turkowski, Phys. Rev. B 80, 115119 (2009).
[Costi1996b] T. A. Costi, J. Kroha, and P. Wölfle, Phys. Rev. B 53, 1850 (1996).
[Gramsch2013] C. Gramsch, K. Balzer, M. Eckstein, and M. Kollar, Phys. Rev. B 88, 235106 (2013).
[Nghiem2016] H. T. M. Nghiem, D. M. Kennes, C. Klöckner, V. Meden, and T. A. Costi, Phys. Rev. B 93, 165130 (2016).
[Zlatic1983] V. Zlatić and B. Horvatić, Phys. Rev. B 28, 6904 (1983).
[Bulla2001] R. Bulla, T. A. Costi, and D. Vollhardt, Phys. Rev. B 64, 045103 (2001).
Supplementary Material on “Time evolution of the Kondo resonance in response to a quench”
In this supplementary material, we derive numerically tractable expressions
for the retarded two-time Green function, G(t+t',t), and the associated
time-dependent spectral function, A(ω,t), within the single
quench TDNRG for both positive (t>0) and negative (t<0) times.
For positive times, we compare our numerically tractable expression, obtained within the
full density matrix approach <cit.>, to a numerically
more time-consuming expression obtained for positive times only in Ref. S_Anders2008b and
compare spectral densities from the two approaches at selected times
for the Anderson impurity model.
The t→±∞ and t→ 0^± limits of A(ω,t), are
discussed and we prove that our expressions for A(ω,t) satisfy
the spectral weight sum rule ∫_-∞^+∞dω
A(ω,t)=1 exactly analytically. Details of the numerical
evaluation of the spectral functions is given and we show that the
numerical error for the spectral weight sum rule lies below ≈ 1% for all quench
protocols studied. The effect of the discrete Wilson chain on the
Friedel sum rule and the time evolution are discussed. For
completeness, we show results
for the reverse of quench (B) in the main text,
and make comparisons with non-equilibrium
non-crossing approximation results at finite low temperature
<cit.> and with a hybridization quench <cit.>.
The expression for the lesser Green function is given and used to calculate the time
dependence of the local occupation number of the Anderson
model. Finally, the explicit dependence of the retarded Green function
on its two time arguments is illustrated numerically.
§ RETARDED TWO-TIME NONEQUILIBRIUM GREEN FUNCTION IN TDNRG
We consider the retarded two-time Green function
G_BC(t+t',t)=-iθ(t')Tr{ρ̂[B̂(t+t'),Ĉ(t)]_s}
for a system undergoing a quantum quench at t=0 as described by the
time-dependent Hamiltonian H(t)=(1-θ(t))H_i+θ(t)H_f,
with ρ̂=e^-β H_i the density matrix of the initial state,
represented by the full density matrix in Eq. (<ref>) <cit.>. Since
t'>0, we have two cases to consider, (i), t>0, in which case both operators
B and C evolve with respect to H_f, and, (ii), t<0, in
which case, either both operators evolve with respect to H_i if
t<t+t'<0, or if t<0<t+t' operator C evolves with respect to
H_i, while operator B evolves with respect to H_f. While the
Fourier transform with respect to the time difference t' yielding
G(ω+iη,t) and hence A(ω,t) is straightforward in
case (i), in case (ii), expressions for G(t+t',t) are needed from both time
domains t<t+t'<0 and t<0<t+t' in order to construct
G(ω+iη,t<0) and hence A(ω,t<0). Depending on the
physical system considered, both positive and negative times may be of
interest. Thus, in problems where the quench represents an initial
state preparation, the main interest is in the evolution at t>0
following this preparation <cit.>. However, if the quench is considered to be
a perturbation applied to the system at time t=0, the full
time evolution is of interest. This case is also required for
applications to nonequilibrium dynamical field theory
(DMFT)<cit.>.
From a theoretical point of view, t<0 is also of interest to fully describe the evolution of the spectral
function A(ω,t) from its initial state value at t=-∞
to its final state value at t=+∞.
§.§ Positive time-dependence t>0
We first consider the case t>0, treating t<0 in the
next subsection. We have for the retarded Green function,
G_BC(t+t',t) =-iθ(t')Tr{ρ̂[B̂(t+t'),Ĉ(t)]_s}
=-iθ(t')Tr{ρ̂[e^iH_f(t+t')B̂e^-iH_f(t+t'),e^iH_ftĈe^-iH_ft]_s}
=-iθ(t')Tr{e^-iH_ftρ̂e^iH_ft[e^iH_ft'B̂e^-iH_ft',C]_s}
=-iθ(t')Tr{ρ̂(t)[B̂(t'),Ĉ]_s},
where s=± 1 for fermionic/bosonic Green functions, respectively.
In the notation of Ref. S_Nghiem2014a and following the
approach of Anders <cit.>, we have for the first (BC) term of the anticommutator with t>0
I_1(t+t',t)=- iTr{ρ̂(t)B̂(̂t̂'̂)̂Ĉ}
=- i∑_m=m_0^N∑_le_f⟨ lem|ρ̂(t)B̂(t')Ĉ|lem⟩_f
=- i∑_m=m_0^N∑_rs^∉ KK'∑_ee'_f⟨ sem|ρ̂(t)|re'm⟩_f_f⟨ re'm|B̂(t')Ĉ|sem⟩_f,
where |lem⟩, m=m_0,…,N is the complete set of eliminated
states in the NRG diagonalization procedure, with m_0 the first
iteration at which states are eliminated and N is the last NRG
iteration (see Refs. <cit.> for details).
We evaluate _f⟨ re'm|B̂(t')Ĉ|sem⟩_f by
inserting the decomposition of unity <cit.> I=I_m^+ +I_m^- between B̂(t') and Ĉ,
_f⟨ re'm|B̂(t')Ĉ|sem⟩_f =_f⟨ re'm|B̂(t')(I_m^+ +I_m^-)Ĉ|sem⟩_f
=∑_qe”_f⟨ re'm|B̂(t')|qe”m⟩_f_f⟨ qe”m|Ĉ|sem⟩_f
+∑_m'=m_0^m-1∑_le”_f⟨ re'm|B̂(t')|le”m'⟩_f_f⟨ le”m'|Ĉ|sem⟩_f.
The first term in the above expression is diagonal in the environment
variables e, e' , and q runs over all states (kept and
discarded) at the shell m. We put 1=1_m'^+ +1_m'^- in the last term to obtain
∑_m'=m_0^m-1∑_le”_f⟨ re'm|(1_m'^+ +1_m'^-)B̂(t')|le”m'⟩_f_f⟨ le”m'|Ĉ(1_m'^+ +1_m'^-)|sem⟩_f
= ∑_m'=m_0^m-1∑_le”∑_k_1e_1,k_2e_2_f⟨ re'm|k_1e_1m'⟩_f_f⟨ k_1e_1m'|B̂(t')|le”m'⟩_f_f⟨ le”m'|Ĉ|k_2e_2m'⟩_f_f⟨ k_2e_2m'|sem⟩_f.
Substituting (<ref>) into (<ref>) and using the NRG approximation, we have
_f⟨ re'm|B̂(t')Ĉ|sem⟩_f= ∑_qe^i(E^m_r-E^m_q)t'B^m_rqδ_ee'C^m_qs+
+ ∑_m'=m_0^m-1∑_le”∑_k_1k_2_f⟨ re'm|k_1e”m'⟩_f B^m'_k_1le^i(E^m'_k_1-E^m'_l)t' C^m'_lk_2_f⟨ k_2e”m'|sem⟩_f.
Substituting _f⟨
re'm|k_1e”m'⟩_f=δ_e'_me”_m[A^α_m†_XK...A^α_m'+1†_KK]_rk_1
and _f⟨
k_2e”m'|sem⟩_f=δ_e”_me_m[A^α_m'+1_KK...A^α_m_KX']_k_2s
into Eq. (<ref>), results in the following expression for Eq. (<ref>)
I_1(t+t',t)= -i∑_m=m_0^N∑_rs^∉ KK'e^-i(E^m_s-E^m_r)t∑_e_f⟨ sem|ρ|rem⟩_f×{∑_qe^i(E^m_r-E^m_q)t'B^m_rqC^m_qs
+∑_m'=m_0^m-1∑_lk_1k_2∑_α_m...α_m'+1[A^α_m†_XK...A^α_m'+1†_KK]_rk_1 B^m'_k_1le^i(E^m'_k_1-E^m'_l)t' C^m'_lk_2 [A^α_m'+1_KK...A^α_m_KX']_k_2s},
in which ρ^i→ f_s,r(m)=∑_e_f⟨ sem|ρ|rem⟩_f
is the projected full reduced density matrix known from
Ref. S_Nghiem2014a. Fourier transforming the above equation
with respect to t' gives
I_1(ω+iη,t)= ∑_m=m_0^N∑_rs^∉ KK'e^-i(E^m_s-E^m_r)tρ^i→ f_s,r(m)×{∑_q B^m_rqC^m_qs/(ω+E^m_r-E^m_q+iη)
+∑_m'=m_0^m-1∑_lk_1k_2∑_α_m...α_m'+1[A^α_m†_XK...A^α_m'+1†_KK]_rk_1 B^m'_k_1lC^m'_lk_2/(ω+E^m'_k_1-E^m'_l+iη) [A^α_m'+1_KK...A^α_m_KX']_k_2s},
with η a positive infinitesimal. Similarly, the second (CB) term of
the anticommutator in (<ref>) gives us
I_2(ω+iη,t)= ∑_m=m_0^N∑_rs^∉ KK'e^-i(E^m_s-E^m_r)tρ^i→ f_s,r(m)×{∑_q C^m_rqB^m_qs/(ω+E^m_q-E^m_s+iη)
+∑_m'=m_0^m-1∑_lk_1k_2∑_α_m...α_m'+1[A^α_m†_XK...A^α_m'+1†_KK]_rk_1 C^m'_k_1lB^m'_lk_2/(ω+E^m'_l-E^m'_k_2+iη) [A^α_m'+1_KK...A^α_m_KX']_k_2s}.
Hence, for positive times we obtain
G(ω,t)=I_1(ω,t)+I_2(ω,t).
In order to calculate the time-dependent spectral function from the above, one can
follow Anders <cit.> by defining the following
time-dependent density matrix
ρ̃^i→ f_k_2k_1(m',t)=∑_m=m'+1^N∑^∉ KK'_rs∑_α_m...α_m'+1[A^α_m'+1_KK...A^α_m_KX']_k_2se^-i(E^m_s-E^m_r)tρ^i→ f_s,r(m)[A^α_m†_XK...A^α_m'+1†_KK]_rk_1
=
0 if m'=N;
∑_α_m'+1{∑^∉ KK'_rsA^α_m'+1_k_2se^-i(E^m'+1_s-E^m'+1_r)tρ^i→ f_sr(m'+1)A^α_m'+1†_rk_1+∑_kk'A^α_m'+1_k_2kρ̃^i→ f_kk'(m'+1,t)A^α_m'+1†_k'k_1} otherwise.
Then we have the following expression for the Green's function
G(ω,t)= ∑_m=m_0^N∑_rs^∉ KK'e^-i(E^m_s-E^m_r)tρ^i→ f_s,r(m)×∑_q{B^m_rqC^m_qs/(ω+E^m_r-E^m_q+iη)+C^m_rqB^m_qs/(ω+E^m_q-E^m_s+iη)}
+∑_m'=m_0^N-1∑_k_1k_2ρ̃^i→ f_k_2k_1(m',t)∑_l{B^m'_k_1lC^m'_lk_2/(ω+E^m'_k_1-E^m'_l+iη)+C^m'_k_1lB^m'_lk_2/(ω+E^m'_l-E^m'_k_2+iη)},
from which the time-dependent spectral function can be
calculated. This expression of Anders <cit.>, generalized
here within the full density matrix approach, and hence valid at arbitrary
temperature <cit.>, requires, however, the time-dependent
reduced density matrix ρ̃^i→ f_k_2k_1(m',t) at each
point in time, and the latter is in turn obtained via the recursion relation
in Eq. (<ref>), resulting in a numerically highly time-consuming
calculation. This motivated us to develop an alternative and
numerically more tractable expression for the retarded two-time Green function, to which we now turn.
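Before doing so, the cost of this route can be made explicit with a schematic of the backward recursion Eq. (<ref>). In the toy layout below every shell is given the same dimension, kept[m] is a boolean mask of kept states, rho[m] carries the projected density matrix with its kept-kept block zeroed (the "not both kept" restriction), and ρ̃(m,t) is embedded in the full space of shell m, nonzero only on its kept-kept block; the whole recursion has to be rerun for every time t.

    import numpy as np

    def rho_tilde(t, E, A, rho, kept):
        # Backward recursion for rho~^{i->f}(m', t), starting from rho~(N, t) = 0.
        N = len(E) - 1
        rt = [None] * (N + 1)
        rt[N] = np.zeros_like(rho[N])
        for m in range(N - 1, -1, -1):
            # phase factors e^{-i (E_s - E_r) t} on shell m+1
            ph = np.exp(-1j * (E[m + 1][:, None] - E[m + 1][None, :]) * t)
            full = sum(Aa @ (ph * rho[m + 1] + rt[m + 1]) @ Aa.conj().T
                       for Aa in A[m + 1])       # sum over the local index alpha
            rt[m] = np.where(np.outer(kept[m], kept[m]), full, 0.0)
        return rt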
A different and numerically more feasible expression for
G(ω,t>0) than the expression above can be obtained if we go back to
Eq. (<ref>) and substitute this into Eq. (<ref>),
I_1(t+t',t)=- i∑_m=m_0^N∑_rs^∉ KK'∑_ee'_f⟨ sem|ρ̂(t)|re'm⟩_f×(∑_qe^i(E^m_r-E^m_q)t'B^m_rqδ_ee'C^m_qs+
+ ∑_m'=m_0^m-1∑_le”∑_k_1k_2_f⟨ re'm|k_1e”m'⟩_f B^m'_k_1le^i(E^m'_k_1-E^m'_l)t' C^m'_lk_2_f⟨ k_2e”m'|sem⟩_f)
.
The first term in this expression is simply
-i∑_m=m_0^N∑_rs^∉ KK'∑_e_f⟨ sem|ρ̂(t)|rem⟩_f∑_qe^i(E^m_r-E^m_q)t'B^m_rqC^m_qs,
and the second term can be rearranged as follows
-i∑_m=m_0^N∑_rs^∉ KK'∑_ee'_f⟨ sem|ρ̂(t)|re'm⟩_f∑_m'=m_0^m-1∑_le”∑_k_1k_2_f⟨ re'm|k_1e”m'⟩_f B^m'_k_1le^i(E^m'_k_1-E^m'_l)t' C^m'_lk_2_f⟨ k_2e”m'|sem⟩_f
= -i∑_m'=m_0^N-1∑_le”∑_k_1k_2∑_m=m'+1^N∑_rs^∉ KK'∑_ee'_f⟨ k_2e”m'|sem⟩_f_f⟨ sem|ρ̂(t)|re'm⟩_f_f⟨ re'm|k_1e”m'⟩_f B^m'_k_1le^i(E^m'_k_1-E^m'_l)t' C^m'_lk_2
= -i∑_m'=m_0^N-1∑_le”∑_k_1k_2∑_kk'∑_ee'_f⟨ k_2e”m'|kem'⟩_f_f⟨ kem'|ρ̂(t)|k'e'm'⟩_f_f⟨ k'e'm'|k_1e”m'⟩_f B^m'_k_1le^i(E^m'_k_1-E^m'_l)t' C^m'_lk_2
= -i∑_m'=m_0^N-1∑_le∑_kk'_f⟨ kem'|ρ̂(t)|k'em'⟩_f B^m'_k'le^i(E^m'_k'-E^m'_l)t'C^m'_lk.
Combining the above two terms, we have
I_1(t+t',t)= -i∑_m=m_0^N∑_rs^∉ KK'∑_e_f⟨ sem|ρ̂(t)|rem⟩_f∑_qe^i(E^m_r-E^m_q)t'B^m_rqC^m_qs
-i∑_m=m_0^N-1∑_kk'∑_e_f⟨ kem|ρ̂(t)|k'em⟩_f ∑_lB^m_k'le^i(E^m_k'-E^m_l)t' C^m_lk
= -i∑_m=m_0^N∑_rsq^∉ KK'K”∑_e_f⟨ sem|ρ̂(t)|rem⟩_f e^i(E^m_r-E^m_q)t'B^m_rqC^m_qs,
in which ∑_e_f⟨ sem|ρ̂(t)|rem⟩_f=
e^i(E^m_r-E^m_s)tρ^i→ f_sr(m) and all the other matrix
elements are known quantities. Together with the second term (CB) in the anticommutator,
I_2(t+t',t)= -i∑_m=m_0^N∑_rsq^∉ KK'K”∑_e_f⟨ sem|ρ̂(t)|rem⟩_f C^m_rqe^i(E^m_q-E^m_s)t'B^m_qs,
we have the retarded Green's function as follows
G(t+t',t)
= -i∑_m=m_0^N∑_rsq^∉ KK'K”∑_e_f⟨ sem|ρ̂(t)|rem⟩_f (e^i(E^m_r-E^m_q)t'B^m_rqC^m_qs+C^m_rqe^i(E^m_q-E^m_s)t'B^m_qs).
This expression is useful for the non-equilibrium DMFT, which
requires the dynamical fields expressed in terms of two-time Green
functions <cit.>. Fourier
transforming with respect to the time difference t' gives
G(ω+iη,t)
= ∑_m=m_0^N∑_rsq^∉ KK'K”∑_e_f⟨ sem|ρ̂(t)|rem⟩_f {B^m_rqC^m_qs/(ω+E^m_r-E^m_q+iη)+C^m_rqB^m_qs/(ω+E^m_q-E^m_s+iη)},
which together with ∑_e_f⟨
sem|ρ̂(t)|rem⟩_f=ρ^i→ f_sr(m)e^-i(E_s^m-E_r^m)t
results in a time-dependent spectral function A(ω,t)=- Im[G(ω,t)]/π that can be evaluated at all
times in a numerically highly efficient manner: only the time-independent
projected density matrix ρ^i→ f_sr(m) together with the
NRG excitations and matrix elements are required to evaluate A(ω,t) at all positive times. While the
above expression looks deceptively similar to the first term in Anders'
expression for the positive time Green function in
Eq. (<ref>), this is not the case [note the
different sum ∑_rsq^∉ KK'K” in
Eq. (<ref>) as compared to the sum
∑_rs^∉ KK'…×∑_q∈ (K”D) in the first term of
Eq. (<ref>)]. It includes also
the second term in Eq. (<ref>), involving ρ̃^i→
f_k_2k_1(m',t), but within a different approximation that follows from
the second line of Eq. (<ref>). In this way, the
recursive evaluation of a reduced density matrix depending explicitly on time is
circumvented in our expression, making it numerically tractable. A
similar expression to Eq. (<ref>) has been derived
for initial state density matrices corresponding to either pure states, such as
the ground state, or to decoupled initial states
(Γ=0 in the Anderson model) with or without excitations in the
bath and used to study thermalization in the Anderson model following
such an initial state preparation <cit.>.
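As a practical consistency check on such an implementation, the spectral sum rule quoted in the overview can be monitored by direct quadrature; the sketch below reuses the schematic spectral_function given earlier, on an illustrative frequency grid that is dense around ω=0.

    import numpy as np

    def sum_rule(t, E, B, C, rho, eta=1e-3):
        # integrate A(omega, t); the total spectral weight should come out
        # close to 1 (the text reports agreement to within about 1%)
        omegas = np.concatenate([-np.logspace(0, -8, 400),
                                 np.logspace(-8, 0, 400)])
        spec = np.array([spectral_function(w, t, E, B, C, rho, eta)
                         for w in omegas])
        return np.trapz(spec, omegas)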
As a check on Eq. (<ref>), we can verify that it
reduces to the equilibrium retarded Green function in the case that
H^f=H^i (vanishing quench size). In this limit, using the definition of the full density matrix of the initial
state<cit.>,
ρ̂ ≡∑_l'e'm'|l'e'm'⟩_i (w_m'e^-β E^m'_l'/Z̃_m')_i⟨ l'e'm'|,
and of the reduced full density matrix R^m_red(k,k') in
Refs. S_Costi2010,S_Nghiem2014a, we have
∑_e_f⟨
sem|ρ̂(t)|rem⟩_f =∑_e∑_l'e'm'_f⟨
sem|e^-iH^ft|l'e'm'⟩_iw_m'e^-β E^m'_l'/Z̃_m'_i⟨ l'e'm'|e^iH^ft|rem⟩_f
=∑_l'(w_me^-β E^m_l'/Z_m)δ_sl'δ_l'r+∑_kk'R^m_red(k,k')δ_skδ_k'r.
Substituting this expression into Eq. (<ref>) for
G(ω,t), we obtain the following time-independent expression
G(ω)= ∑_m=m_0^N∑_lq(w_me^-β E^m_l/Z_m)×(B^m_lqC^m_ql/(ω+E^m_l-E^m_q+iη)+C^m_lqB^m_ql/(ω+E^m_q-E^m_l+iη))
+ ∑_m=m_0^N-1∑_lkk' R^m_red(k,k')×(B^m_k'lC^m_lk/(ω+E^m_k'-E^m_l+iη)+C^m_k'lB^m_lk/(ω+E^m_l-E^m_k+iη)),
which is identical to the expression in the equilibrium case <cit.>.
It is interesting to compare the spectral function obtained from our
Eq. (<ref>) with that of Anders obtained from Eq. (<ref>).
For this purpose, we consider quench (A) of the main text and three
representative times: t=0, t=1/T_ K, and t→ +∞.
From Figs. <ref>(a) and <ref>(c) we see that both
expressions give the same results at t=0 and t→ +∞, while
at finite times small differences arise for |ω|≲ T_
K [see Fig. <ref>(b)]. As we discussed in connection
with Eq. (<ref>) above, the main advantage of our new
expression, Eq. (<ref>), is that it can be evaluated numerically
very efficiently. By contrast, to date, no results at finite times have
been published using Eq. (<ref>) and only results
for infinite times have been published <cit.>.
§.§ Negative time-dependence (t<0)
At negative times t<0, we can have t+t' before or after the quench. When t+t'<0 or t<-t', we have
G_BC(t+t',t) =-iθ(t')Tr{ρ̂[B̂(t+t'),Ĉ(t)]_s}
=-iθ(t')Tr{ρ̂[e^iH_i(t+t')B e^-iH_i(t+t'),e^iH_itC e^-iH_it]_s}
=-iθ(t')Tr{e^-iH_itρ̂e^iH_it[e^iH_it'B e^-iH_it',C ]_s}
=-iθ(t')Tr{ρ̂[e^iH_it'B e^-iH_it',Ĉ]_s},
which is t-independent, and just corresponds to the equilibrium propagator of the
initial state Hamiltonian (as long as t<-t'). While for t+t'>0 or t>-t', we have
G_BC(t+t',t) =-iθ(t')Tr{ρ̂[B̂(t+t'),Ĉ(t)]_s}
=-iθ(t')Tr{ρ̂[e^iH_f(t+t')B e^-iH_f(t+t'),e^iH_itC e^-iH_it]_s}
=-iθ(t')Tr{e^-iH_itρ̂e^iH_it[e^-iH_ite^iH_f(t+t')B e^-iH_f(t+t')e^iH_it,C ]_s}
=-iθ(t')Tr{ρ̂[e^-iH_ite^iH_f(t+t')B e^-iH_f(t+t')e^iH_it,Ĉ]_s}.
In general, then, we have for the retarded Green function at t<0
G_BC(t+t',t)=
-iθ(t')Tr{ρ̂[e^iH_it'B e^-iH_it',Ĉ]_s} if t+t'<0;
-iθ(t')Tr{ρ̂[e^-iH_ite^iH_f(t+t')B e^-iH_f(t+t')e^iH_it,Ĉ]_s} if t+t'≥ 0,
For the first part (BC term) of the anticommutator at t+t'<0 we have
I^-_1(t+t',t) =-iTr{e^iH_it'B̂ e^-iH_it'Ĉρ̂}
=-i∑_lem_i⟨ lem| e^iH_it'B̂ e^-iH_it'Ĉρ̂|lem⟩_i
=-i∑_rsem^∉ KK'_i⟨ rem| e^iH_it'B̂ e^-iH_it'|sem⟩_i_i⟨ sem|Ĉρ̂|rem⟩_i_[Cρ]^m_sr
=-i∑_rsem^∉ KK' e^i(E^m_r-E^m_s)t'B^m_rs[Cρ]^m_sr.
Next, we consider ∑_e[Cρ]^m_sr, which with
Eq. (<ref>) becomes
∑_e[Cρ]^m_sr =∑_e_i⟨ sem|Ĉρ̂|rem⟩_i=∑_e∑_l_1e_1m_1_i⟨ sem|Ĉ|l_1e_1m_1⟩_i e^-β E^m_l_1/Z̃_m_1w_m_1_i⟨ l_1e_1m_1|rem⟩_i.
In this sum, only the terms with m_1≥ m are nonvanishing, and we obtain
∑_e[Cρ]^m_sr =∑_lC^m_sle^-β E^m_l/Z_mw_mδ_lr+∑_kk'C^m_skR^m_kk'δ_k'r=∑_qC^m_sqR̃^m_qr,
with R_kk'^m, R̃^m_qr as in
Ref. S_Nghiem2014a. Substituting the above into Eq. (<ref>) we obtain
I^-_1(t+t',t)=-i∑_rsm^∉ KK' e^i(E^m_r-E^m_s)t'B^m_rs∑_qC^m_sqR̃^m_qr.
Similarly, the first part (BC term) of the anticommutator for t+t'>0 is
given by
I^+_1(t+t',t)=-iTr{e^iH_f(t+t')B̂ e^-iH_f(t+t')e^iH_itĈρ̂e^-iH_it}
=-i∑_lem_i⟨ lem| e^iH_f(t+t')B̂ e^-iH_f(t+t')e^iH_itĈρ̂e^-iH_it|lem⟩_i
=-i∑_rsem^∉ KK'_i⟨ rem|e^iH_f(t+t')B̂ e^-iH_f(t+t')|sem⟩_i_i⟨ sem|e^iH_itĈρ̂e^-iH_it|rem⟩_i_[Cρ(t)]^m_sr
=-i∑_rsem^∉ KK'∑_r_1s_1e_1m_1^∉ KK'_i⟨ rem|r_1e_1m_1⟩_f_f⟨ r_1e_1m_1|e^iH_f(t+t')B̂ e^-iH_f(t+t')|s_1e_1m_1⟩_f_f⟨ s_1e_1m_1|sem⟩_i[Cρ(t)]^m_sr,
where Cρ(t)≡ e^iH_itCρ e^-iH_it, i.e., the time-evolution
operators act on the composite operator Cρ as a whole.
We decompose the above sum into three parts corresponding to m_1>m, m_1=m, and m_1<m, then simplify them as follows
-i∑_rsem^∉ KK'∑_kk'_i⟨ rem|kem⟩_f_f⟨ kem|e^iH_f(t+t')B̂ e^-iH_f(t+t')|k'em⟩_f_f⟨ k'em|sem⟩_i[Cρ(t)]^m_sr
-i∑_rsem^∉ KK'∑_r_1s_1^∉ KK'_i⟨ rem|r_1em⟩_f_f⟨ r_1em|e^iH_f(t+t')B̂ e^-iH_f(t+t')|s_1em⟩_f_f⟨ s_1em|sem⟩_i[Cρ(t)]^m_sr
-i∑_r_1s_1e_1m_1^∉ KK'∑_kk'_i⟨ k'e_1m_1|r_1e_1m_1⟩_f_f⟨ r_1e_1m_1|e^iH_f(t+t')B̂ e^-iH_f(t+t')|s_1e_1m_1⟩_f_f⟨ s_1e_1m_1|ke_1m_1⟩_i[Cρ(t)]^m_1_kk'
= -i∑_em∑_rsr_1s_1^∉ KK'K_1K'_1_i⟨ rem|r_1em⟩_f_f⟨ r_1em|e^iH_f(t+t')B̂ e^-iH_f(t+t')|s_1em⟩_f_f⟨ s_1em|sem⟩_i[Cρ(t)]^m_sr
= -i∑_m∑_rsr_1s_1^∉ KK'K_1K'_1 S^m_rr_1e^i(E^m_r_1-E^m_s_1)(t+t')B^m_r_1s_1 S^m_s_1se^i(E^m_s-E^m_r)t∑_e[Cρ]^m_sr.
Substituting ∑_e[Cρ]^m_sr into the above equation, we have
I^+_1(t+t',t)= -i∑_mrsr_1s_1^∉ KK'K_1K'_1 S^m_rr_1e^i(E^m_r_1-E^m_s_1)(t+t')B^m_r_1s_1 S^m_s_1se^i(E^m_s-E^m_r)t∑_qC^m_sqR̃^m_qr.
The second parts (CB terms) of the anticommutators
at t+t'<0 and t+t'>0, denoted by
I^-_2(t+t',t) and I^+_2(t+t',t), respectively, can be derived in a similar
way; combining all terms gives the expression for the two-time
Green function G(t'+t,t) at t<0. Fourier transforming with
respect to the time difference t' results in the following
expression for the spectral function at t<0
G (ω,t)=∫_-∞^∞dt' e^i(ω+iη) t' G(t+t',t)
= ∫_0^-tdt' e^i(ω+iη) t' (I^-_1(t+t',t)+I^-_2(t+t',t))+∫_-t^∞dt' e^i(ω+iη) t' (I^+_1(t+t',t)+I^+_2(t+t',t))
= -i∑_m[∫_0^-tdt' e^i(ω+iη) t'∑_rs^∉ KK' e^i(E^m_r-E^m_s)t'B^m_rs
+∫_-t^∞dt' e^i(ω+iη) t'∑_rsr_1s_1^∉ KK'K_1K'_1 S^m_rr_1e^i(E^m_r_1-E^m_s_1)(t+t')B^m_r_1s_1 S^m_s_1se^i(E^m_s-E^m_r)t]×∑_q(C^m_sqR̃^m_qr+R̃^m_sqC^m_qr)
= ∑_m[∑_rs^∉ KK'B^m_rs/ω+E^m_r-E^m_s+iη(1-e^-i(ω+E^m_r-E^m_s+iη)t)
+∑_rsr_1s_1^∉ KK'K_1K'_1 S^m_rr_1B^m_r_1s_1/ω+E^m_r_1-E^m_s_1+iη S^m_s_1se^-i(ω+E^m_r-E^m_s+iη)t]×∑_q(C^m_sqR̃^m_qr+R̃^m_sqC^m_qr).
From this expression, we easily see that G(ω,t→ -∞)
recovers the initial state Green function. While the starting
definition implies that G(ω,t→ 0^-) equals
G(ω,t→ 0^+), in the approximate expressions above this is no
longer guaranteed, as a consequence of the NRG approximation
and the different derivations for t→ 0^±. Figures <ref>(a)-<ref>(b) show the spectral functions
at t→ 0^+ and t→ 0^- and quantify the size of the
discontinuity at t=0. While the spectral functions for t→
0^± match to high accuracy at high frequencies |ω|≫ T_
K for both quench (A) [Fig. <ref>(a)] and quench
(B) [Fig. <ref>(b)] of the main text, there is a
mismatch at low frequencies.
In Fig. <ref> we show the negative time spectral
function for quench (A) of the main text using logarithmic axes for both time t (in units
of the initial state Kondo scale T_ K^i=1.2× 10^-3T_ K) and frequency,
ω (in units of the final state Kondo scale T_ K). The data is the same as the
positive frequency data in Fig. 3(a) of the main text, but the use of
a logarithmic frequency axis now makes clearer the
statement made there concerning the structure around the Fermi level,
that “this structure, of width
T_ K^i≪ T_ K at t→ -∞ and satisfying the Friedel
sum rule πΓ A(ω=0,t→-∞)=1, gradually broadens
and acquires a width of T_ K at short negative times ...”.
In addition, Fig. <ref> shows that the low energy structure on a
scale T_ K at short negative times is formed by drawing
spectral weight from both the initial state Kondo resonance (diagonal stripes
at t≳ -1/T_ K^i), and also from the satellite peaks
(starting at t≳ -1/Γ). As for
short positive times [Fig. 1(a) of the main text], the
“preformed” Kondo resonance
at short negative times (t→ 0^-) is seen to have missing states
in the vicinity (ω≪ T_ K) of the Fermi level.
Finally, notice that the relevant time scale for the
“devolution” of the initial state Kondo resonance at t=-∞
is 1/T_ K^i=1.2× 10^-3T_ K.
§.§ Numerical evaluation of A(ω,t)
§.§.§ Positive time spectral function
We first consider the evaluation of the spectral function for positive
times. From Eq. (<ref>), we have for
A(ω,t)=-1/π ImG(ω+iη,t)
A(ω,t)
= ∑_m=m_0^N∑_rsq^∉ KK'K”ρ^i→
f_sr(m) cos(E_sr^mt)
{δ(ω-E_qr^m)B^m_rqC^m_qs+δ(ω-E_sq^m)C^m_rqB^m_qs},
+ 1/π∑_m=m_0^N P.V.∑_rsq^∉ KK'K”ρ^i→
f_sr(m) sin(E_sr^mt) [B^m_rqC^m_qs/(ω-E_qr^m)+C^m_rqB^m_qs/(ω-E_sq^m)],
where E_sr=E_s-E_r. The first contribution,
E(ω,t)
= ∑_m=m_0^N∑_rsq^∉ KK'K”ρ^i→
f_sr(m) cos(E_sr^mt)
{δ(ω-E_qr^m)B^m_rqC^m_qs+δ(ω-E_sq^m)C^m_rqB^m_qs},
is evaluated in the usual way by replacing δ(ω-E) by the logarithmic Gaussian
e^-b^2/4/(√(π) b|E|) e^-(ln(|ω/E|)/b)^2 and
summing over the excitations E <cit.>.
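As a concrete illustration of this broadening step (not part of the original derivation), the minimal sketch below in Python/NumPy replaces each δ(ω-E) by the logarithmic Gaussian given above; the excitation energies and weights are made-up placeholders for the actual NRG data:

import numpy as np

def log_gaussian(omega, E, b=0.6):
    # Logarithmic-Gaussian replacement for delta(omega - E); nonzero only
    # where omega has the same sign as E, and the e^{-b^2/4} prefactor
    # normalizes the kernel to unit weight in omega.
    out = np.zeros_like(omega, dtype=float)
    mask = np.sign(omega) == np.sign(E)
    om = np.abs(omega[mask])
    out[mask] = (np.exp(-b**2 / 4.0) / (b * abs(E) * np.sqrt(np.pi))
                 * np.exp(-(np.log(om / abs(E)) / b)**2))
    return out

# Hypothetical excitations E_q and weights w_q (illustrative numbers only)
E_q = [-0.7, -1.0e-3, 2.0e-3, 0.5]
w_q = [0.2, 0.2, 0.3, 0.3]

omega = np.concatenate([-np.logspace(0, -6, 300), np.logspace(-6, 0, 300)])
E_broadened = sum(w * log_gaussian(omega, E) for w, E in zip(w_q, E_q))

In an actual calculation the weights would be the products ρ^i→f_sr(m) cos(E_sr^m t) B^m_rq C^m_qs, accumulated shell by shell.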
In order to evaluate the second
contribution above, we define the auxiliary function F”(ω,t) via
F”(ω,t)
= ∑_m=m_0^N∑_rsq^∉ KK'K”ρ^i→
f_sr(m) sin(E_sr^mt)
{δ(ω-E_qr^m)B^m_rqC^m_qs+δ(ω-E_sq^m)C^m_rqB^m_qs},
and evaluate this in the usual way. Taking its principal value
integral then gives the second contribution to the spectral function:
F'(ω,t)
= -1/π
P.V.∫ dω'F”(ω',t)/ω-ω'
= -1/π∑_m=m_0^N P.V.∑_rsq^∉ KK'K”ρ^i→
f_sr(m) sin(E_sr^mt) [B^m_rqC^m_qs/(ω-E_qr^m)+C^m_rqB^m_qs/(ω-E_sq^m)].
To summarize,
A(ω,t) = E(ω,t) - F'(ω,t)
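Numerically, F'(ω,t) is obtained from the broadened F''(ω,t) by a discrete principal-value (Kramers-Kronig) transform; a minimal sketch, assuming F'' has already been tabulated on a (possibly non-uniform) frequency grid, is:

import numpy as np

def hilbert_pv(omega, F2):
    # F'(w) = -(1/pi) P.V. int dw' F''(w') / (w - w'), with the principal
    # value implemented by simply omitting the singular grid point w' = w.
    F1 = np.zeros_like(F2)
    dw = np.gradient(omega)              # local grid spacings
    idx = np.arange(len(omega))
    for i, w in enumerate(omega):
        mask = idx != i
        F1[i] = -np.sum(F2[mask] * dw[mask] / (w - omega[mask])) / np.pi
    return F1

More refined principal-value quadratures exist, but this already suffices to reproduce the second contribution to A(ω,t) above.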
§.§.§ Negative time spectral function
We now consider the evaluation of A(ω,t) for negative
times starting from the expression for the retarded Green function in
Eq. (<ref>). This is a sum of two terms
G(ω+iη,t)=G_1(ω+iη,t)+G_2(ω+iη,t), with
G_1 (ω+iη,t)=∑_m∑_rs^∉ KK'B^m_rs/ω-E^m_sr+iη(1-e^-i(ω-E^m_sr+iη)t){CR̃}_sr^m,
G_2 (ω+iη,t)=∑_m∑_rsr_1s_1^∉ KK'K_1K'_1 S^m_rr_1B^m_r_1s_1/ω-E^m_s_1r_1+iη S^m_s_1se^-i(ω-E^m_sr+iη)t{CR̃}_sr^m,
where
{CR̃}_sr^m≡∑_q(C^m_sqR̃^m_qr+R̃^m_sqC^m_qr).
Correspondingly, the spectral function is also written as a sum of two
parts A(ω,t)=A_1(ω,t)+A_2(ω,t).
Consider first A_1(ω,t)=-1/π Im G_1(ω,t). For finite 0>t>-∞, G_1(ω+iη,t) is
regular, having no poles on the real axis, so A_1(ω,t) can be evaluated directly:
A_1 (ω,t)=∑_m∑_rs^∉ KK'η/π/(ω-E^m_sr)^2+η^2{1-e^η
tcos[(ω-E^m_sr)t]} B^m_rs{CR̃}_sr^m
-1/π∑_m∑_rs^∉ KK'(ω-E^m_sr)/(ω-E^m_sr)^2+η^2e^η tsin[(ω-E^m_sr)t] B^m_rs{CR̃}_sr^m,
with a finite η=b|E_sr^m| and b≥ 1/N_z, where N_z is
the number of bath realizations in the z averaging procedure.
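The following sketch (Python/NumPy; the weight argument stands for the product B^m_rs{CR̃}_sr^m, which would come from the NRG data) evaluates one term of A_1(ω,t) exactly as written above:

import numpy as np

def A1_term(omega, E_sr, t, weight, b=0.6, eta_floor=1e-12):
    # One (r, s, m) contribution to A_1(omega, t) for t < 0; eta = b*|E_sr|,
    # with a small floor regularizing the diagonal (E_sr = 0) excitations.
    eta = max(b * abs(E_sr), eta_floor)
    x = omega - E_sr
    lorentz = (eta / np.pi) / (x**2 + eta**2)
    term1 = lorentz * (1.0 - np.exp(eta * t) * np.cos(x * t))
    term2 = (x / np.pi) / (x**2 + eta**2) * np.exp(eta * t) * np.sin(x * t)
    return weight * (term1 - term2)

Since t<0, the factors e^η t decay with increasing |t|, so A_1 smoothly approaches the Lorentzian-broadened initial-state spectrum as t→ -∞, consistent with the discussion above.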
For consistency, we also evaluate G_2(ω,t) in Eq. (<ref>)
with the same Lorentzian broadening and thereby obtain
A_2(ω,t)=-1/π Im G_2(ω,t) and thus
A(ω,t)=A_1(ω,t)+A_2(ω,t).
§.§ Spectral sum rule
The spectral weight sum rule ∫_-∞^+∞ dω A(ω,t)=1
with A(ω,t)=-1/π Im G(ω,t) is exactly satisfied for all positive times within our expression
Eq. (<ref>) and the same holds true for Eq. (<ref>) (see Ref. S_Anders2008b).
Here, we prove that it is exactly satisfied also for all negative
times. With G(ω,t) defined by Eq. (<ref>), we need to evaluate the following terms
I_1(t) =-1/π∫^+∞_-∞ Im[1-e^-i(ω+E^m_r-E^m_s+iη)t/ω+E^m_r-E^m_s+iη]dω,
I_2(t) =-1/π∫^+∞_-∞ Im[e^-i(ω+E^m_r-E^m_s+iη)t/ω+E^m_r_1-E^m_s_1+iη]dω,
with η a positive infinitesimal. Defining E^m_r-E^m_s=E^m_rs and E^m_r_1-E^m_s_1=E^m_r_1s_1, we have
I_1(t) =1/π∫^+∞_-∞η/(ω+E^m_rs)^2+η ^2dω
-1/π∫^+∞_-∞e^η tηcos[(ω+E^m_rs)t]/(ω+ E^m_rs)^2+η ^2dω
-1/π∫^+∞_-∞e^η t(ω+ E^m_rs) sin[(ω+ E^m_rs)t]/(ω+ E^m_rs)^2+η ^2dω
= 1/π×η×π/η - 1/π× e^η tη×π/η e^η t - 1/π× e^η t× (-π e^η t)
= 1 - e^2η t + e^2η t =1,
I_2(t) =1/π∫^+∞_-∞e^η tηcos[(ω+ E^m_rs)t]/(ω+ E^m_r_1s_1)^2+η ^2dω
+1/π∫^+∞_-∞e^η t(ω+ E^m_rs) sin[(ω+ E^m_rs)t]/(ω+ E^m_r_1s_1)^2+η ^2dω
=1/π∫^+∞_-∞e^η tηcos[(ω+ E^m_r_1s_1)t]cos[( E^m_rs- E^m_r_1s_1)t]/(ω+ E^m_r_1s_1)^2+η ^2dω+1/π∫^+∞_-∞e^η t(ω+ E^m_rs) sin[(ω+ E^m_r_1s_1)t]cos[( E^m_rs- E^m_r_1s_1)t]/(ω+ E^m_r_1s_1)^2+η ^2dω
= 1/π× e^η tη×π/η e^η tcos[( E^m_rs- E^m_r_1s_1)t] + 1/π× e^η t× (-π e^η t)cos[( E^m_rs- E^m_r_1s_1)t]
= e^2η tcos[( E^m_rs- E^m_r_1s_1)t] - e^2η tcos[( E^m_rs- E^m_r_1s_1)t]=0,
where use was made of
∫_-∞^+∞ dx cos(tx)/(x^2+a^2)=π e^ta/a and
∫_-∞^+∞ dx x sin(tx)/(x^2+a^2)=-π e^ta, valid for t<0 and a>0.
Using Eqs. (<ref>-<ref>) and Eq. (<ref>), we have
-1/π∫_-∞^+∞ dω Im[G(ω,t)]=∑_m∑_rs^∉ KK'B^m_rs∑_q(C^m_sqR̃^m_qr+R̃^m_sqC^m_qr)=1.
Therefore the sum rule is proved at t<0. The sum rule also holds for
t=0^-, as can be seen by noting that the last integrals
contributing to I_1(t) and I_2(t) return 0 in this case.
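The contour results above can also be checked by direct quadrature for finite η; a small sketch using SciPy (parameter values are arbitrary) is:

import numpy as np
from scipy.integrate import quad

def I1_integrand(omega, t, E_rs=0.3, eta=0.05):
    # Integrand of I_1(t) with z = omega + E_rs + i*eta
    z = omega + E_rs + 1j * eta
    return -np.imag((1.0 - np.exp(-1j * z * t)) / z) / np.pi

t = -2.0  # any t < 0
val, err = quad(I1_integrand, -500.0, 500.0, args=(t,), limit=5000)
print(val)  # -> 1, up to the truncation of the infinite integration range

The oscillatory tail decays only as 1/ω, so the subdivision limit is raised and the window kept large; the result is independent of E_rs, η and t<0, as the contour evaluation shows, and an analogous check gives I_2(t)≈ 0.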
The numerically evaluated negative-time spectral function also satisfies this sum rule to within ≈ 1% for all quench
protocols and all negative times, as shown in
Fig. <ref>(a). In Fig. <ref>(b) we show
w_-(t)=∫_-∞^+∞dω F(ω,t) with
F(ω,t)=A(ω,t) if A(ω,t)<0 and F(ω,t)=0 if
A(ω,t)>0, i.e., the contribution to the total weight coming from
regions of negative spectral density. Regions of negative spectral
weight appear for transient times in many other systems
<cit.>, while in steady state limits
t→±∞ the spectral function is positive definite within canonical
density matrix approaches. We see that both w_-(t) and
the error in the sum rule are both largest in the time region
tΓ≳ -1 where the major part of the spectral weight
(located in the satellite peaks) is being rearranged
from ω=ε_i, ε_i+U_i to ω=ε_f,
ε_f + U_f. The maximum size of w_-(t) correlates with the quench size
Δϵ_d=|ϵ_f-ϵ_i|, and reaches up to 15%
for the largest quench (A) in Fig. <ref>(b).
§.§ Friedel sum rule, thermalization and discretization effects
The TDNRG replaces the continuum conduction electron bath
H=∑_kσϵ_kc^†_kσc_kσ in the
Anderson model by a
logarithmically discretized bath ϵ_k→±Λ^-n whose tight binding representation in
energy space (the so called Wilson chain) is given by
H=∑_n=0,σ^N t_n(f_nσ^†f_n+1σ+H.c.)
with t_n≈Λ^-(n-1)/2 for n≫ 1 and where
Λ>1 is the discretization parameter. The continuum limit
corresponds to Λ→ 1^+, which is not possible to
take numerically within the iterative NRG diagonalization scheme due
to the increasingly slow convergence in this limit. The effect of using such a
discrete Wilson chain on the time evolution of physical quantities
in response to a quench is twofold: (i), incomplete thermalization at
long times after the quench due to the fact that a Wilson chain (even
in the limit N→∞) cannot act as a proper heat bath <cit.> (see also
Refs. S_Anders2006,S_Nghiem2014a,S_Nghiem2014b,S_Weymann2015)
and, (ii), additional real features appear in the time evolution,
due to the logarithmic discretization of the bath. We address these
two effects in turn.
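Before doing so, we note for orientation that the exponential decay of the Wilson-chain hoppings is trivial to generate; a minimal sketch (ignoring the exact Λ-dependent prefactors of the first few t_n) is:

import numpy as np

Lambda, N = 2.0, 30
n = np.arange(1, N + 1)
t_n = Lambda ** (-(n - 1) / 2.0)   # t_n ~ Lambda^{-(n-1)/2} for n >> 1
# Each successive site adds an exponentially smaller energy scale, which is
# what makes the chain an excellent RG discretization but a poor heat bath.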
Incomplete thermalization in the long time limit is reflected in deviations of observables from
their expected values in the final state. These deviations can be
reduced by decreasing Λ, as shown in
Ref. S_Nghiem2014a for the case of the local occupation
n_d(t) in the Anderson model, where n_d(t→+∞) was found
to approach its expected value more closely upon decreasing Λ towards 1. In the present
context of spectral functions, incomplete thermalization is reflected
in a ≈ 15% deviation of
πΓ A(ω=0,t→∞) from the continuum result
sin^2(π n_d/2), where n_d is the occupation number in the final state.
However, this deviation is not an error of the TDNRG but is expected
because of (i). The Friedel sum rule (FSR) only holds for equilibrium states.
In the infinite past, where thermalization issues play no role, and
one achieves the equilibrium state at t=-∞, the FSR is satisfied
to within 1-3% for all quenches
studied, which is comparable to the accuracy achievable in equilibrium NRG
calculations <cit.> (the spectra for large negative t in
Fig. 3(c) and Fig. 4(c) of the main text). This demonstrates that the 15%
deviation in the value of πΓ A(ω=0,t→∞) is not
an error of the TDNRG but is the correct result for the Wilson chain used.
We note, furthermore, that other approaches to time dependent spectral densities,
such as the non-crossing approximation <cit.> suffer,
even for equilibrium spectral densities, from much larger errors in
the Friedel sum rule (see Table I in Ref. S_Costi1996b),
and still other methods <cit.> can result in errors
which exceed 50% despite the use of a continuum bath. In the
latter approaches the deviations from the Friedel sum rule represent
real errors in the underlying methods, whereas in the TDNRG, the
deviation observed is that expected from using a logarithmically discretized chain.
The second effect of using a Wilson chain is that additional
features appear in the time evolution of physical observables that
would be absent for a continuum bath. Examples are the small oscillations
seen in Figs. 1 and 2 of the main text at low energies |ω|<T_ K.
Physically, these oscillations, or "substructures", result from the highly
nonequilibrium situation created by the quench: following the quench,
the local change in energy has to be transported by electrons
propagating outwards in the process of thermalization. These electrons are
reflected off the different sites (n=0,1,...,N) of the inhomogeneous
Wilson chain (with hoppings t_n∼Λ^-(n-1)/2) arriving
back at the impurity site where they interfere at specific times to give
additional features in the time evolution of local quantities (such as
in A(ω,t)). This was originally explained in great detail by
Eidelstein et al. in Ref. S_Eidelstein2012 for observables
like the occupation number n_d(t). These authors also showed a
comparison between n_d(t) using TDNRG and n_d(t) using exact
diagonalization for a noninteracting resonant level model, both
calculated using the same Wilson chain. The additional features in the time
evolution of n_d(t), not present in the continuum model, were
identified as real effects and not as being due to errors or
unphysical features of the TDNRG method. Thus, the features seen
in Figs.1 and 2 of the main text at low energies |ω|<T_ K
have their origin in the logarithmically discretized bath.
In the remote past, before the quench is able to act, such features are expected to
be absent. In support of this interpretation, we therefore show on a
logarithmic scale the spectral function in the distant past
A(ω,t=-∞) in Figs. <ref>(a)-<ref>(b), which
indeed show the absence of all “substructures”. This is to be
compared with the presence of such “substructures” at finite and long
positive times [Figs. 1(b)-1(d) and Fig. 2(c)-2(e) in the main text].
Finally, we mention that since the limit Λ→ 1^+ is not
feasible within NRG, a simpler approach that is useful to obtain
results closer to those of the continuum limit is the averaging
of time-dependent quantities over N_z≫ 1 realizations of the bath
<cit.>.
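A minimal sketch of such z averaging (with solve_A a placeholder for a complete TDNRG calculation at discretization twist z) is:

import numpy as np

def z_averaged_A(solve_A, omega, t, N_z=8):
    # Average A(omega, t) over N_z interleaved discretizations z in (0, 1];
    # discretization-induced oscillations largely cancel in the average.
    zs = (np.arange(N_z) + 1.0) / N_z
    return np.mean([solve_A(omega, t, z) for z in zs], axis=0)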
§ ADDITIONAL RESULTS
Our derivation of the non-equilibrium two-time retarded Green
function at both positive and negative times can be extended to
the other commonly used Green functions in many-body
theory, including lesser (and greater) Green functions, and applies
for arbitrary quenches. In the next
subsections, we show the results for the cases of the reverse of
quench (B) in the main text, i.e. quenching from a symmetric Kondo
into a mixed valence regime, a finite temperature quench as in
Nordlander et al. <cit.>, and a hybridization quench in which
the coupling Γ is switched on at time t=0. We also
show results for the lesser Green function, which together with
the retarded Green function constitute the basic ingredients for many
applications, e.g., to transient and non-equilibrium transport through correlated
quantum dots <cit.>. In
Sec. <ref>, we show the explicit two-time
dependence of the retarded Green function, a basic ingredient in
nonequilibrium DMFT applications <cit.>.
§.§ Symmetric Kondo to mixed valence [reverse of quench (B)]
In this subsection, we show the time-dependent spectral functions
for the case of the reverse of quench (B) in the main text, i.e., from
the symmetric Kondo to the mixed valence regime.
We see in Fig. <ref> how the spectral function evolves
from the spectral function of the initial state at t→ -∞
[Fig. <ref>(c) for tT_ K=-10^3 (tΓ=-10^4)],
with the Friedel sum rule satisfied to within a few % in this
limit, to its value in the long-time limit with a mixed valence peak
close to the Fermi level and a satellite peak at
ω=ε_f+U above the Fermi level
[Fig. <ref>(f) for tT_ K=+10^3 (tΓ=+10^4)].
Similar to the other cases in the main text, the initial state satellite peaks at
ω=±ε_i rapidly relocate at t=-1/Γ [dashed
line in Fig. <ref>(a)] with spectral weight being
shifted to form the upper satellite peak and the mixed valence
resonance, a process which essentially has completed by t=+1/Γ
[dashed line in Fig. <ref>(b)]. The weights of these
peaks are also close to those of the final state for
tΓ≳ +1. The central peak which represents the Kondo resonance at t→
-∞ also varies strongly at t≳ -1/Γ, and evolves into the
mixed valence peak by time t≳ +1/Γ.
§.§ Nordlander quench (finite temperature)
In Ref. S_Nordlander1999 a quench is made within the
U=∞ Anderson model via a level shift ε_d(t) =
ε_iθ(-t) + ε_fθ(t) with
ϵ_i=-10Γ (T_ K^i≈ 10^-7) and
ϵ_f=-4Γ (T_ K≈ 1.8×
10^-3). This corresponds to a quench from one asymmetric Kondo
regime to another with disparate Kondo scales; in contrast we
previously investigated quenches in which one of the states was in
a symmetric Kondo regime whereas the other was in a mixed valence regime. The quench in
Ref. S_Nordlander1999 also differs from those studied so far
since it is at a finite temperature T=2.5× 10^-3 such that
T_ K^i≪ T ≈ T_ K. Thus, initially the Kondo resonance
is strongly temperature suppressed whereas in the final state it is only
moderately suppressed by temperature. This
quench can therefore serve to illustrate the application
of our TDNRG formalism for time dependent spectral functions to finite temperatures.
In Fig. <ref>, we show the time-dependent spectral
function from negative to positive times. The calculations were
carried out for U≫ D to simulate the U=∞ case. We therefore
observe only the satellite peak below the Fermi level in the negative
frequency range, both in the initial and final states.
Similar to the other calculations, the satellite peak rapidly relocates at
t≈ -1/Γ from ε_i to ε_f as shown in Fig. <ref> (a).
At the same time, the spectral function develops small regions of
negative spectral weight, with the total sum-rule remaining satisfied
to within 1% as shown in Fig. <ref> (a).
The central peak at ω=0 is absent at t→ -∞
since the calculation is at finite temperature T≫ T_ K^i.
Since the temperature T≈ T_ K is finite and comparable to
the final state Kondo scale, the Kondo resonance does not fully
develop at long times [Fig. <ref>(b) and <ref>(c)] with
πΓ A(ω=0,t→∞) reaching only about 59% of its T=0 value.
This is better seen in Fig. <ref>(d), which shows a
close up of the low frequency region around the Fermi level.
Nevertheless, despite the finite temperature, one sees the build up of
the Kondo resonance at t≳ 1/T_ K.
§.§ Hybridization quench
In this subsection, we show the time-dependent spectral functions for
the case of a hybridization quench as in
Ref. S_Weymann2015, where the hybridization between the
impurity and the conduction electrons, initially turned off at t<0,
is suddenly turned on at t=0.
In Fig. <ref>(a)-(b), we see that for this
quench too, low- and high-energy features are present at all
times. The high energy features correspond to the final state satellite peaks at
ε_f and ε_f+U, whereas the low energy feature
of width on the scale of the final state Kondo temperature T_ K
represents the Kondo resonance. While the former have little
temperature dependence at all t>0, as in Weymann et al.
<cit.>, the latter has significant time dependence,
developing fully only at tT_ K≳ 1
[Fig. <ref>(a)] with weight drawn in from higher
energies in the process. Notice that this
low energy peak appears even at t=0, which is different from Weymann
et al. <cit.>, since the broadening parameter is set to be
time-independent in our calculation, while it is time-dependent (and
large, of order Γ at t=0) in Weymann et al. <cit.>.
While the strong time dependence of the Kondo resonance can be
seen on a logarithmic frequency scale from Fig. <ref>(a),
it is barely discernible on the linear frequency scale of Fig. <ref>(b).
§.§ Lesser Green functions
We consider explicitly the lesser Green function for the local level in
the Anderson impurity model, defined by
G^<(t+t',t)=i⟨ d^†_σ(t+t')d_σ(t)⟩.
For equal times (t'=0),
G^<(t,t)=i⟨ d^†_σ(t)d_σ(t)⟩=i⟨ n_dσ(t)⟩,
i.e., Im[G^<(t,t)]=n_dσ(t), so the lesser Green function at equal times
gives the time evolution of the local occupation number. Following
the derivation for the retarded Green function in Sec. <ref>,
we similarly obtain the following expression for the lesser Green function
G^<(t+t',t)
= i∑_m=m_0^N∑_rsq^∉ KK'K”∑_e_f⟨ sem|ρ̂(t)|rem⟩_f e^i(E^m_r-E^m_q)t'B^m_rqC^m_qs
G^<(t,t)
= i∑_m=m_0^N∑_rsq^∉ KK'K”∑_e_f⟨ sem|ρ̂(t)|rem⟩_f B^m_rqC^m_qs,
with B≡ d^†_σ and C≡ d_σ. The
time evolution of the occupation number calculated from this
expression by setting t'=0^+ can be compared with that calculated
directly from the thermodynamic observable n_dσ(t)
<cit.>. The two results, shown in
Fig. <ref>, match perfectly at short times
and differ slightly on longer time scales (tΓ≳ 1). This
small difference arises because the NRG approximation enters
differently in the expressions for thermodynamic and dynamic quantities.
§.§ Retarded Green function: explicit dependence on times
From Eq. (<ref>), we can directly evaluate the
dependence of the retarded Green function on its two time
arguments. This is shown for the imaginary and real parts in
Figs. <ref>(a)-<ref>(b) versus the time difference t'>0 and
time t>0. At equal times we see from Fig. <ref>(a) that
-Im[G(t,t)]=1 for all t>0, recovering the canonical
anticommutation relation for fermions, and hence the spectral
sum rule for t>0. Non-equilibrium DMFT <cit.>
requires impurity Green functions in real time, and the ability to
calculate these within TDNRG, which we have demonstrated here, is a useful
first step towards such applications.
[S_Weichselbaum2007] A. Weichselbaum and J. von Delft, Phys. Rev. Lett. 99, 076402 (2007).
[S_Anders2008b] F. B. Anders, J. Phys.: Condens. Matter 20, 195216 (2008).
[S_Nordlander1999] P. Nordlander, M. Pustilnik, Y. Meir, N. S. Wingreen, and D. C. Langreth, Phys. Rev. Lett. 83, 808 (1999).
[S_Weymann2015] I. Weymann, J. von Delft, and A. Weichselbaum, Phys. Rev. B 92, 155435 (2015).
[S_Lobaskin2005] D. Lobaskin and S. Kehrein, Phys. Rev. B 71, 193303 (2005).
[S_Freericks2006] J. K. Freericks, V. M. Turkowski, and V. Zlatić, Phys. Rev. Lett. 97, 266408 (2006).
[S_Turkowski2005] V. Turkowski and J. K. Freericks, Phys. Rev. B 71, 085104 (2005).
[S_Aoki2014] H. Aoki, N. Tsuji, M. Eckstein, M. Kollar, T. Oka, and P. Werner, Rev. Mod. Phys. 86, 779 (2014).
[S_Nghiem2014a] H. T. M. Nghiem and T. A. Costi, Phys. Rev. B 89, 075118 (2014).
[S_Anders2006] F. B. Anders and A. Schiller, Phys. Rev. B 74, 245113 (2006).
[S_Gramsch2013] C. Gramsch, K. Balzer, M. Eckstein, and M. Kollar, Phys. Rev. B 88, 235106 (2013).
[S_Costi2010] T. A. Costi and V. Zlatić, Phys. Rev. B 81, 235127 (2010).
[S_Bulla2001] R. Bulla, T. A. Costi, and D. Vollhardt, Phys. Rev. B 64, 045103 (2001).
[S_Bulla2008] R. Bulla, T. A. Costi, and T. Pruschke, Rev. Mod. Phys. 80, 395 (2008).
[S_Dirks2013] A. Dirks, M. Eckstein, T. Pruschke, and P. Werner, Phys. Rev. E 87, 023305 (2013).
[S_Jauho1994] A.-P. Jauho, N. S. Wingreen, and Y. Meir, Phys. Rev. B 50, 5528 (1994).
[S_Freericks2009b] J. K. Freericks and V. Turkowski, Phys. Rev. B 80, 115119 (2009).
[S_Rosch2012] A. Rosch, Eur. Phys. J. B 85, 1 (2012).
[S_Nghiem2014b] H. T. M. Nghiem and T. A. Costi, Phys. Rev. B 90, 035129 (2014).
[S_Costi1996b] T. A. Costi, J. Kroha, and P. Wölfle, Phys. Rev. B 53, 1850 (1996).
[S_Bock2016] S. Bock, A. Liluashvili, and T. Gasenzer, Phys. Rev. B 94, 045108 (2016).
[S_Eidelstein2012] E. Eidelstein, A. Schiller, F. Güttge, and F. B. Anders, Phys. Rev. B 85, 075118 (2012).
[S_Hershfield1992] S. Hershfield, J. H. Davies, and J. W. Wilkins, Phys. Rev. B 46, 7046 (1992).
[S_Meir1993] Y. Meir, N. S. Wingreen, and P. A. Lee, Phys. Rev. Lett. 70, 2601 (1993).
[S_Schmidt2002] P. Schmidt and H. Monien, arXiv:cond-mat/0202046 (2002).
[S_Cohen2014a] G. Cohen, E. Gull, D. R. Reichman, and A. J. Millis, Phys. Rev. Lett. 112, 146802 (2014).
|
http://arxiv.org/abs/1701.07621v1 | 20170126091131 | Frequency-dependent Study of Solid Helium-4 Contained in a Rigid Double-torus Torsional Oscillator | [
"Jaewon Choi",
"Jaeho Shin",
"Euseong Kim"
] | cond-mat.other | [
"cond-mat.other"
] |
Email: rappinmind@kaist.ac.kr
Corresponding author: eunseong@kaist.edu
Department of Physics and Center for Supersolid and Quantum Matter Research, Korea Advanced Institute of Science and Technology (KAIST), 291 Daehak-ro, Yuseong-gu, Daejeon 34141, Republic of Korea
The rigid double-torus torsional oscillator (TO) is constructed to reduce any elastic effects inherent to complicated TO structures, allowing explicit probing for a genuine supersolid signature. We investigated the frequency- and temperature-dependent response of the rigid double-torus TO containing solid 4He with 0.6 ppb 3He and 300 ppb 3He. We did not find evidence to support the frequency-independent contribution proposed to be a property of supersolid helium. The frequency-dependent contribution, which comes from the simple elastic effect of solid helium coupled to the TO, is essentially responsible for the entire response. The magnitude of the period drop is linearly proportional to f^2, indicating that the responses observed in this TO are mostly caused by the overshoot of soft solid helium against the wall of the torus. Dissipation of the rigid TO is vastly suppressed compared to that of non-rigid TOs.
67.80.bd
67.80.de
Frequency-dependent Study of Solid Helium-4 Contained in a Rigid Double-torus Torsional Oscillator
Eunseong Kim
December 30, 2023
==================================================================================================
§ INTRODUCTION
The resonant period of an ideal torsional oscillator (TO) is proportional to the square root of its rotational inertia √(I), and superfluid decoupling can be detected by a reduction of the resonant period. The resonant period drop of a TO containing solid helium was originally interpreted as the appearance of a supersolid phase <cit.>. Recently, a number of experimental and theoretical efforts have indicated that the anomaly in the TO response can be explained by the shear modulus change <cit.>. A previous finite element method (FEM) simulation suggested that the influence on the TO period due to the change in shear modulus of solid helium was negligible <cit.>. Nevertheless, the effect can be significantly amplified as a result of non-ideal TO design. Four mechanisms of non-ideal TO response have been suggested <cit.>. In order to minimize the contribution of the elastic effect to the resonant period of a TO, it should be meticulously constructed to be rigid. However, most of the TOs used in the previous supersolid experiments were not rigid <cit.>. Recently, the Chan group reported that the resonant period drop was substantially reduced when the rigidity of the TO was systematically increased <cit.>. The resonant period drop of a highly rigid TO was only a few times greater than that due to the elastic effect estimated by FEM simulation. They set the upper bound for the non-classical rotational inertia (NCRI) to be less than approximately 4 ppm <cit.>.
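Explicitly, for an ideal oscillator with torsion constant k the resonant period is P = 2π√(I/k), so a small decoupled moment of inertia δI produces a fractional period drop δP/P ≈ δI/2I; this standard relation underlies the NCRI estimates quoted throughout.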
However, new evidence for a 'true' supersolid signature was suggested by Reppy et al. <cit.> based on double-frequency TO experiments. Superfluidity, or the NCRI, is independent of frequency, while non-superfluidity, i.e., the relaxation phenomenon resulting from the shear modulus change, leads to a strong frequency dependence. Accordingly, analysis of the frequency dependence can be used to differentiate the superfluid response from the non-superfluid response <cit.>. Reppy et al. observed a small frequency-independent resonant period change after subtracting the frequency-dependent term and ascribed this to a possible supersolid signature. This interpretation can be questioned since the measurements reported a relatively large period drop, which can be associated with the non-rigidity of the TO.
For this article, we measured the period drop and accompanied dissipation peak using a double-frequency TO that was constructed to be highly rigid to minimize shear modulus effects. The frequency dependence of the rigid double-frequency TO was investigated in various modes of representation. As a result, we elucidate whether or not solid helium-4 exhibits true supersolidity.
§ EXPERIMENTAL DETAILS
The KAIST rigid double-torus torsional oscillator (TO) (Fig. <ref>) was carefully designed to minimize the various elastic effects caused by: (1) shear-modulus-dependent relative motion between TO parts (the glue effect) <cit.>, (2) solid helium contained in the torsion rod (the torsion rod hole effect) <cit.>, and (3) a solid helium layer grown on a thin TO base plate (the Maris effect) <cit.>.
We first assembled every joint in the TO using stainless-steel screws or hard-soldering to diminish elastic effects on the TO period due to the relative motion between different parts. The torsion plate and the torsion rod were rigidly connected by machining the assembly from a single piece of Be-Cu. The torus-shaped TO cell for solid helium was constructed by hard-soldering two pieces of semi-circular copper tubing. The torus was hard-soldered onto thick copper plate and the combined structures were fastened down directly on the torsion plate by four screws.
Second, we used a thick torsion rod and a very thin fill line to prevent the elastic effect of solid helium in the fill line. The change of shear modulus of solid helium-4 contained in the torsion rod can induce the TO period anomaly <cit.>. This effect may be responsible for the period drop observed in the majority of TO experiments. In order to remove this effect, a 1-mm-diameter CuNi filling capillary was installed directly on top of the TO cell instead of making a hole through the torsion rod.
Third, the TO cell containing solid helium was hard-soldered to a 3-mm-thick copper plate and was designed so that solid helium in the torus did not have direct contact with the base plate near the torsion rod. In a cylindrical TO, solid helium is grown directly on the base plate. If the base plate is not sufficiently thick, then the period drop due to the change in shear modulus of solid helium can be significantly amplified <cit.>. The contribution of solid helium is greater in the proximity of the torsion rod and when the base plate is thinner. In this study, solid helium confined in a rigid torus channel on the thick copper plate was not expected to exhibit the strong Maris effect.
Finally, we carefully tuned the so-called 'overshoot' effect caused by direct coupling of the elastic properties of solid helium to the TO response. Since solid helium is much softer than the wall of the container, it undergoes additional displacement. The shear modulus change of solid helium not moving in phase with the confining wall can induce the period anomaly without non-linear amplification. The resonant period drop introduced by the overshoot effect is known to be linearly proportional to the square of the TO frequency. For this experiment, we optimized the overshoot effect so that the system was in a regime where the elastic effect is not too small to be detected, yet not so large as to soften the TO. In this TO, solid helium was located in a torus with a relatively large cross-section (diameter: 5 mm), which allows us to have large mass loading and high resolution for the so-called NCRI fraction. One can effectively reduce this effect by confining solid helium in a thick copper torus with a small cross-section, at the expense of diminishing the capability for frequency analysis. This fine-tuning allows us to investigate the effect of the shear modulus on the double-frequency TO and possible supersolidity more clearly.
In addition, the KAIST rigid double-torus TO can be operated at four resonant frequencies. This is possible due to the configuration of our TO which consists of two torus-shaped solid helium containers attached to upper and lower stages. This configuration enables modification of the resonant frequency by loading solid helium into the upper or both tori. We first placed the solid sample in both tori. After collecting the first dataset, the solid helium inside the lower torus was carefully removed at low temperature to avoid damaging the solid sample placed in the upper torus.
Finite element method (FEM) simulations indicate that a 30% change in the shear modulus of solid helium-4 in the upper torus leads to only a 0.93-ns period drop in the first (in-phase) mode and a 1.16-ns drop in the second (out-of-phase) mode. These values correspond to 2.3 × 10^-5 and 1.9 × 10^-4, respectively, in the framework of the so-called NCRI fraction. The major contribution seemingly comes from the overshoot effect, the relative motion of solid helium with respect to the outer wall.
The empty cell has resonant frequencies of 449.57 Hz and 1139.77 Hz for the in-phase and out-of-phase modes, respectively. Two pairs of electrodes attached to the lower torus were used to drive and detect the TO response. Additional electrodes installed on the upper plate enable the detection of the amplitude and phase of the upper torus. We confirmed that the phase difference between the two tori was approximately 0 degrees for the in-phase mode and 180 degrees for the out-of-phase mode. The mechanical quality factor was approximately 1 × 10^6 at 4.2 K for both modes.
Bulk solid helium-4 samples with helium-3 impurity concentrations of 0.6 ppb and 300 ppb were grown by the blocked capillary method. The sample cell was first pressurized to target pressures of 67-82 bar at 3.2 K and the mixing chamber was then cooled to the base temperature. The resonant period shifted according to solid growth in the upper torus: 39,800 ns for the in-phase mode and 6,040 ns for the out-of-phase mode. The large period shift due to solid helium, together with the stability of the solid in the first mode, enables us to resolve changes of the resonant period at the level of about 2 ppm. This resolution is finer than the upper limit on the NCRI fraction reported by the PSU group (4 ppm).
§ TO PERIOD AND DISSIPATION
The period and dissipation of the TO for both the in-phase and out-of-phase modes were measured over a temperature range of 20-300 mK. Figure <ref> presents the resonant period of the rigid double-torus TO containing high-purity solid helium-4 (0.6 ppb) and commercial-purity solid helium-4 (300 ppb) as a function of temperatures at different rim velocities. The period drop anomaly was observed for both modes of the rigid double-torus TO.
Compared with the empty cell data (dashed line), the period drop in the 0.6-ppb solid sample appears initially at 80 mK and saturates below 25 mK for both modes (colored symbols). The magnitude of the period drop is suppressed by increasing the rim velocity of the alternating current (AC) oscillation for both modes. The suppression of the TO anomaly appears at a rim velocity of 100 μm/s for both modes. Similar behaviors have been observed in numerous previous experiments <cit.>, including measurements with rigid TOs <cit.>. A low onset temperature of about 80 mK and a sharper temperature dependence of the TO anomaly were reported for low helium-3 impurity concentrations <cit.>. The period values measured at different rim velocities follow the empty cell background at high temperatures for both resonant modes.
The maximum period drop at the lowest rim velocity is 1.43 ns and 1.66 ns for the in-phase (-) and out-of-phase (+) modes, respectively, which is of a similar order of magnitude to that expected from the contribution of the shear modulus (0.93 ns for the in-phase mode and 1.16 ns for the out-of-phase mode for a 30% shear modulus change). By subtracting the empty-cell background, we calculated dP_± relative to the mass loading of solid helium Δ P_±, equivalent to the NCRI fraction: dP_-/Δ P_-=3.5 × 10^-5 for the in-phase mode and dP_+/Δ P_+=2.5 × 10^-4 for the out-of-phase mode. Considering that the maximum change in shear modulus at the lowest temperature can vary from 8% <cit.> to 86% <cit.>, the period drop can be reasonably attributed to the stiffening of the shear modulus of solid helium. Dissipation in the TO response is also observed for both modes. The dissipation peak appears around 30 mK, at which the period of the TO changes most drastically. The dissipation peak is also suppressed by increasing the rim velocity in both modes, in agreement with previous measurements <cit.>.
We observed essentially the same results for the commercial-purity (300 ppb) solid helium-4 sample, except that the anomaly is found at higher temperatures. In the in-phase mode, the anomalous period drop appears at an onset temperature of 160 mK and reaches a maximum of 1.49 ns (dP_-/Δ P_- = 3.7 × 10^-5) at 40 mK. In the out-of-phase mode, the onset temperature is higher than that of the in-phase mode, about 200 mK. The shifted onset temperature at higher-frequency modes has been previously reported for a double-pendulum TO by the Rutgers group <cit.> and also in shear modulus measurements in another experiment <cit.>. The magnitude of the period drop saturates to 1.55 ns (dP_+/Δ P_+ = 2.6 × 10^-4) at 40 mK. The maximum period drop observed for both modes is of a similar order of magnitude to that anticipated by the FEM simulation. The dissipation in the TO amplitude appears over the same temperature range. At the lowest rim velocity of 50 μm/s, the dissipation peak was located at approximately 105 mK.
We investigated seven different solid helium samples. The resonant period showed drops of a similar order of magnitude, approximately 1.3-1.7 ns, for both modes. However, only the two solid samples described above showed dissipation in the TO response. In addition, the size of these dissipation peaks is approximately 10^-7, orders of magnitude smaller than those from typical TO experiments. These minute dissipation features are consistent with the results from the Chan group, who found no clear dissipation features in their rigid TO experiments <cit.>. The absence and/or significant reduction of dissipation can be connected to the rigidity of the TO.
§ FREQUENCY-DEPENDENT STUDY
The frequency dependence of the TO responses is examined to clarify the origin of the marginal period drop. While dP/Δ P is independent of the measurement frequency in the supersolid scenario, it is proportional to the square of the measurement frequency in the shear-modulus effect scenario <cit.>. Reppy et al. <cit.> provided a simple mathematical method to decompose the period drop observed in their TO experiment explicitly into frequency-independent and frequency-dependent parts. The measured period drop consists of two terms: (i) the frequency-independent term [dP_±/Δ P_±]^ind(T,V), regarded as a putative supersolid signature, and (ii) the frequency-dependent term [dP_±/Δ P_±]^dep(T,V,f), attributed to the elastic overshoot effect, where T is temperature and V is rim velocity. The total period drop observed in TO experiments can be written as follows:
[ dP_±/Δ P_±]^exp=[ dP_±/Δ P_±]^ind(T,V)+[ dP_±/Δ P_±]^dep(T,V,f)
Since the period drop originating from the overshoot effect is proportional to f^2, the last term can be substituted with [dP_±/Δ P_±]^dep(T,V,f)=a(T,V)f^2. Then, both the frequency-independent term [dP_-/Δ P_-]^ind and the frequency-dependent term [dP_-/Δ P_-]^dep of the in-phase mode are decomposed as follows:
[ dP_-/Δ P_-]^ind= f_+^2/γ[ dP_-/Δ P_-]^exp-f_-^2/γ[ dP_+/Δ P_+]^exp
[ dP_-/Δ P_-]^dep= (1-f_+^2/γ)[ dP_-/Δ P_-]^exp+f_-^2/γ[ dP_+/Δ P_+]^exp
where γ=f_+^2-f_-^2.
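The decomposition amounts to solving the two linear equations above for the two unknowns; a minimal sketch (Python/NumPy, with variable names of our choosing) acting on matched-rim-velocity temperature scans is:

import numpy as np

def decompose(dP_minus, dP_plus, f_minus=449.57, f_plus=1139.77):
    # Split [dP/DeltaP]^exp into frequency-independent and frequency-
    # dependent parts, assuming dP/DeltaP = ind + a*f^2 for each mode.
    gamma = f_plus**2 - f_minus**2
    ind = (f_plus**2 * dP_minus - f_minus**2 * dP_plus) / gamma
    dep = dP_minus - ind   # = (1 - f_plus^2/gamma)*dP_minus + (f_minus^2/gamma)*dP_plus
    return ind, dep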
To measure [dP_-/Δ P_-]^ind and [dP_-/Δ P_-]^dep at a certain rim velocity, the driving AC voltage was carefully adjusted to give the same rim velocity for the in-phase and out-of-phase modes. Accordingly, each same-color-coded pair of temperature scans shown in Figure <ref> was obtained at the same rim velocity, despite different driving AC voltages.
The temperature dependences of (a) the frequency-independent and (b) the frequency-dependent terms at various TO rim velocities are plotted in Figure <ref>. The datasets identified with the same color in both Figure <ref>-(a) and <ref>-(b) are obtained at the same rim velocity. However, the two figures are significantly different. No sizable frequency-independent period drop at any rim velocity can be identified in Figure <ref>-(a), while the frequency-dependent period response at a rim velocity of 47.2 μm/s is nearly the same as the total period drop in the measurements. The averaged frequency-independent period drop is approximately 0±4 ppm, which is similar to the upper limit set in the rigid TO study <cit.>. Accordingly, the majority of the period anomaly can be attributed not to the appearance of supersolidity but to the stiffening of the shear modulus of solid helium-4. In addition, the frequency-dependent term was strongly suppressed with increasing TO rim velocity. In contrast, no apparent drive dependence was observed for the frequency-independent drop. The frequency-dependent TO response can be extrapolated to values less than 4 ppm (0.16 ns) when the stiffening of solid helium is significantly suppressed, indicating that the entire TO anomaly can be ascribed to the shear modulus change of solid helium at low temperatures.
Figure <ref> shows the low-temperature dP/Δ P for the four resonant modes as a function of frequency. The two solid triangles correspond to solid samples grown in both tori and the two solid circles to samples grown only in the upper torus. In the log-log plot of dP/Δ P versus f^2, the data can be linearly fitted by the equation log(dP/Δ P )=(-9.74)+(1.003)log(f^2). The slope of the fit is nearly 1, indicating that dP/Δ P is linearly proportional to f^2. Converting the log-scale axis to a linear one, the y-intercept is equivalent to -2.62 × 10^-7 (-0.3 ppm), which indicates that the measured period drop originates from the shear modulus effect rather than from supersolid mass decoupling.
The frequency dependence can be analyzed by other methods. The ratio between the period shifts of the two modes δ P_+/δ P_- can distinguish the origin of TO response at low temperature: either supersolidity or shear modulus change <cit.>. The ratio for the supersolid scenario (δ P_+/δ P_-)_SS would follow the mass-loading (or missing) change which can be easily obtained by measuring the mass-loading-induced period shifts of both modes. In contrast, the ratio for the shear modulus effect required an additional frequency-dependent contribution as follows:
(δ P_+/δ P_-)_SM=(f_+^2/f_-^2)(δ P_+/δ P_-)_SS
The mass-loading ratio (δ P_+/δ P_-)_SS in our experiment is measured to be approximately 0.152. The solid straight line shown in Figure <ref> indicates the mass decoupling, or supersolid, scenario estimated by FEM simulation and experimental measurements. The effect of the shear modulus change of solid helium is estimated analytically (the dashed line) and with FEM simulation (the solid line). The slope of the δ P_+/δ P_- plot in the shear modulus scenario is steeper than that in the supersolid mass-loading scenario due to the additional frequency-dependent contribution. The measured δ P_+/δ P_- values for both 0.6 ppb and 300 ppb solid helium-4 (Figure <ref>) show that the ratio δ P_+/δ P_- collapses onto the shear modulus scenario. The ratios obtained in previous studies lie between the shear modulus and supersolid expectations, suggesting the possible existence of a putative supersolid <cit.>. However, we confirmed that the TO responses from the KAIST rigid double-torus TO are of non-supersolid origin. The discrepancy with previous double-frequency TO observations may arise from the rigidity of the TO.
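The two scenarios are numerically well separated: taking the empty-cell frequencies quoted above as representative (f_-=449.57 Hz, f_+=1139.77 Hz; the in-cell values differ only slightly), the relation above gives (δ P_+/δ P_-)_SM = (1139.77/449.57)^2 × 0.152 ≈ 6.43 × 0.152 ≈ 0.98, more than six times the supersolid expectation of 0.152.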
§ DISCUSSIONS
Recently, both the Cornell <cit.> and London <cit.> groups constructed double-frequency TOs to investigate the frequency dependence of the period anomaly. The London group measured the period and dissipation of a two-mode TO containing a poly-crystalline solid helium-4 sample. They observed a period drop and concomitant dissipation features, equivalent to dP_-/Δ P_-=2.10 × 10^-3 and dP_+/Δ P_+=8.04 × 10^-3. The torsion rod hole effect and the Maris effect were removed by analytical calculations. After fitting a linear equation to those data in the f^2-domain, they found a sizable frequency-independent period drop [dP_-/Δ P_-]^ind=1.86 × 10^-3. The period drop and dissipation were analyzed with a complex response function. The TO response they observed was not in agreement with the functional form of simple glassy dynamics. The authors concluded that a different physical mechanism is required to explain the TO responses.
The Cornell group also designed a compound TO with an annular sample space and measured the TO responses. After removing the overshoot effect by frequency analysis, they extracted a finite frequency-independent period drop [dP_-/Δ P_-]^ind=1 × 10^-4, several orders of magnitude larger than the elastic contribution estimated by their FEM simulation. Additional dissipation introduced by solid helium was measured to be very small. The height of the dissipation peak was reported to be only 5 × 10^-8 in both resonant modes. They proposed that the finite frequency-independent period drop could possibly be new evidence for supersolidity in bulk solid helium.
In our rigid TO, the frequency-independent period drop is two or three orders of magnitude smaller than the values reported by the Cornell and London groups. The dissipation peak is not observed in most solid samples, or the size of the dissipation peak is very small (∼10^-7). We believe that this discrepancy is presumably due to the TO rigidity. If the ‘true’ NCRI fraction were about 100 ppm, then the period drop anomaly should have been observed in the highly rigid TO experiments at PSU and in our rigid double-torus TO experiments.
§ CONCLUSION
We studied the frequency dependence of TO responses of solid helium-4 using the rigid double-torus torsional oscillator. The period drop anomaly is observed for both in-phase and out-of-phase modes. Frequency analysis shows that the frequency-independent period shift is less than 4 ppm, close to the upper limit set by the PSU group, and the frequency-dependent contribution is almost the same as the TO response. We conclude that the TO response at low temperatures is not due to the appearance of supersolidity but due to the change in the shear modulus of solid helium. The supersolid fraction, if it exists, should be smaller than 4 ppm.
The authors acknowledge Duk Y. Kim at Pennsylvania State University for fruitful discussions and advice in the early stage of construction of the rigid double-pendulum torsional oscillator. This work is supported by the National Research Foundation of Korea through the Creative Research Initiatives. J. Choi would like to thank the POSCO TJ Park Foundation for its financial support and generosity through the TJ Park Science Fellowship.
|
http://arxiv.org/abs/1701.07869v4 | 20170126203047 | The First Billion Years project: constraining the dust attenuation law of star-forming galaxies at z $\simeq$ 5 | [
"F. Cullen",
"R. J. McLure",
"S. Khochfar",
"J. S. Dunlop",
"C. Dalla Vecchia"
] | astro-ph.GA | [
"astro-ph.GA"
] |
Accepted – . Received 2016 May 30
====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
We present the results of a study investigating the dust attenuation law at z≃ 5, based on synthetic spectral energy distributions (SEDs) calculated for a sample of N=498 galaxies drawn from the First Billion Years (FiBY) simulation project.
The simulated galaxies at z≃ 5, which have M_1500≤ -18.0 and 7.5 ≤log(M/M_⊙)≤ 10.2, display a mass-dependent α-enhancement, with a median value of [α/Fe]_z=5 ≃ 4 × [α/Fe]_Z_⊙.
The median Fe/H ratio of the simulated galaxies is 0.14±0.05, which produces steep intrinsic UV continuum slopes: ⟨β_i⟩ = -2.4 ± 0.05.
Using a set of simple dust attenuation models, in which the wavelength-dependent attenuation is assumed to be of the form A(λ) ∝λ^n, we explore the parameter values which best reproduce the observed z=5 luminosity function (LF) and colour-magnitude relation (CMR).
We find that a simple model in which the absolute UV attenuation is a linearly increasing function of log stellar mass (A_1500=0.5×log(M/M_⊙) - 3.3), and the dust attenuation slope (n) is within the range -0.7 ≤ n ≤-0.3, can successfully reproduce the LF and CMR over a wide range of stellar population synthesis model (SPS) assumptions, including the effects of massive binaries.
This range of attenuation curves is consistent with a power-law fit to the Calzetti attenuation law in the UV (n=-0.55).
In contrast, curves as steep as the Small Magellanic Cloud (SMC) extinction curve (n=-1.24) are formally ruled out.
Finally, we show that our models are consistent with recent 1.3mm ALMA observations of the Hubble Ultra Deep Field (HUDF), and predict the form of the z≃5 IRX-β relation.
galaxies: dust - galaxies: high redshift - galaxies: evolution -
galaxies: star-forming
§ INTRODUCTION
Tracing the star-formation rate of galaxies across cosmic time, and particularly at z ≳ 3 (prior to the peak epoch of cosmic star formation), remains one of the most important focuses of observational cosmology, key to shedding light on the build up of stellar mass and metal content at early times, and clarifying the process of cosmic reionization.
However, despite the substantial progress that has been made in constraining the star-formation rate density (SFRD) out to z≈8 <cit.>, the lack of far-infrared (FIR) data for galaxies at z ≳ 3 means that, at these redshifts, star-formation rate estimates are mainly derived solely from rest-frame ultraviolet (UV) to optical observations.
Therefore, our knowledge of the SFRD at z ≳ 3 is affected by inherent uncertainties regarding the correct dust corrections to apply at high redshifts.
When only rest-frame UV data are available, the observed UV luminosities must be dust-corrected based on correlations derived at lower redshifts.
Most commonly, the <cit.> (M99) IRX-β relation (IRX = L_IR/L_UV) is used to estimate the intrinsic UV luminosity from the observed UV continuum slope β (f_λ∝λ^β).
This relation was derived from a sample of local starburst galaxies and has been shown to apply for star-forming galaxies out to z ≃ 2 <cit.>.
Under the assumption that the dust heating is dominated by a young stellar population, IRX can be related directly to the attenuation in the UV <cit.>, allowing one to relate the absolute UV attenuation to the observed shape of the UV continuum slope, assuming the intrinsic shape is known.
The form of the subsequent A_UV-β relation measured for local starbursts is what one would expect for a relatively grey reddening law, similar to that derived by <cit.>.
Steeper laws such as the Small Magellanic Cloud (SMC) extinction curve <cit.>, or laws which include a strong 2175 Å absorption feature <cit.> are not compatible with the local starburst galaxies.
Therefore, when applying the <cit.> IRX-β relation at z ≳ 3, one is implicitly assuming Calzetti-like reddening.
Furthermore, in cases where rest-frame UV plus optical data is available, the intrinsic UV luminosities are typically derived via detailed SED-fitting <cit.>, and the subsequent UV dust-correction depends on which reddening laws are assumed for the fitting.
Again, and motivated in part by its compatibility with the local IRX-β relation, by far the most common assumption is the <cit.> reddening law.
Thus, this assumption is implicit in our current understanding of the UV luminosity function and SFRD evolution at high redshifts.
However, with the advent of the Atacama Large Millimeter Array (ALMA), the form of the attenuation law at high redshifts has begun to be questioned.
Recent ALMA observations have indicated that the IR luminosities of `typical' star-forming galaxies at z ≃ 3 are lower than expected assuming a Calzetti law, and have been interpreted as evidence that the attenuation curve is as steep, or steeper, than the SMC extinction curve at these redshifts <cit.>.
Similar results have also previously been reported for young galaxies (< 100 Myr old) <cit.> and a sample of gravitationally lensed galaxies <cit.> at these redshifts.
For a steeper, SMC-like, law a given amount of reddening in the UV is achieved with less absolute attenuation (i.e. the A_UV-β relation becomes much shallower).
If dust reddening at high redshift is SMC-like, then previous UV luminosity corrections made assuming the <cit.> IRX-β relation, or <cit.>, will have overestimated the dust attenuation, and hence overestimated the intrinsic luminosities and SFRs at z ≃ 3.
On the other hand, direct measurements of the attenuation curve at high redshifts have yet to find strong evidence for the steep SMC-like curves inferred from IR measurements.
For example, <cit.> found a Calzetti-like attenuation curve shape at λ<2600Å for a sample of N=224 star-forming galaxies at 1.4 < z < 2.6.
Using an independent method, <cit.> found an average attenuation curve at 1.90<z<2.35 similar to the Calzetti curve, and perhaps even greyer at lower masses, with a tentative detection of a mass dependence of the attenuation curve slope.
Furthermore, <cit.> performed a similar analysis with a sample of N=266 galaxies at 2.0 < z < 6.5 and again found no evidence for a deviation from a Calzetti-like attenuation curve.
Interestingly, the <cit.> analysis is the only example of a direct attenuation curve measurement at the same redshifts probed by recent ALMA observations, and does not support the existence of a steep SMC-like curve.
In light of these recent results, the primary motivation of this paper is to use state-of-the-art hydrodynamical simulations of galaxies at z=5, from the First Billion Years project (FiBY) <cit.>, to constrain the form of the attenuation curve, by comparing the SEDs of simulated galaxies to the luminosity function (LF) and colour-magnitude relation (CMR) at z=5.
This specific redshift was chosen primarily because it is the highest redshift at which there is good consistency between measurements of the LF and CMR across a variety of independent studies <cit.>; furthermore, it is at this approximate redshift that a number of recent ALMA observations have been made claiming evidence for an evolution of dust properties in the early Universe.
The structure of this paper is as follows: in Section <ref> we describe how we extracted the relevant stellar data for galaxies in our sample from the FiBY simulation, and outline the method we employed to generate synthetic SEDs, including a description of SPS models and photoionization model assumptions. In Section <ref> we review the intrinsic properties of our synthetic SEDs and describe how these properties are affected by our stellar population synthesis (SPS) and photoionization model choices. In Section <ref> we describe the two dust models we have employed to constrain various properties of the attenuation law, before presenting the results of fitting our sample of simulated galaxies to the observed LF and CMR in Section <ref>. In Section <ref> we discuss the IR properties of z=5 galaxies predicted by our models and compare these to recent ALMA observations. Finally, in Sections <ref> and <ref> we provide a discussion and summary of our main conclusions.
The FiBY simulation adopts the following cosmological parameters, which we will also adopt throughout this paper: Ω_m = 0.265, Ω_b = 0.0448, Ω_Λ = 0.735, H_0 = 71 km s^-1 Mpc^-1 and σ_8 = 0.81.
§ GENERATING SYNTHETIC SEDS
In this section we discuss how the synthetic SEDs were generated from the FiBY simulation data, including a detailed discussion of our SPS and photoionization modelling assumptions, and an overview of the physical properties of our final simulated galaxy sample.
§.§ SPS models
To construct the SED for each galaxy we used the latest release of the `Binary Population and Spectral Synthesis' (BPASSv2) models <cit.>.
The BPASSv2 models were chosen because, along with conventional single-star models, they also incorporate the evolution of massive stars in binary systems.
Observations in the local Universe have shown that a potentially substantial fraction (≳ 70%) of massive stars undergo a binary interaction (such as mass transfer or a merger) during their lifetimes <cit.>.
Moreover, recent studies of galaxies at z ≃ 2-3 indicate that the BPASSv2 models are best able to replicate the observed nebular and stellar properties of star-forming galaxies at these redshifts <cit.>.
We have checked that, for the purposes of this work, BPASSv2 single star models are equivalent to the more commonly adopted STARBURST99 models <cit.>.
In the context of this paper, the main effect of including massive binary evolution in SPS models is to generate harder ionizing spectra, particularly at low metallicities, and to boost UV output at older stellar ages relative to single-star populations.
We considered four sets of models which we refer to according to the upper mass cutoff of the initial mass function (IMF) and whether or not binary evolution is included.
Each galaxy is assigned four separate SEDs corresponding to each of the four models.
BPASSv2-100bin models include binary star evolution with an IMF cutoff of 100 M_⊙, and BPASSv2-100 are the equivalent single-star evolution models; similarly, BPASSv2-300bin models include binary evolution with an IMF cutoff of 300 M_⊙, and BPASSv2-300 are the equivalent single-star evolution models.
All models have an IMF index of -1.3 between 0.1 - 0.5 M_⊙ and -2.35 above 0.5 M_⊙ and cover the following metallicities: Z_* = 0.001, 0.002, 0.003, 0.004, 0.006, 0.008, 0.010, 0.014, 0.020, 0.030, 0.040.
§.§ FiBY simulation data
Here, we give a brief summary of the FiBY simulations but refer the reader to other papers for a more detailed description <cit.>.
The FiBY simulation suite is a set of high-resolution cosmological hydrodynamics simulations using a modified version of the GADGET code used in the Overwhelmingly Large Simulations (OWLS) project <cit.>.
These simulations reproduce the stellar mass function and star formation rate of galaxies at z≥ 6 (Khochfar et al. in prep.), and at the same time also recover the right trends in the metallicity evolution of galaxies (Dalla Vecchia et al. in prep.).
Furthermore, the simulations recover that low mass galaxies can reionize the Universe <cit.>, and that supernovae-feedback can drive the formation of cores in galaxies <cit.>.
The code tracks metal pollution for 11 elements: H, He, C, N, O, Ne, Mg, Si, S, Ca and Fe and calculates the cooling of gas based on line-cooling in photoionization equilibrium for these elements <cit.> using tables pre-calculated with CLOUDY v07.02 <cit.>.
Furthermore, the simulation incorporates full non-equilibrium primordial chemistry networks <cit.> including molecular cooling functions for H_2 and HD.
Star formation is modelled using the pressure law implementation of <cit.>, which yields results consistent with the Schmidt-Kennicutt law <cit.>.
The threshold density above which stars form is set to n = 10 cm^-3.
The simulations include feedback from stars by injecting thermal energy into the neighbouring particles <cit.>.
For each Pop II supernova 10^51 erg is injected once a star particle has reached an age of 30 Myr, corresponding to the maximum lifetime of stars that end their lives as core-collapse supernovae.
In this work we will focus on the FiBY_L and FiBY_XL simulations which cover co-moving volumes of (16 Mpc)^3 and (32 Mpc)^3 respectively (≈ 3.7 × 10^4 Mpc^3 combined comoving volume).
The individual gas and star particle masses in the simulations are log(M/M_⊙) = 5.68 for the FiBY_XL and 4.81 for the FiBY_L.
To build the spectral energy distributions of galaxies at z=5, halos at this redshift were identified in the simulation using the SUBFIND algorithm <cit.>.
Then, for each halo, we extracted the masses, ages and metallicities of each star particle.
The mass of each star particle is defined as the total initial mass as opposed to the current mass (i.e. at z=5), since the effects of mass-loss with stellar age are accounted for in the BPASSv2 stellar population synthesis models we use to generate the SEDs.
The age of each star particle was calculated using the formation redshift and assumed cosmological parameters.
Finally, we extracted the mass in metals for each star particle; for reasons which will be described in Section <ref>, we tracked both the total mass in metals and the mass in Fe.
To reduce computation time, we binned star particles with ages >20 Myr into bins of width 10 Myr, taking averages of mass, age and metallicity for each bin; star particles with ages ≤ 20 Myr were stored individually.
To build a galaxy SED from a given model we looped over each star particle associated with the galaxy and assigned the stellar spectral energy distribution which best matched the age and metallicity of the particle.
Since the BPASSv2 stellar SEDs are modelled as an instantaneous starburst with a total mass of 1 × 10^6 M_⊙, we additionally scaled the flux to the mass of the star particle.
We constructed the final galaxy SED by summing the SEDs of all the individual star particles.
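In code, this assembly step amounts to a nearest-grid-point lookup and a mass-weighted sum. The following Python sketch illustrates the procedure under the assumption that the BPASSv2 grid has already been loaded into arrays; all variable names are illustrative and are not part of any released code:

import numpy as np

def build_galaxy_sed(particles, grid_ages, grid_z, grid_seds):
    # particles: iterable of (initial_mass [Msun], age [yr], Z_Fe) tuples.
    # grid_seds: array of shape (n_age, n_Z, n_wavelength); each SSP is
    # normalized to an instantaneous 1e6 Msun burst, as in BPASSv2.
    total_sed = np.zeros(grid_seds.shape[-1])
    for mass, age, z_fe in particles:
        i = np.argmin(np.abs(grid_ages - age))  # best-matching age
        j = np.argmin(np.abs(grid_z - z_fe))    # best-matching (Fe-based) metallicity
        total_sed += (mass / 1.0e6) * grid_seds[i, j]  # rescale burst to particle mass
    return total_sed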
For our final sample, we selected halos at z=5 with an absolute UV magnitude M_1500≤ -18.0, where M_1500 was calculated by integrating the rest-frame BPASSv2-100bin SED within a top-hat filter of width 100 Å at a central wavelength of 1500 Å.
This magnitude limit differs slightly according to the assumed SPS model, the limiting value of M_1500 = -18.0 for the BPASSv2-100bin model corresponds to -17.89, -18.08 and -17.95 for the BPASSv2-100, BPASSv2-300bin and BPASSv2-300 models respectively.
However, changing the SPS model for which the magnitude cut is defined does not have a significant effect on the physical properties of our sample, or any of the results presented in this paper.
The final sample contains N=498 galaxies; the distributions of current mass (i.e. at z=5), star-formation rate (SFR) (averaged over the past 100 Myr), and specific star-formation rate (sSFR ≡ SFR / M_*) of our z=5 sample are shown in Fig. <ref>.
The maximum galaxy mass in our sample is 1.4 × 10^10 M_⊙, with a median value of 2.3 × 10^8 M_⊙; the sample is mass complete down to 2.6 × 10^8 M_⊙.
The star-formation rates range from 0.3 - 60 M_⊙yr^-1 with a median of 1.1 M_⊙yr^-1.
The median sSFR is 4.8 Gyr^-1.
§.§.§ Stellar metallicity
The BPASSv2 models assume solar element abundance ratios, however it is not obvious that the abundance ratios at z=5 should be solar; in particular, given the typical star-formation histories at z=5, there could be differences in the Fe/H and α/H ratios relative to solar values.
The star-formation histories of the galaxies in the FiBY simulation are typically rising (Khochfar et al. in prep), consistent with other observations and simulations at z>2 <cit.>.
For rising (or constant) star-formation histories the rate of enrichment of α elements from Type II (core-collapse) supernovae, which has a typical time scale of ∼ 10 Myr, will exceed the rate of Fe enrichment, since Fe is mainly returned to the ISM via SNe Ia, which have a typical timescale of ∼ 1 Gyr.
Therefore, rapidly star-forming galaxies are expected to have enhanced α/Fe ratios relative to the solar value.
Indeed, it is not strictly necessary to appeal to star-formation histories to understand this offset at z=5, since the median age of the oldest star particle across our galaxy sample is 0.74 Gyr, with a maximum of 0.94 Gyr, implying that the majority of galaxies have not had time to undergo significant SNIa enrichment, regardless of their star-formation histories.
As we discuss below, this α/Fe enhancement can have a significant effect on the shape of the UV spectrum.
We were able to calculate the detailed abundance ratios of all galaxies in our sample because the mass fractions for 11 individual elements (H, He, C, N, O, Ne, Mg, Si, S, Ca and Fe) are tracked in the FiBY simulations, along with the total mass fraction in all metals.
Fig. <ref> shows the distributions of UV-weighted Fe metallicity and total metallicity of the star particles (hereafter referred to, for simplicity, as the α element metallicity), relative to solar, for the N=498 galaxies in our sample, where the solar mass fractions have been taken from <cit.>.
Consistent with the rising star-formation histories, the distributions in Fig. <ref> (left-hand panel) show the difference between the α and Fe-based metallicities.
The median α/Fe ratio in our sample is ≃ 3.9 times larger than the solar value, with the median Fe and α metallicities being Z_Fe/Z_⊙ = 0.14 and Z_α/Z_⊙ = 0.53 respectively.
The right-hand panel of Fig. <ref> shows that, under both definitions, metallicity is an increasing function of stellar mass, however Z_α/Z_⊙ shows a stronger evolution, implying that the α/Fe ratio is also an increasing function of stellar mass.
Non-solar α/Fe ratios have an important effect on the details of the emergent SED.
Recently, <cit.> performed detailed stellar population synthesis and photoionization modelling of the composite rest-frame UV and optical spectra of 30 star-forming galaxies at z=2.4 with log(M/M_⊙)>9.0.
They found that simultaneously fitting (i) the shape of the stellar spectrum in the UV, and (ii) the observed UV + optical emission line ratios, implied a difference of a factor ≃ 4 - 5 between the stellar and nebular metallicities (Z_*/Z_⊙≃ 0.1 and Z_neb/Z_⊙≃ 0.4 - 0.5).
This is consistent with the offset in our sample (Fig. <ref>); in fact, restricting our sample to galaxies above 10^9 M_⊙ yields a median α/Fe ratio of 4.2.
As argued by <cit.>, this offset is naturally explained if star-formation histories at these redshifts are typically rising <cit.>, and by the fact that stellar opacity in the UV is dominated by Fe, whereas the nebular metallicity is indicative of the abundances of the important coolants in ionized gas (e.g O, N, C), and hence traces the α elements.
To a first approximation, any differences between stellar and nebular metallicities are therefore plausibly a result of the same α/Fe offset illustrated in Fig. <ref>; the lower stellar metallicity is driven by the sensitivity of the stellar UV spectrum to Fe/H, whereas the nebular metallicity follows α/H.
Taken together, all these considerations imply that, when applying SPS models with solar abundance sets to galaxies in which one suspects there will be an enhanced α/Fe ratio, the stellar metallicity should be defined as Fe/H, rather than the mass fraction of all metals as is commonly assumed, at least when studying the properties of the UV spectrum.
Ideally, one would use SSPs with non-solar abundance sets; however, to our knowledge, there is no publicly available SPS code which accounts for both binary star evolution and non-solar element abundances.
As emphasized by <cit.>, this should be a key goal for any future SPS models.
To summarize, as a consequence of the above considerations, we used the mass fraction in Fe, relative to solar, rather than the total mass fraction in metals, to select the appropriate metallicity BPASSv2 model when constructing the galaxy SEDs.
The effect of this choice on the UV SED is illustrated in Fig. <ref>, which shows the average composite BPASSv2-100bin spectra of all N=73 galaxies with log(M/M_⊙) ≥ 9.0, assuming both Z_*=Z_Fe and Z_*=Z_Tot, and will be discussed further in Section <ref>.
§.§ Photoionization models
Since the BPASSv2 models only account for the stellar component of each spectrum, we separately modelled the nebular contribution for each galaxy.
The nebular continuum is a result of free-free, free-bound and two-photon emission from H II regions in the galaxy.
To model the nebular continuum and emission spectrum we use the photoionization code Cloudy <cit.>.
We ran Cloudy assuming a plane-parallel geometry and considered the following four key input parameters: (i) the shape of the incident ionizing radiation field, (ii) the metallicity of the nebular gas, (iii) the electron density and (iv) the ionization parameter.
Of these, we directly set (i) and (ii) from the FiBY simulation data.
For the shape of the incident radiation field we input the BPASSv2 stellar continuum of each galaxy; thus the nebular spectrum is modelled by assuming the entire galaxy acts as a single H II region.
Following the discussion in Section <ref>, we used the UV-weighted alpha-element stellar metallicity to set the nebular abundances, scaling to the <cit.> solar abundance set.
This explicitly assumes that the gas phase metallicity traces the metallicity of the most recently formed stars.
To set appropriate values of electron density and ionization parameter at z=5 we used recent near-IR spectroscopic observations of star-forming galaxies at z≃2-3 which have illustrated that the typical values of these parameters are evolving from z=0 out to higher redshifts <cit.>.
For example, <cit.> have presented a detailed analysis of N∼ 380 galaxies at z∼2.3 and concluded that the typical value of electron density is ≃ 300 cm^-3 <cit.>, with ionization parameters in the range -3.1 < log(U) < -2.5, consistent with the best-fitting value of log(U) = -2.8 from <cit.> and again representing an increase on typical SDSS values <cit.>.
We made the assumption that the average z∼2.3 values of these parameters are appropriate to use for our z=5 galaxies based on the following reasoning.
Firstly, although the ranges of masses and SFRs of our sample are not representative of the galaxies from <cit.> (which have medians of 1.0 × 10^10 M_⊙ and 24 M_⊙yr^-1, factors of ∼ 40 and 20 larger respectively), the typical values of sSFR are within a factor of 2 (4.8 Gyr^-1 for our sample compared to 2.4 Gyr^-1), and one of the key results presented in <cit.> is that the sSFR is correlated with the degree of excitation and the evolution in optical line ratios they observe.
Unfortunately, no explicit relations between sSFR and either log(U) or n_e exist in the literature to our knowledge, and we do not attempt to second-guess the potential evolution of these parameters at z ≃ 5.
Secondly, the metallicities of our sample (both Fe/H and α/H) follow their best-fitting values at z ≃ 2 - 3 in the relevant mass range.
Given the similarities in sSFRs and metallicities, we made the assumption that the other physical conditions in H II regions at z=5 are also plausibly equivalent.
Therefore, we adopted an electron density of n_e = 300 cm^-3 and assumed log(U)=-2.8.
We briefly discuss, in Section <ref>, how our results are affected under the assumption that physical conditions in H II regions continue to evolve towards even more extreme values of these parameters at z ≳ 3.
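For concreteness, a schematic version of this photoionization set-up is sketched below as a Cloudy input deck written from Python. The command names follow recent Cloudy releases but exact syntax differs between versions, and the file names and abundance-scaling value are purely illustrative assumptions rather than the deck actually used:

deck = "\n".join([
    'table SED "galaxy_bpass.sed"',  # incident BPASSv2 stellar continuum
    'ionization parameter -2.8',     # log(U) adopted in the text
    'hden 2.48',                     # log n_H, approximating n_e = 300 cm^-3
    'metals 0.53 linear',            # illustrative Z_alpha/Z_sun scaling
    'iterate to convergence',
    'save continuum "galaxy.con" units angstroms',
])
with open('galaxy.in', 'w') as f:
    f.write(deck + '\n')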
The effect of adding the nebular continuum to the UV spectrum is illustrated in Fig. <ref> for a typical galaxy in our sample (M_* = 2.8 × 10^8 M_⊙; SFR = 1.3 M_⊙yr^-1; sSFR = 4.6 Gyr^-1; Z_Fe/Z_⊙ = 0.06).
In this particular example we show a BPASSv2-300bin model SED; the median nebular contribution over the wavelength range within which we measured β slopes (1268 < λ < 1950 Å) is ≃ 6 %, and the overall effect of the nebular continuum is to make the spectrum redder (i.e. increase the value of β).
We note that although nebular emission is included in our photoionization modelling, nebular emission lines have no effect on our method of constraining the form of the attenuation law, therefore in Fig. <ref> we do not show emission lines, since only the continuum is relevant in this context.
A detailed discussion of the effect of the nebular continuum on β across the whole sample is given in Section <ref>.
To summarize, we have modelled the nebular continuum of all galaxies in our sample by using the BPASSv2 stellar SEDs to define the shape of the incident spectrum, assumed the nebular metallicity to be equal to the UV-weighted total stellar metallicity of the galaxy, and used values for electron density and ionization parameter typical of star-forming galaxies at z≃ 2-3.
Therefore, for each galaxy, we have both the intrinsic stellar spectrum, and the combined stellar + nebular spectrum.
As we will discuss later in the paper, the pure stellar spectrum provides a useful limit for our study of plausible attenuation curves, since it represents the `maximally blue' SEDs of galaxies at z=5 from the FiBY simulations, based purely on their star-formation histories and metallicities.
§ INTRINSIC Β AND M_1500 DISTRIBUTIONS
As will be discussed in detail in Sections <ref> and <ref>, in order to constrain the form of the dust attenuation curve, we have compared our simulations to two observed properties of z=5 galaxies: (i) the UV continuum slope (β) and (ii) the volume density of galaxies as a function of their absolute magnitude at λ = 1500 Å (M_1500).
In this section we describe the intrinsic values we measured for these parameters from the synthetic SEDs, as well as discussing how the various different assumptions we have made (e.g. stellar metallicity, nebular modelling) systematically affect these observables.
§.§ UV continuum slope (β)
The UV continuum slope of a galaxy is defined as a power-law expressed as
f_λ∝λ^β
where β is the power-law index <cit.>.
In the absence of dust, the intrinsic UV continuum is sensitive to the age and metallicity of the stellar population, with older and more metal rich populations having redder continuum slopes.
§.§.§ Intrinsic stellar UV continuum
To measure β for our synthetic SEDs we performed a linear fit to logf_λ versus logλ over the wavelength range 1268 < λ < 2580 Å, masking out stellar and interstellar absorption features using the windows defined in <cit.>; we refer to this as a `spectroscopic' β measurement.
As we will describe in Section <ref>, we used β slopes of z=5 galaxies reported in <cit.> as our observational comparison.
These authors measured β by fitting a power-law to five photometric bands which sampled the rest-frame UV spectrum in the wavelength range λ≈ 1500 - 2600 Å.
To ensure consistency, we checked that our spectroscopic β measurements were consistent with the <cit.> photometric method.
We found that both methods are consistent for idealized data (see Fig. <ref>), with the spectroscopic β measurements systematically smaller by a median of Δβ = 0.03 across the whole sample.
For our sample, we chose to use the spectroscopically-measured β values rather than explicitly mimic observational methods.
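As an illustration of the spectroscopic measurement, a minimal implementation is sketched below; the window list reproduces the <cit.>-style continuum windows and should be checked against the original definitions before use:

import numpy as np

# Calzetti et al. (1994)-style continuum windows in Angstroms.
WINDOWS = [(1268, 1284), (1309, 1316), (1342, 1371), (1407, 1515),
           (1562, 1583), (1677, 1740), (1760, 1833), (1866, 1890),
           (1930, 1950), (2400, 2580)]

def measure_beta(wavelength, f_lambda):
    # Linear fit of log f_lambda vs log lambda inside the windows,
    # masking out stellar and interstellar absorption features.
    mask = np.zeros(wavelength.shape, dtype=bool)
    for lo, hi in WINDOWS:
        mask |= (wavelength >= lo) & (wavelength <= hi)
    slope, _ = np.polyfit(np.log10(wavelength[mask]),
                          np.log10(f_lambda[mask]), 1)
    return slope  # this is beta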
Fig. <ref> shows how the intrinsic stellar UV continuum (β_i) of our sample varies as a function of the UV-weighted age (defined as age weighted by the luminosity at 1500Å of each star particle) and the UV-weighted Fe-based metallicity (Z_Fe).
We note that the data in Fig. <ref> are taken from the SEDs generated with the BPASSv2-100bin models, however, although the normalization of β_i differs slightly, the trends are the same across all SPS models.
As expected, the value of β_i increases with increasing UV-weighted age and UV-weighted metallicity.
Moreover, we find that, at z=5, β_i is more strongly correlated with the UV-weighted age within the range 10 - 80 Myr probed by our sample, with the scatter in β_i at fixed UV-weighted age being approximately one third of the scatter at fixed metallicity.
β_i also correlates positively with mass-weighted age, although with a larger dispersion due to the mass-weighted age probing older stellar populations which contribute less to the UV SED.
For the BPASSv2-100bin model SEDs shown in Fig. <ref>, the median β_i across all galaxies is -2.60, for the other SPS models the medians are -2.55, -2.63 and -2.58 for BPASSv2-100, BPASSv2-300bin and BPASSv2-300 respectively (see Table <ref>).
Independent of SPS model, the dispersion of β_i values is σ_β_i≈ 0.08.
Given the correlation between metallicity and β_i shown in Fig. <ref>, it is clear that adopting Fe-based rather than total metallicities for generating stellar SEDs can have a strong impact on the shape of the UV continuum (see also Fig. <ref>).
We find that the offset between β_i, Fe and β_i, Tot is an increasing function of stellar mass, consistent with the trend shown in the right-hand panel of Fig. <ref>.
For the BPASSv2-100bin model the offset can be as large as δβ_i≈ 0.3 at the highest masses, with a median of 0.13; the median offsets across all other models are 0.02, 0.23 and 0.09 for BPASSv2-100, BPASSv2-300bin and BPASSv2-300 respectively.
This emphasizes the importance, in future work (including SED fitting to observations), of adopting the correct stellar metallicity when fitting the UV continuum, which, as we have previously discussed, is not equal to the total mass-weighted metallicity in the case of variable α/Fe ratios.
§.§.§ Effect of the nebular continuum
As illustrated in Figs. <ref> and <ref>, the nebular continuum has a non-negligible effect on the UV continuum shape of individual galaxies.
Here we discuss the effect of the nebular continuum on β_i across our full sample.
For the remainder of this paper, β refers to the UV continuum slope of the galaxies including the nebular continuum.
Fig. <ref> shows the relationship between β and β_i across all four BPASSv2 SPS models.
The addition of the nebular continuum acts to redden the intrinsic stellar slopes (i.e. leads to larger values of measured β).
The effect is strongest for the galaxies with the steepest (or bluest) stellar UV continuum slopes, or, equivalently, the galaxies with the lowest metallicities and youngest UV-weighted ages.
This is unsurprising since young, low metallicity stellar populations generate harder ionizing spectra and hence a stronger nebular continuum.
For the bluest stellar continua the effect can be as large as Δβ = 0.4, but decreases to Δβ≈ 0.05 at β_i≈ -2.35, with medians of Δβ≈ 0.15 - 0.25 across all models.
In other words, the effect of the nebular continuum on β is of the order ≈ 2 - 15 %, with an average of ≈ 10 %.
It is also evident from Fig. <ref> that, while the difference between binary and single-star models is minor, Δβ is systematically larger for the BPASSv2-300 models.
At its maximum, the magnitude of this effect represents a difference in Δβ, at fixed β_i, of ≈ 0.05; interestingly, this is large enough to mean that while the median β_i is bluer for the BPASSv2-300 compared to the BPASSv2-100 models, the situation is reversed for the β medians (see Table <ref>).
Finally, across our sample, including the nebular continuum has the effect of reducing the scatter in intrinsic β values by a factor ≈ 2 (see Table <ref>) and significantly flattens the relations between β and metallicity and UV age (Fig. <ref>).
The median β across all SPS models is ≈ -2.40, and we find that the minimum value of β in the case of no dust attenuation at z=5 is β_min = -2.50.
This theoretical dust-free β limit is consistent with the work of <cit.> and <cit.> at z = 6 - 8 and, as we will return to later in the paper, significantly different to the value of β_min = -2.23 assumed for the <cit.> IRX -β relation.
§.§ UV continuum magnitude
The UV continuum magnitude (M_1500,i) is primarily sensitive to the star-formation history of a galaxy over the last 10 - 100 Myr <cit.>.
As mentioned in Section <ref>, we calculated the intrinsic absolute UV magnitude (M_1500,i) by integrating the rest-frame SEDs over a top-hat filter of width 100 Å at a central wavelength of 1500 Å.
Since our sample was defined by requiring an intrinsic stellar magnitude (i.e. dust free, no nebular contribution) M_1500,i≤ -18.0, the faintest galaxies across all SPS models have roughly this magnitude (see Section <ref>); the (median, maximum) M_1500,i across all models are (-18.88, -23.25), (-18.77, -23.16), (-18.99, -23.36) and (-18.87, -23.26) for the BPASSv2-100bin, BPASSv2-100, BPASSv2-300bin and BPASSv2-300 models respectively.
In the ≈ 3.7 × 10^4 Mpc^3 volume, we do not find any galaxies intrinsically brighter than M_1500,i≈ -23.4.
As with the β values, the choice of stellar metallicity also affects the intrinsic stellar M_1500,i distributions, in the sense that a galaxy with a low SPS model metallicity will be brighter in the UV, due to reduced photospheric line blanketing.
Again the effect is strongest for the binary models and models with a higher mass IMF cutoff, but not strong enough to affect the results presented in this paper; for the BPASSv2-300 models, the median offset in M_1500,i is 0.23 mag.
Finally, the effect of the nebular continuum on M_1500,i is also small; across all models, adding nebular continuum increases the brightness of the SEDs by ≲ 0.1 mag.
§ MODELLING THE DUST ATTENUATION
Having defined the intrinsic properties of simulated galaxies at z=5, we now turn to exploring models for dust attenuation, which should map these intrinsic properties to their observed values.
In particular, we are interested in whether deviations from a Calzetti-like attenuation curve are required to achieve this match, since evidence (both observed and simulated) for a steeper attenuation curve, more like the SMC extinction curve, at high redshifts has been reported in the recent literature <cit.>.
To do this, we considered two simple models for the dust attenuation: (i) a total birth-cloud extinction model (TEx) and (ii) a <cit.>-like model (CF2000) which we describe in detail below.
§.§ TEx
The total birth-cloud extinction model (TEx) assumes that, below a critical age, star particles are completely embedded in their birth clouds and suffer total extinction, while at larger ages either their birth clouds have been photo-evaporated by OB star associations, or the OB associations have drifted away or been blown-out of the birth-cloud, and hence the star particles suffer only the global ambient ISM attenuation of the galaxy.
Local estimates of the timescale for molecular cloud destruction are of the order ∼ 10 - 30 Myr <cit.>, values which are in good agreement with current predictions from hydrodynamical simulations <cit.>; while estimates of the timescales for the ejection of OB associations from their birth clouds are of the order 1 - 3 Myr <cit.>, consistent with the youngest ages of galactic OB associations <cit.>.
Given these observations, we chose to model the following six birth cloud age thresholds t_BC = 0, 1, 3, 5, 10 and 15 Myr.
The upper limit of 15 Myr is chosen to be representative of the timescale for molecular cloud dispersion, and to mirror the work of <cit.> who find, using a similar model, that applying this age threshold successfully reproduces the shape of the UV spectrum of the z=2.73 Lyman-break galaxy 1512-cB58 <cit.>.
The reddening (in magnitudes) as a function of wavelength in this model can be written as,
A(λ) =
∞ if t ≤ t_BC,
ϕλ^n if t > t_BC,
where ϕ is a linear function of log(M/M_⊙) given by:
ϕ = a_1 log_10(M/M_⊙) + a_0.
In this prescription, the overall normalization (or `amount' of dust) is a linearly increasing function of the logarithm of stellar mass.
It is well established in the local Universe that effective dust optical depth increases with stellar mass <cit.>; we adopt a linear model, both for simplicity, and because there is no strong evidence at high redshift to favour a more complex functional form.
We modelled the wavelength dependence of the attenuation as a power-law with exponent n; this simple parameterization is a good approximation to many observationally derived attenuation curves which do not include a 2175 Å `bump', as illustrated in Fig. <ref>.
It is worth mentioning that the results of this paper are not dependent of the presence or size of the 2175 Å `bump' feature, since the <cit.> windows are selected to mask out the 2175 Å region, and <cit.> have demonstrated that their observationally derived β values are also insensitive to this feature.
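A minimal sketch of how the TEx prescription can be applied to the star-particle SEDs is given below; the default parameter values anticipate the recommended fits derived later in the paper (Section <ref>) and should be treated here as assumptions:

import numpy as np

def apply_tex(wavelength, particle_seds, particle_ages, log_mass,
              t_bc=1.0e6, a1=0.5, a0=-3.3, n=-0.55):
    # Particles younger than t_bc (yr) are fully extinguished; older
    # particles see a power-law ISM attenuation normalized at 1500 A.
    a_1500 = max(a1 * log_mass + a0, 0.0)          # magnitudes
    a_lambda = a_1500 * (wavelength / 1500.0) ** n
    attenuated = np.zeros_like(wavelength, dtype=float)
    for age, sed in zip(particle_ages, particle_seds):
        if age > t_bc:                             # escaped the birth cloud
            attenuated += sed * 10.0 ** (-0.4 * a_lambda)
    return attenuated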
§.§ CF2000
The CF2000 model follows the prescription of <cit.>, with the addition of a mass-dependent normalization (ϕ) as above.
In this model the wavelength-dependent optical depth experienced by a star particle of age (t) is given by,
A(t, λ) =
ϕλ^n if t ≤ t_BC,
μϕλ^n if t > t_BC,
where μ defines the fraction of the total dust optical depth contributed by the ambient ISM, n is the exponent of the reddening curve, ϕ is the overall normalization and t_BC is the age of transition between birth cloud and ambient ISM optical depth.
Again we parameterized the normalization (or `amount' of dust) ϕ as a linear function of the logarithm of stellar mass as in Eq. <ref>.
The difference here is that, instead of suffering total extinction, stars within their birth cloud suffer additional attenuation with respect to stars in the ambient ISM of a galaxy.
Using a sample of local starburst galaxies, <cit.> estimate μ≈ 1/3, n ≈ -0.7 and t_BC≈ 10 Myr, however, as will be described in more detail below, we kept all three parameters free when attempting to fit our simulation data to the observed LF and CMR.
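The corresponding CF2000-style two-component optical depth can be sketched as follows, with the <cit.> reference values (μ ≈ 1/3, n ≈ -0.7, t_BC ≈ 10 Myr) used as illustrative defaults:

def cf2000_attenuation(wavelength, age, log_mass,
                       t_bc=1.0e7, mu=1.0 / 3.0, n=-0.7, a1=0.5, a0=-3.3):
    # Birth-cloud particles (age <= t_bc, in yr) see the full dust column;
    # older particles see only the fraction mu contributed by the ambient ISM.
    a_1500 = max(a1 * log_mass + a0, 0.0)
    a_lambda = a_1500 * (wavelength / 1500.0) ** n
    return a_lambda if age <= t_bc else mu * a_lambda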
§ FITTING TO THE LF AND CMR
In this section we present the results of using these two dust models to fit to the observed UV galaxy luminosity function and CMR at z=5.
The focus of this section is to constrain the dust model parameters (specifically the reddening slope n) before, in Section <ref>, we discuss how the predictions of our best-fitting dust models compare to the observed IR properties of high-redshift galaxies.
§.§ TEx
For the TEx model, the mass normalization ϕ was derived by directly fitting to the z=5 luminosity function of <cit.>.
The method is illustrated in Fig. <ref>: for each birth-cloud age and SPS model, the sample was split into six bins of M_1500, the number density of galaxies in each bin was determined, and the value of A_1500 required to match the luminosity function calculated.
A value of A_1500 = 0 was assigned to bins in which the number density fell below the observed relation.
We were then able to fit the relationship between A_1500 and the median stellar mass in each bin.
The inset panels of Fig. <ref> show linear fits to A_1500 versus stellar mass (bins with A_1500 = 0 were excluded from the fitting).
Using this relation we can then rewrite Equation <ref> for t > t_BC as,
A(λ, M) = A_1500(M) (λ/1500)^n,
where M ≡ log(M/M_⊙), leaving the slope of the attenuation law (n) as the only model parameter to be fitted.
To constrain the slope of the attenuation law, the CMR of our sample was compared to the observed CMR of <cit.> by varying the value of n, applying Equation <ref>, and re-calculating β and M_1500.
For a given mass normalization, the change in β can be expressed as,
Δβ = β_obs - β_int = 1.297 (A_1268 - A_2580),
where A_1268 and A_2580 are the attenuations at the wavelength limits of the <cit.> windows.
The value of n was varied over the range -1.50 < n < 0.00, in steps of 0.01, to encompass the range of n values consistent with observed attenuation/extinction curves (see Fig. <ref>).
For each value of n, the sample was binned into six bins of M_1500, and the total χ^2 was then computed,
χ^2 = ∑_i(model(i) - data(i)/σ(i))^2,
where the summation is over all bins and σ(i)=0.1 is taken as the intrinsic β scatter at the faint end of the CMR estimated by <cit.>, and is representative of the β distributions given in Table <ref>.
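In code, this brute-force fit reduces to a one-dimensional grid search; a minimal sketch is shown below, where redden_and_bin is a hypothetical helper returning the six binned model β values for a trial slope n:

import numpy as np

def fit_attenuation_slope(redden_and_bin, beta_obs, sigma=0.1):
    # Scan -1.50 < n < 0.00 in steps of 0.01 and minimize the chi^2
    # of the binned model CMR against the observed relation.
    n_grid = np.arange(-1.50, 0.005, 0.01)
    chi2 = np.array([np.sum(((redden_and_bin(n) - beta_obs) / sigma) ** 2)
                     for n in n_grid])
    return n_grid[np.argmin(chi2)], chi2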
The results of this fitting procedure are illustrated in Fig. <ref> and the best-fitting n values and their corresponding χ^2 values are listed in Table <ref>.
Figs. <ref> and <ref> show data in which the nebular continuum correction as described in Section <ref> has been applied; however, for completeness, in Table <ref> we also report the best-fitting values without applying the nebular correction; this provides a useful comparison for the limiting case of no nebular contribution (i.e. a 100% escape fraction, and the bluest possible UV slopes).
The results show that, across all SPS models, TEx dust models with birth-cloud ages > 5 Myr, are ruled out at >2σ confidence (χ^2 > χ^2_min + 4).
Combining this with the luminosity functions shown in Fig. <ref>, we can further constrain the birth-cloud age to be ≲ 3 Myr, since at ages ≥ 3 Myr the intrinsic volume density of galaxies at M_1500 > -19.0 falls below the <cit.> z=5 LF.
Formally, across all SPS models, the best-fitting solutions correspond to either the 0 or 1 Myr models, with average best-fitting values of n_neb=-0.46 for the nebular-corrected spectra, and n_stellar=-0.59 for the stellar continuum only models.
In general, the nebular-corrected spectra require greyer reddening slopes because the UV continuum slopes are already reddened with respect to the intrinsic stellar spectrum; however, this distinction gradually disappears at t_BC≥ 10, as the strength of the nebular continuum decreases, and the best-fitting n values converge.
Finally, we find no distinction, even at the 1σ level (Δχ^2 = 1), between the different SPS models.
Taken all together, the results suggest a birth-cloud age in the range 0 ≤ t_BC < 3 Myr, and an attenuation curve slope in the range -0.6 ≤ n ≤ -0.4.
It is clear that, for the TEx model, `greyer' reddening laws with Calzetti-like slopes are favoured over steeper SMC-like slopes.
From Table <ref> it can be seen that, in general, the steepness of the best-fitting reddening slope increases with increasing birth cloud age; however, even at t_BC = 10 - 15 Myr the values are in the range -0.8 ≲ n ≲ -1.0, still greyer than the SMC extinction-law value of n_SMC≈ -1.24.
The increase in n with t_BC occurs because, as the birth cloud age increases, the intrinsic SEDs become fainter in M_1500, thus a smaller A_1500 is required to fit to the luminosity function, and a steeper n is required to simultaneously redden the SEDs to match the CMR.
Although this effect is partially compensated for by the intrinsic SEDs becoming redder at larger t_BC, the compensation is not sufficient to keep the reddening law as grey as a Calzetti law.
It is possible that increasing the birth cloud age further would lead to SMC-like values of n, however this is really a moot point since, at t_BC≥ 3 Myr, the data become incompatible with the observed LF.
Indeed, an acceptable match to the observations can only be achieved if the UV output from stars ≲ 3 Myr old is visible in the integrated spectra of galaxies; therefore, OB star associations should be able to drift away from, or photo-evaporate, their parent molecular clouds within this timescale.
Such a scenario is consistent with the youngest ages of OB star associations observed in the Milky Way <cit.>.
In summary, comparing the results of the TEx model to the observed LF and CMR at z=5 leads us to conclude that: (1) no satisfactory match is achieved unless the UV output from stars ≲ 3 Myr old is visible in the integrated UV spectra of galaxies; (2) the models show no evidence for a departure from Calzetti-like attenuation laws; and (3) there is no strong preference for binary or single-star BPASSv2 models.
§.§ CF2000
To fit the CF2000 model we applied a more detailed analysis, performing a χ^2 minimization across all parameters as listed in Table <ref>.
This allowed us to get a better estimate of the formal error on the attenuation curve slope by marginalizing over all free parameters.
The ranges of parameter values were chosen after some experimentation using coarser step sizes.
For each combination of parameter value, we calculated the optical depth as a function of wavelength for the birth cloud (t ≤ t_BC) and ambient ISM (t > t_BC) as given by Equation <ref>, and reddened the spectrum accordingly.
We binned the reddened sample into six bins of M_1500 and performed a χ^2 fit to both the CMR and LF.
For the fit to the luminosity function, we defined the error in each bin as the Poisson error on the number counts in that bin.
The best-fitting model parameters were determined via χ^2 minimization, where the χ^2 value for each set of model parameters was taken as the sum of the CMR and LF values (i.e. χ^2=χ_LF^2+χ_CMR^2).
From the resulting χ^2 we constructed marginalized probability density functions for each parameter.
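The marginalization itself is straightforward; assuming flat priors over the grid, a minimal sketch is:

import numpy as np

def marginalized_pdf(chi2_grid, keep_axis):
    # chi2_grid: chi^2 over the (n, mu, t_BC, dA/dM) hypercube.
    # Convert to a likelihood and sum over all axes except keep_axis.
    likelihood = np.exp(-0.5 * (chi2_grid - chi2_grid.min()))
    other_axes = tuple(i for i in range(chi2_grid.ndim) if i != keep_axis)
    pdf = likelihood.sum(axis=other_axes)
    return pdf / pdf.sum()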
Fig. <ref> shows the LF and CMR for the model parameters at the minimum χ^2 (see Table <ref> for the corresponding parameter values).
The figure shows the case including nebular continuum emission but the values for the case of stellar-only SEDs (i.e. maximally blue) are also given in the table.
From Table <ref> it can be seen that acceptable fits can be achieved for both stellar-only and stellar + nebular models.
The insets in Fig. <ref> show the normalized probability density functions for the slope of the reddening curve, p(n).
Reassuringly, the p(n) curves are consistent with the formally best-fitting n values from our χ^2 grid.
The formally best-fitting reddening slopes are again slightly greyer than the Calzetti law for models including a nebular continuum; however, within the estimated 1σ uncertainties of all models, the range of acceptable n values is -0.7 ≤ n ≤ -0.3, consistent with both <cit.> and <cit.>, but incompatible with a steep reddening law resembling the SMC extinction curve.
Furthermore, the results are fully consistent with the acceptable range of n values (-0.6 ≤ n ≤ -0.4) obtained from the simpler TEx model.
Overall, across the TEx and CF2000 models, the majority of formal best-fitting values of n are marginally greyer than a Calzetti law.
This is not unprecedented, since evidence for reddening curves shallower than Calzetti has been found in some direct observational studies of normal star forming galaxies at z≃2 <cit.>.
However, given the SPS and nebular continuum modelling uncertainties, it is not possible to say definitively that a shallower law is favoured by our data.
The combined analysis does, however, definitively rule out attenuation curves steeper than n ≲ -0.7 (-0.9) at 1σ (2σ) (see Fig <ref>).
Crucially, we find no evidence for an attenuation curve in the UV as steep as the SMC extinction curve (n=-1.24); we will return to this issue in Section <ref>.
§.§.§ Other CF2000 parameters
Although the focus of this analysis is on the slope of the reddening curve, the constraints placed on the other CF2000 model parameters were also investigated.
We found best-fitting μ values of either 0.3 or 0.4 depending on the SPS and nebular assumptions, and values within 1σ uncertainties in the range 0.1 < μ < 0.8.
This is consistent with the original <cit.> results of μ≈ 1/3 with a large scatter.
For the birth cloud age (t_BC) we find formally best-fitting values in the range 11-13 Myr; however, statistically acceptable solutions exist at any value of t_BC in the full parameter range (0-15 Myr), i.e. the birth-cloud age is not constrained by fitting to the LF and CMR within the CF2000 parameterization.
Finally, we find values of optical depth versus mass slopes (dA_1600/dM) in the range 0.2 < dA_1600/dM < 0.7, with a best-fitting value of dA_1600/dM≃ 0.5.
Of course, there is a degeneracy between these parameters since they all directly affect the change in M_1500 which is mainly constrained by fitting to the LF.
§.§ Recommended dust model parameters
Despite the relatively large range of parameter values over which acceptable fits can be found, we find that a <cit.>-type model, with parameter values not too dissimilar from the original values quoted, provides a simple and adequate framework for describing the dust attenuation law at z ≃ 5 from the FiBY simulation.
In the interest of future work, we provide a list of our recommended values based on the above analysis in Table <ref>.
Given that a Calzetti-like curve is fully consistent within the 1σ uncertainties of the model, we recommend the power-law index n=-0.55 for consistency with this commonly adopted attenuation curve.
However, as this slope is only fit to the Calzetti curve at λ < 2600 Å (Fig. <ref>), for attenuation curves extending from UV to optical we recommend using the <cit.> relation explicitly or, given its similarity, the <cit.> curve.
In addition to the values listed in Table <ref>, we recommend the following relation between A_1500 and log(M/M_⊙), which provides the normalization of the attenuation curve:
A_1500 = 0.5 ×log(M/M_⊙) - 3.3.
This relation implies A_1500=0 for log(M/M_⊙)≤ 6.6, and that, for an order of magnitude increase in mass, dust attenuation at 1500Å increases by 0.5 mags.
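As a worked example, a galaxy with log(M/M_⊙) = 9.6 (comparable to the ALMA-detected source discussed in Section <ref>) would have A_1500 = 0.5 × 9.6 - 3.3 = 1.5 mag under this relation.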
§ DUST MODEL PREDICTIONS
In this section we discuss the infrared (IR) predictions of our best-fitting dust models, and how these compare to the latest ALMA observations at z ≃ 5.
Since we have derived the dust attenuation models by comparing the simulated SEDs to observed UV properties, it is useful to ask whether these same models are consistent with the, admittedly sparse, IR data at the same redshift, which trace the dust emission.
In addition, we can use our models to make predictions for future IR surveys.
The total IR luminosity () of each SED was calculated by applying the best-fitting dust model, subtracting the reddened SED from the intrinsic SED, and integrating the difference across all wavelengths i.e.:
L_IR = ∫_912^∞ (l_λ,i - l_λ) dλ,
where l_λ, i is the intrinsic SED, and l_λ is the reddened SED.
In practice, this integral converges by an upper wavelength limit of λ≈ 10,000 Å.
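Numerically, this is a single trapezoidal integration of the absorbed luminosity; a minimal sketch:

import numpy as np

def total_ir_luminosity(wavelength, l_intrinsic, l_reddened, wl_max=1.0e4):
    # Integrate the dust-absorbed luminosity from 912 A up to ~10,000 A,
    # beyond which the integrand is negligible (see text).
    sel = (wavelength >= 912.0) & (wavelength <= wl_max)
    return np.trapz(l_intrinsic[sel] - l_reddened[sel], wavelength[sel])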
Throughout this section we include the IR predictions of each SPS model for both the best-fitting TEx and CF2000 dust model, and for both the nebular continuum and stellar continuum only cases (i.e. a total of 16 models encompassing the various SED and dust prescriptions).
§.§ ALMA HUDF z=5 source
Recently, <cit.> (D17) have presented a 1.3-mm ALMA mosaic covering the full 4.5 arcmin^2 of the Hubble Ultra-Deep Field (HUDF).
Their final sample consists of 16 sources with point-source flux densities S_1.3 > 120 μJy at a mean redshift ⟨ z ⟩ = 2.15.
Interestingly, one source in their sample (UDF12) has a spectroscopic redshift of z=5.00 <cit.>, and therefore provides a useful comparison to the number counts expected over a similar area from our simulations.
UDF12 has a stellar mass log(M/M_⊙) = 9.6±0.12, and an sSFR = 9.29±4.80 Gyr^-1, placing it in the high-mass and high-sSFR tail of our sample (see Fig. <ref>).
§.§.§ 1.3mm number counts
To derive S_1.3, L_IR was first converted into an SFR_IR using the same relation adopted by D17 <cit.>, and S_1.3 was then calculated using the conversion given in D17: S_1.3 (in μJy) ≈ 3.3 × SFR_IR (in M_⊙yr^-1).
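This conversion chain can be sketched as follows; the L_IR-to-SFR calibration constant kappa is a placeholder of order 10^-10 M_⊙yr^-1 per L_⊙ for standard calibrations, and is not necessarily the exact relation adopted by D17:

def flux_density_1p3mm(l_ir_lsun, kappa=1.7e-10):
    # kappa: assumed linear L_IR -> SFR calibration (Msun/yr per Lsun).
    sfr_ir = kappa * l_ir_lsun
    return 3.3 * sfr_ir  # S_1.3 in microJy, using the D17 scaling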
The number counts as a function of S_1.3 are shown by the histograms in Fig. <ref>.
Depending on model assumptions, we find that between N ≈ 3 - 10 galaxies have flux densities above the threshold S_1.3 > 120 μJy within the ≈ 3.7 × 10^4 Mpc^3 volume of our simulation, or equivalently we find a volume density of N(S_1.3 > 120 μJy) ≈ (1-3) × 10^-4 Mpc^-3.
The number counts are larger for the models which include a nebular contribution due to the additional luminosity provided by the nebular continuum and emission lines in these SEDs, leading to an increase in the output .
Over the 4.5 arcmin^2 HUDF area, the comoving volume between 4.5<z<5.5 is ≈ 1.3 × 10^4 Mpc^3, therefore, to a rough approximation, the simulation volume is ≈ 3 × the HUDF volume at z=5.
Scaling our number counts by this factor results in a prediction of N ≈ 1 - 3 galaxies with S_1.3 > 120 μJy within the HUDF ALMA mosaic, consistent with the observations.
Clearly, a comparison to number counts over such a small area of sky is hindered by cosmic variance, nevertheless these numbers provide some reassurance that we are not seriously under or over-predicting infrared flux densities.
§.§.§ 1.3mm flux density versus UV continuum slope
In Fig. <ref> we directly compare observed quantities tracing the absolute attenuation (S_1.3) and dust law (β) of UDF12.
UDF12 has a 1.3mm flux density of S_1.3 = 154±40 μJy <cit.> and a UV continuum slope of β=-1.70±0.12, where β was measured by fitting a power-law to the I_814, z_850, Y_105, J_125 and H_160 photometric data points, consistent with the value obtained using a standard SED fitting procedure in which the intrinsic SEDs are modelled as power-laws with varying slopes (β=-1.71).
Fig. <ref> shows that the S_1.3 and β values extracted from the best-fitting dust models are consistent with the observation.
Although only ≈ 1 % of our sample have comparable flux densities (S_1.3≥ 120 μJy), all fall within < 2σ of UDF12 and the majority fall within < 1σ.
Fig. <ref> also demonstrates that, for the majority of models, the flux densities of the S_1.3≥ 120 μJy sources are biased high with respect to the average S_1.3 of galaxies with similar β slopes.
There is no significant difference between the TEx and CF2000 models, or between the various SPS assumptions; UDF12 is consistent with the 1σ scatter on the running averages of all models.
In summary, the simple best-fitting dust models with a Calzetti-like reddening law slope (-0.7 ≤ n ≤ -0.3) derived from UV observations are simultaneously compatible with the infrared number counts and properties of sources in the ALMA HUDF image.
In other words, for at least this z≃5 ALMA source, the models account for both the absolute attenuation and reddening across the UV-IR wavelength baseline.
However, although this is clearly encouraging, larger sample sizes at z>5 are needed in order to perform a more robust comparison.
§.§ The z=5 IRX-β relation
As discussed in the introduction, the <cit.> (M99) IRX-β relation, and the underlying, implicit, A_UV-β relation, has been commonly used to dust correct UV data at high redshifts, and has recently been used to argue that reddening at z>5 follows a SMC extinction-like curve, rather than a Calzetti curve <cit.>.
In this section we explore the IRX-β, and underlying A_UV-β, relations implied by our best-fitting models.
§.§.§ IRX -β relation
<cit.> define IRX as the ratio of far infrared flux (F_FIR) to flux at 1600Å (F_1600), where F_1600 is defined as a generalized flux of the form F_λ=λ f_λ, and f_λ is the flux density per wavelength interval.
In their derivation of IRX_1600, they include a term (BC(FIR)_Dust) to convert the total bolometric IR flux to a FIR flux.
In this analysis we work with luminosities, rather than fluxes, and ignore this BC(FIR)_Dust correction term since we can calculate the total bolometric IR luminosities of our SEDs.
In this case IRX_1600 can be written as
IRX_1600 ≡ L_IR/L_1600 = (10^0.4A_1600 - 1) × ∫_912^∞ l_λ',i dλ' / L_1600,i.
Here the first term gives the fraction of L_1600,i absorbed by dust and A_1600 is the attenuation at 1600Å.
For a given dust model, A_1600 was calculated by differencing the magnitudes at 1600Å of the intrinsic SED and the attenuated SED, where the magnitudes were calculated using a top-hat filter centered on 1600Å of width 100Å.
The second term is the ratio of the maximum amount of luminosity available for dust heating (l_λ',i is the unattenuated luminosity of the SED) divided by the intrinsic 1600Å luminosity.
Assuming the second term is a constant, this equation implies a direct transformation between the attenuation at 1600Å and IRX.
Then, since for a given attenuation curve the attenuation at 1600Å is related to the observed β slope, the position of galaxies in the IRX-β plane can be used to infer the properties of the attenuation curve <cit.>, although one must be wary of the crucial dependence on the intrinsic β slopes as we will discuss below.
In <cit.>, the second term in the equation, denoted BC(1600)_*, was calculated using 100 Myr duration constant star-formation rate SPS models from <cit.>.
They found 1.56 ≤BC(1600)_* ≤ 1.76, adopting a final value BC(1600)_* = 1.66 ± 0.15.
Other authors have adopted the value of 1.75 ± 0.25 derived later by <cit.> <cit.>.
It is worth comparing this to the BC(1600)_* values of our SEDs, across all SPS models.
For the range of BPASSv2 models adopted in this paper, the median BC(1600)_* across all SEDs was calculated for the stellar + nebular and stellar-only cases.
For the models including nebular continuum we find 1.70 ≤BC(1600)_* ≤ 1.78, and for stellar continuum only models we find 1.42 ≤BC(1600)_* ≤ 1.52.
The full range of values is consistent with the original derivation of <cit.>, but emphasizes how large an effect the nebular continuum can have.
Interestingly, the BC(1600)_* values derived from SEDs including nebular continuum are in much better agreement with previous estimates <cit.>.
For the remainder of this paper we adopt BC(1600)_* = 1.75.
The IRX equation then simply becomes:
IRX_1600 = 1.75 (10^0.4A_1600-1),
and the relationship between IRX and β becomes dependent only on the A_1600 - β relation.
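Under these assumptions, the mapping from β to IRX_1600 is a two-line calculation; the sketch below takes a linear A_1600-β relation as input, with defaults anticipating the stellar + nebular fit derived in Section <ref>:

def irx_from_beta(beta, slope=2.10, beta_i=-2.52, bc_1600=1.75):
    # A_1600 = slope * (beta - beta_i), clipped at zero for beta bluer
    # than the intrinsic slope; IRX = BC(1600) * (10^(0.4 A_1600) - 1).
    a_1600 = max(slope * (beta - beta_i), 0.0)
    return bc_1600 * (10.0 ** (0.4 * a_1600) - 1.0)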
§.§.§ A_1600-β relation
If one assumes that all sources have the same intrinsic UV slopes (i.e. a constant β_i), then for a given reddening law slope (n) and a simple homogeneous star/dust mixture geometry (e.g. similar to the TEx model), there is a simple mapping between A_1600 and β given by combining equations <ref> and <ref>.
For a Calzetti-like reddening law (n=-0.55) this becomes,
A_1600, Calzetti = 2.09 (β - β_i),
whereas for an SMC-like law (n=-1.24) the slope of the relation is much shallower,
A_1600, SMC = 0.99 (β - β_i).
However, if there is an underlying correlation between β_i and A_1600 (i.e. β_i is not assumed to be constant across the population), then these relations do not hold.
Indeed, we find that more massive galaxies, with older UV-weighted ages, have larger values of both β_i and A_1600, such that there is a general β_i - A_1600 correlation in our sample.
This correlation is much stronger for the stellar-only models, since adding nebular continuum significantly reduces the range of β_i values (see Table <ref>).
The situation is further complicated for `clumpy' geometries like CF2000, since for these models Equation <ref> does not apply, and so simple relations like equations <ref> and <ref> above cannot be derived.
The upshot of all this is that the slope of the A_1600-β relation, derived from our models, does not simply map to the power-law slope of the reddening law as described above.
Furthermore, the values of β_i in the resulting fits are representative of the minimum values of β_i of the SEDs, rather than the median value across the population.
Nevertheless, these equations are good approximations in the case of a small scatter in intrinsic UV slopes.
§.§.§ IRX - β for the best-fitting dust models
We derived A_1600-β and IRX - β relations for our sample for both the stellar + nebular and stellar-only cases; the results are shown in Figs. <ref> and <ref>.
For each case we performed a linear least-squares fit to the best-fitting β and A_1600 values from our models (combining the data across both dust models and all SPS models), and substituted the resulting fit into Equation <ref> to give the corresponding IRX -β curve.
The best-fitting A_1600-β relations are as follows (see Fig. <ref>), for the stellar + nebular case:
A_1600 = 2.10 (β + 2.52),
and for the stellar-only case:
A_1600 = 1.28 (β + 2.83).
In both cases, the scatter about the relations is σ=0.14.
The relation for the stellar + nebular case is virtually identical to assuming a Calzetti attenuation law for sources with an intrinsic β_i=-2.52 (see Equation <ref>).
In the stellar-only case, the slope of the relation is significantly shallower; this is a result both of the steeper best-fitting attenuation curves and of the stronger correlation between β_i and the best-fitting A_1600.
The effect of the nebular continuum is substantial, up to ΔA_1600≈ 0.5 at the extreme ends of β values.
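The linear fits quoted above are ordinary least-squares fits of the best-fitting A_1600 values against β; a minimal sketch of the procedure (with mock arrays standing in for the model grid, so the numbers below are illustrative only) is:

import numpy as np

rng = np.random.default_rng(0)
beta = rng.uniform(-2.6, -1.4, 498)                      # mock best-fitting UV slopes
A1600 = 2.10 * (beta + 2.52) + rng.normal(0.0, 0.14, beta.size)

slope, intercept = np.polyfit(beta, A1600, 1)
offset = intercept / slope     # A_1600 = slope (beta + offset), i.e. beta_i = -offset
scatter = np.std(A1600 - (slope * beta + intercept))
print(slope, offset, scatter)  # recovers ~2.10, ~2.52, ~0.14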
The scatter of σ=0.14 around the A_1600-β relation derived above only represents the scatter about the best-fitting models; to obtain a more robust estimate from our models we re-derived A_1600-β based on the 1σ limits on n (-0.7 < n < -0.3; see Fig. <ref> and Table <ref>).
Taking these limits into account, we estimate the following best-fitting A_1600-β relation for the stellar + nebular case,
A_1600 = 2.10^+1.8_-0.3(β + 2.52).
In reality, a nebular continuum contribution is expected in star-forming galaxies and, though it will likely vary for individual galaxies, we recommend Equation <ref> as the more reliable estimate of the average A_1600-β relationship at z=5.
Furthermore, this equation is reassuringly similar to observational constraints based on samples of Lyman-break galaxies at z ∼ 3 - 6, especially in terms of the intrinsic UV slopes at these redshifts <cit.>.
One immediate consequence of this is that dust corrections to the UV, based on the <cit.> IRX -β relation, will underestimate A_1600 for an observed β by a median of ≈ 0.6 across the range of observed β values -2.5 < β < -1.6; since ΔA_1600 ≈ 0.6 corresponds to a flux factor of 10^{0.4 × 0.6} ≈ 1.7, this is equivalent to an underestimation of the resulting star-formation rate of a factor ≈ 2.
Finally, in Fig. <ref> we show the predicted position of z ≃ 5 galaxies in the IRX-β plane along with a variety of data from the literature at z>3 <cit.>.
It can be seen that the best-fitting line (using Eq. <ref>) is roughly consistent with the shape of the original <cit.> curve with a shift to bluer intrinsic UV slopes of Δβ_i≃ - 0.2.
Again, this is fully consistent with typical z≃5 star-forming galaxies experiencing, on average, Calzetti-like attenuation, similar to local star-forming galaxies, yet having bluer intrinsic UV slopes based on the typically rising star-formation histories at these redshifts.
It can be seen from Fig. <ref> that our predictions are in agreement with some data at high redshift <cit.>, and again are consistent with the ALMA HUDF <cit.>.
However, some tension clearly exists with other recent ALMA observations as we explore in more detail below.
§.§.§ IRX-β discrepancy with <cit.>
It is clear from Fig. <ref> that there is a significant discrepancy between the predictions of our simulations and the observed positions of the z≈5 galaxies from <cit.> in the IRX -β plane <cit.>.
As discussed previously, under the assumption that the BC(1600)_* factor is constant (1.75), then the position of galaxies in the IRX -β plane is dependent, to first order, only on (i) the slope of the reddening curve and (ii) the intrinsic UV continuum slopes (β_i).
Therefore, in an attempt to explain this, it is worth asking, given the above discussion on IRX -β, what type of reddening law, and what intrinsic properties of galaxies, are required to find relations compatible with the <cit.> data?
To do this we derived IRX -β curves for two simple scenarios: firstly, the A_1600-β relation was calculated for the <cit.> SMC extinction curve at three values of β_i (-2.5, -2.0, -1.5) and converted into an IRX-β relation assuming BC(1600)_* = 1.75; secondly, the A_1600 - β relation was calculated for a set of empirical attenuation curves of the form A(λ) ∝λ^n at different values of n (-0.55, -1.24, -20.00) and converted into an IRX-β relation assuming a fixed β_i=-2.5 and BC(1600)_* = 1.75.
As highlighted previously, assuming a fixed β_i is a simplification, but nevertheless it is a reasonable assumption for models including nebular continuum, and adequate for the purposes of this discussion.
The derived curves are shown in Fig. <ref>, along with data points showing the three z>5 sources with individual detections, and the stack of undetected sources, from <cit.>.
Fig. <ref> illustrates two important points: firstly, if the reddening law at z=5 resembles the SMC extinction law then the implied β_i of the <cit.> galaxies is β_i≈ -1.5.
This is possible for individual galaxies, although 2/3 of the <cit.> galaxies with ALMA detections have masses in the range probed by our simulations (log(M/M_⊙) = 9.67, 10.17, 10.39), and we do not find intrinsic slopes redder than β_i≃ -2.2.
However, it is clearly incompatible with the average β_i of galaxies at this redshift, since the typical observed slopes, ⟨β⟩≈ -2.0 <cit.>, are already bluer than this value.
Secondly, if we assume β_i=-2.5 (i.e. the <cit.> galaxies are compatible with our simulations) then the slope of the reddening law required to explain the data is much steeper than the SMC-extinction law.
For example, the position of the stack of undetected galaxies would require n < - 20, much steeper than any attenuation, or even extinction law, ever observed.
Given this discrepancy, it is worth considering the potential systematics that could be affecting the <cit.> and <cit.> results.
One possibility is an underestimation of the dust temperature, which is taken to be 35K in both cases.
It has been suggested that dust temperature may increase with redshift due to an increase in the intensity of the radiation field heating the dust <cit.>[This increase in radiation field intensity is also consistent with the increase in ionizing photons in star-forming galaxies inferred from the evolution of optical emission line ratios <cit.>], and typically an upper limit of 45K has been assumed <cit.>.
In this case L_IR will be underestimated by a factor ≈ 2 and the resulting IRX underestimated by ≈ 0.3 dex.
The magnitude of this shift is illustrated by the arrow in the bottom right-hand corner of Fig. <ref>.
It would bring the <cit.> and <cit.> data into better agreement with our model; however, it is not enough to fully account for the discrepancy.
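The size of this temperature correction can be verified with a simple optically-thin greybody, normalising models of different temperature to the same single-band flux and comparing the integrated 8-1000 μm luminosities. The sketch below uses our own simplifying assumptions (emissivity index β_d = 1.6 and a sampling wavelength of 1.3 mm observed, i.e. rest-frame ≈ 217 μm at z=5); it is an order-of-magnitude illustration, not the calculation of the cited papers:

import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23   # SI units

def greybody(lam, T, beta_d=1.6):
    # optically thin: kappa ~ lambda^-beta_d times the Planck function
    return lam**(-beta_d) * 2*h*c**2 / lam**5 / np.expm1(h*c / (lam*kB*T))

lam = np.logspace(np.log10(8e-6), np.log10(1e-3), 4000)   # 8-1000 micron
lam0 = 1.3e-3 / 6.0                                       # rest-frame sampling point

def L_IR(T):
    f = greybody(lam, T)
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(lam))   # trapezoid rule
    return integral / greybody(lam0, T)     # normalised to the sampled flux

print(L_IR(45.0) / L_IR(35.0))   # ~2-2.5, i.e. ~0.3 dex, as quoted in the text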
There are other potential systematic issues affecting the determination of the UV continuum slopes, and the estimated upper limits on IRX, which we will discuss in detail in a future work (McLure et al. in prep).
§.§.§ SMC-like attenuation at lower redshifts?
In Fig. <ref> we have not included data at lower redshifts (z ∼ 2 - 3) which also support steeper, SMC-like, attenuation laws.
In particular, <cit.> find that galaxies with young mass-weighted ages and typically lower masses (age ≲ 100 Myr; M/M_⊙ < 10^10) have a smaller IRX, at a given β, relative to older galaxies at the same redshift.
Finally, similar conclusions have also been reached using small numbers of individual lensed galaxies <cit.>.
In <cit.>, these young galaxies represented ≈ 13 % of their full star-forming galaxy sample, and in our simulated sample ≈ 6 % of galaxies have ages < 100 Myr.
However, since our method involves comparing to the observed luminosity function, it can only be applied to the full galaxy population complete down to a given UV magnitude, and cannot be applied to populations split by age.
Therefore, we cannot rule out the possibility that certain populations (e.g. the youngest galaxies), or any given individual galaxy, will follow a steeper attenuation curve.
We also note that, since the absolute attenuations decrease towards fainter, lower mass galaxies, the uncertainties on the form of the attenuation law will correspondingly increase, as the difference in absolute attenuation between curves becomes more difficult to distinguish.
Therefore, our conclusions are most robust at high masses.
Nevertheless, our current analysis suggests that, on average, a Calzetti-like attenuation curve is sufficient to describe the observed properties of the z ≈ 5 galaxy population down to log(M/M_⊙) ≈ 7.5.
§ DISCUSSION
In this section, we discuss in more detail why a reddening law similar to the SMC extinction law is ruled out by our models, and how this relates to the type of dust present at high redshifts; finally we compare our results to the recent study of <cit.>, who performed a similar analysis, but reached different conclusions.
§.§ Why are SMC-like laws ruled out?
It is worth asking, what differences in the intrinsic properties of the simulated SEDs are required such that a steeper attenuation law would be favored over a Calzetti-like attenuation law?
The main ways in which our simulations could be biasing our results in favour of a shallower attenuation law are: (i) the intrinsic UV continuum slopes are too red; (ii) the intrinsic luminosity at 1500Å is overestimated; (iii) the number density of galaxies at a given mass is overestimated; or some combination of all three.
The guiding principle here is that, for a steep law to be necessary, the change in β for a given A_UV needs to be larger than is currently required by intrinsic simulated galaxy properties.
Therefore, either A_UV is kept the same and the intrinsic β slopes must move further away from the CMR (i.e. become bluer), or the β slopes remain the same and the UV magnitudes of the galaxies must move closer to the LF (i.e. become fainter).
§.§.§ Steeper intrinsic UV continuum slopes?
Firstly, if the UV continuum slopes are too red then either the UV-weighted ages, or Fe/H values, of the galaxies are being overestimated by the simulations.
The range of UV-weighted ages of the FiBY galaxies is 20 ≲ t_UV≲ 80 Myr with a median of 33 Myr (see Fig. <ref>), and the median intrinsic stellar UV continuum slope across all models is β_i≈ -2.6 (see Table. <ref>).
The bluest UV slopes possible with the current BPASSv2 models are β_i≈ -3.1 (a starburst of age 1 Myr and metallicity 1/10 solar).
Therefore, as a simple test we fixed the intrinsic slopes of all SEDs at this value, and a χ^2 fit with the TEx model was performed as described in Section <ref>.
Indeed, in this case the best-fitting values of n are in the range -1.0 < n < -1.2 (though the fits are formally significantly poorer); nevertheless, this serves to demonstrate that if the intrinsic slopes were close to the steepest allowed by current SPS models, SMC-like reddening laws would become more acceptable.
However, to achieve intrinsic slopes approaching this value (β_i≲ -2.9) would require that the UV-weighted ages of our galaxies be in the range 0 - 10 Myr <cit.>.
Naturally this would lead to a boost in the SFRs and sSFRs; for example, in the extreme scenario of all stars being formed within the past 10 Myr the sSFR is 100 Gyr^-1 (20 × the median value of our sample).
However, the SFRs and sSFR of galaxies in the FiBY simulation are in good agreement with observations.
For example, Fig. <ref> compares the star-forming main sequence of our simulated sample to the observed z=5 relation taken from <cit.>, and clearly there is excellent consistency here.
Similarly, the median sSFR of our sample (4.8 Gyr^-1) is consistent with a number of observations at z≃5 <cit.>.
Furthermore, as demonstrated in Fig. <ref>, the effect of the nebular continuum increases as the intrinsic stellar continuum becomes bluer, such that the UV continuum slopes are reddened to roughly the same values β_i≈ -2.4.
Finally, for completeness, we briefly note that, if we have underestimated either the ionization parameter or electron density in our photoionization modelling, then we will have underestimated the contribution of the nebular continuum to the galaxy SEDs.
Assuming more extreme values of these parameters would make the intrinsic slopes redder, not bluer.
In summary, it seems unlikely that the intrinsic β slopes at z=5 are being overestimated to the extent that a steep SMC-like reddening law is required to match observations.
§.§.§ Overestimated UV luminosities?
As we discussed previously, the intrinsic M_1500 is relatively insensitive to the metallicity and nebular continuum contribution, but rather tracks the star-formation rate over ≈ 100 Myr timescales.
Therefore, M_1500 will be overestimated if the SFRs of the galaxies are overestimated.
As with the UV continuum slopes, we performed a test whereby the intrinsic M_1500 of the galaxies are increased by an amount ΔM_1500, and a χ^2 fit with the TEx model carried out; we found that an increase of the order ΔM_1500≈ 1.0 was required to make the data consistent with a steep SMC-like reddening law.
Assuming the <cit.> conversion (in which 1 M_⊙yr^-1 is equivalent to an absolute magnitude of M_1500≃ -18.75) this shift in absolute magnitude corresponds to a factor ≃ 2.5 decrease in SFR, or in log space Δlog(SFR)= -0.4, which would make the data incompatible with the observed main sequence at z=5 (Fig. <ref>).
Nevertheless, it is worth bearing in mind that a shift in SFR of this magnitude would be required if the observed stellar masses at these redshifts were overestimated by ≈ 0.4 dex.
However, given the consistency of our data with observed mass - SFR relations at the same redshift, and assuming the observed masses at these redshifts are not systematically overestimated, it is unlikely that we are overestimating M_1500.
§.§ Dust properties at z=5
What do our results mean for the `type' of dust in high redshift galaxies?
Results in the literature, which have pointed towards an attenuation curve at z ≃ 5 similar in shape to the extinction curve of the SMC, have been used to claim that the properties of the dust grains themselves at high redshift are more SMC-like <cit.>.
This seems plausible, since the low metallicity of the SMC <cit.> is more representative of high-redshift galaxies, and hence the physical properties of the dust grains may also be comparable.
However, it is not necessarily true that SMC-like dust will result in an attenuation curve similar to the SMC extinction curve, due to the significant difference between extinction and attenuation.
This point is illustrated in Fig. <ref>, where we show theoretical attenuation curves from the 3D radiative transfer models of <cit.>.
In these models the output attenuation curve is dependent on the system geometry (for which we have adopted the parameters M_s=20 and R_s/R_d=1, which represent a turbulent clumpy medium in which the stars are distributed throughout the total volume), the amount of dust (A_V) and the dust properties, for which we have selected, for illustrative purposes, the Milky Way model from <cit.>, and the SMC model of <cit.>.
The value of A_V we have selected is typical of the median of our sample across all best-fitting models (A_V≈ 0.5).
It can be seen that, even assuming SMC dust properties, the output attenuation curve is much shallower than the SMC extinction curve, though still steeper than the Calzetti curve.
However, it is fully consistent with our best fitting model within 1σ.
Assuming Milky Way dust gives better agreement with the Calzetti curve, though it is slightly shallower.
Therefore, assuming this scenario, a Calzetti-like attenuation curve at z=5 is consistent with dust properties somewhere intermediate between SMC and Milky Way.
Moreover, given the inherent uncertainties in radiative transfer modelling, it is not implausible to think that SMC-like dust can result in a Calzetti-like attenuation curve; indeed, this has been claimed in the past <cit.>.
§.§ Comparison to <cit.>
Recently, <cit.> (M16) performed an analysis similar to the one presented here, comparing the intrinsic CMR and LF of a sample of simulated galaxies at 5 < z < 8 to observations, in order to constrain the attenuation law and subsequent IRX - β relation.
Contrary to our empirical prescription, M16 estimate the absolute attenuation (or the normalization of the reddening curve) using a semi-analytic dust evolution model which attempts to explicitly track dust growth / destruction in stars and supernovae.
Combining this with a variety of attenuation laws and a CF2000-like model, they conclude that matching their simulations to observations requires a steep SMC-like curve.
In the method adopted by M16, the A_1500 versus stellar mass relation is fixed by the dust model, and their relation is steeper than the one we derive by fitting to the LF (private communication).
This can be seen by inspecting Fig. 2 of M16, where the number counts at faint intrinsic magnitudes (M_UV≈ -18.0) from their adopted simulation are much closer to the observed number counts than the number counts at faint magnitudes from FiBY (see Fig. <ref> of this paper).
In other words, M16 require much smaller dust corrections at lower masses.
Given this, and the fact that the intrinsic UV slopes in M16 appear very similar to the median values we find in our sample (β_i≈ −2.4, Table 1), it is unsurprising that they require a steeper attenuation law.
One probable explanation of this discrepancy is the lower particle mass resolution of the M16 simulation (log(M/M_⊙) = 7.11 compared to 5.68 and 4.81 for the two components of the FiBY simulation) since the stellar mass range of the galaxies in our sample with M_UV≈ -18.0 (7.6 < log(M/M_⊙) < 7.9) would correspond to N ≈ 3 - 10 individual star particles in their simulation (compared to N ≈ 600 - 1000 in FiBY_L).
This low resolution at faint magnitudes may result in an underestimation of the number counts (a well known effect at the resolution limit, e.g. <cit.>), and hence lead to the perception of very little dust in low mass galaxies.
Indeed, comparing Fig 2. of M16 to the intrinsic luminosity functions of <cit.>, who analyse a higher resolution box from the same simulation, would appear to support this explanation.
Without being able to perform a direct comparison to the M16 data it is difficult to identify the source of the discrepancy conclusively.
Nevertheless, we believe that the main driver is the difference in number counts at the faint end of the intrinsic LF and that this is most likely a result of the difference in particle mass resolution of the two simulations.
§ CONCLUSIONS
We have presented an investigation of the dust attenuation law at z=5 using a sample of N=498 galaxies, with M_1500≤ -18.0 and 7.5<log(M/M_⊙)<10.2, taken from the First Billion Years (FiBY) simulation.
Synthetic SEDs were generated for these galaxies using the BPASSv2 SPS models, both including and excluding the effects of massive binaries and nebular emission.
The intrinsic UV properties of these SEDs were then compared to the observed z=5 LF and CMR to constrain the properties of dust attenuation.
Our main findings can be summarized as follows:
* Due to the fact that the galaxies in our sample exhibit mainly rising star-formation histories, and since the age of the oldest star particles in our z=5 simulation (≃ 0.9 Gyr) is less than the typical timescale for the ISM to be enriched with Fe via SNe Ia (≃ 1 Gyr), we find that the galaxies in our sample have α/Fe ratios enhanced by a factor ≈ 4 relative to the solar value.
This is consistent with the recent observation of galaxies at z=2.4 by <cit.>, and suggests care must be taken when modelling the SEDs of high-redshift galaxies, or any galaxies with a rising star-formation history, since the shape of the UV spectrum is most sensitive to the Fe/H ratio.
* Applying a simple dust model (the TEx model) in which stars below a certain age (t_BC) are assumed to be completely obscured by their natal birth clouds, and the remaining stars experience dust attenuation assuming a homogeneous mixture of stars and dust/gas (A(λ)∝λ^n), we find that the observed CMR and LF can only be recreated if t_BC < 3 Myr and -0.6 ≤ n ≤ -0.3.
The constraints on t_BC are consistent with the age of the youngest OB associations observed in local galaxies <cit.>, and the slope of the attenuation curve is consistent with that of the Calzetti curve.
* Assuming a more complicated dust model, similar to that of <cit.>, in which stars with ages < t_BC experience enhanced optical depths rather than being completely obscured, and again assuming A(λ)∝λ^n, we find that an acceptable match to the observed CMR and LF can only be achieved with -0.7 ≤ n ≤ -0.3.
Although other free parameters within the model (t_BC, μ) are not well constrained using only these observations, the formal best-fitting values are consistent with those originally proposed in <cit.>.
* Across both dust models, a steep attenuation law resembling the SMC extinction law is ruled out by our simulations.
This is in contrast to claims based on some recent ALMA observations at high redshift <cit.>, and the simulations of <cit.>.
In order for the simulations to become more compatible with a steeper reddening curve, the intrinsic UV magnitudes must be fainter and/or the intrinsic UV continuum slopes must be steeper.
However, we argue that, based on the good agreement between the simulation data and observations of the star-forming main sequence of galaxies at z=5, it is unlikely that we are overestimating either of these quantities.
Nevertheless, we note that if the observed stellar masses of galaxies at z=5 are being overestimated by ≈ 0.4 dex, this would result in a shift towards more SMC-like curves.
* Comparing the IR predictions of our best-fitting models to the properties of the one z=5 source detected in the recent ALMA HUDF imaging of <cit.>, we find consistency in both the measured physical properties and implied number counts.
Nevertheless, better statistics are clearly needed to make a more robust comparison.
* The A_1600-β and resulting IRX -β relations of our best-fitting models are inconsistent with the recent results of <cit.> and <cit.>, which require an attenuation curve as steep, or steeper, than the SMC extinction curve.
We find that the best-fitting IRX -β curve is consistent with what would be expected for a simple homogeneous star/dust mixture geometry combined with the <cit.> attenuation law assuming all sources have intrinsic UV continuum slopes of β_i=-2.5.
There is clearly some tension here which can only be resolved with future, deeper, ALMA data at these redshifts.
* Our best estimate of the A_1600-β relation at z=5 is A_1600 = 2.10^+1.8_-0.3(β + 2.52); this equation implies that previous estimates of dust-corrected SFRs at these redshifts, based on the <cit.> relation, will have underestimated the SFR by a factor ≈ 2.
We recommend using this relation for any young galaxy population expected to have intrinsically blue UV continuum slopes.
Overall, we find no evidence for any significant variation from the Calzetti-like attenuation curve at z=5.
However, more observations will be needed, both in the rest-frame UV and IR, to settle the question of the shape of the dust attenuation law at z ≥ 5.
§ ACKNOWLEDGMENTS
FC acknowledges the support of the Science and Technology Facilities Council (STFC).
RJM acknowledges the support of the European Research Council via the award of a Consolidator Grant (PI McLure).
JSD acknowledges the support of the European Research Council via the award of an Advanced Grant, and the contribution of the EC FP7 SPACE project ASTRODEEP (Ref. No. 312725).
This research made use of Astropy, a community-developed core Python package for Astronomy <cit.>, NumPy and SciPy <cit.>, Matplotlib <cit.>, IPython <cit.> and NASA's Astrophysics Data System Bibliographic Services.
|
http://arxiv.org/abs/1701.07471v3 | 20170125201552 | Mixed Higgs-Radion States at the LHC -- a Detailed Study | [
"Amit Chakraborty",
"Ushoshi Maitra",
"Sreerup Raychaudhuri",
"Tousik Samui"
] | hep-ph | [
"hep-ph",
"hep-ex"
] |
TIFR/TH/17-07
Mixed Higgs-Radion States at the LHC – a Detailed Study
Amit Chakraborty[E-mail: amit@post.kek.jp]
Theory Center, Institute of Particle and Nuclear Studies,
KEK, 1-1 Oho, Tsukuba, Ibaraki 305-0801, Japan
Ushoshi Maitra[E-mail: ushoshi@theory.tifr.res.in],
Sreerup Raychaudhuri[E-mail: sreerup@theory.tifr.res.in] and
Tousik Samui[E-mail: tousik@theory.tifr.res.in]
Department of Theoretical Physics, Tata Institute of Fundamental Research,
1, Homi Bhabha Road, Mumbai 400 005, India.
December 30, 2023
Abstract
Light radions constitute one of the few surviving possibilities
observable new particle states at the sub-TeV level which arise in
models with extra spacetime dimensions. It is already known that the
125 GeV state discovered at CERN is unlikely to be a pure radion state, since its decays resemble those of the Standard Model Higgs boson too closely. However, due to experimental errors in the measured decay
widths, the possibility still remains that it could be a mixture of
the radion with one (or more) Higgs states. We use the existing LHC
data at 8 and 13 TeV to make a thorough investigation of this possibility. Not surprisingly, it turns out that this model is already constrained quite
effectively by direct LHC searches for an additional scalar heavier than 125 GeV.
We then make
a detailed study of the so-called `conformal point', where this
heavy state practically decouples from (most of) the Standard Model
fields. Some projections for the future are also included.
PACS Nos: 04.60.Bc, 12.60.Fr, 14.80.Cp, 13.85.Rm
1. Introduction
The 2012 discovery<cit.>, at the LHC, of a weakly-interacting light scalar state
— which appears from all current indications to be an elementary Higgs particle — revives the old question of how the mass of such a scalar can remain stable against large electroweak corrections in a theory with a momentum cutoff at some very high scale. This, as is well-known, goes by
the name of the gauge hierarchy problem, or, alternatively, as the fine-tuning problem. It has also been known for several decades that any solution to this problem must invoke new physics beyond the Standard Model (SM) of strong and electroweak interactions.
One of the most elegant solutions of the hierarchy problem is that devised
in 1999 by L. Randall and R. Sundrum (RS)<cit.>. They considered a world with one
extra space dimension, having the topology of a circle folded about a
diameter (𝕊^1/ℤ_2), at either end of which lies
a pair of four-dimensional manifolds – called `branes' – containing
matter. One of these is the
so-called infra-red (IR) brane, where all the SM fields lie, and the other
is the so-called ultra-violet (UV) brane, where we have field elements
comprising a theory of strong[Here `strong' means comparable to
electroweak strength.] gravity.
One can then tune the cosmological constant on the
two branes, as well as that in the 𝕊^1/ℤ_2 bulk, to
obtain a solution of the five-dimensional Einstein equations in the form
of a `warped' metric
ds^2 = e^-2 K R_c ϕη_μν dx^μ dx^ν - R_c^2 dϕ^2
where the 𝕊^1/ℤ_2 `throat' is characterised by the
compactification radius R_c, an angular coordinate ϕ and a curvature parameter
K. It can then be shown that the mass of the Higgs scalar is
generated on the UV brane at a value close to the bulk Planck mass M_5
(itself a little smaller than the four-dimensional Planck mass M_P =
(ħ c/G_N)^1/2), and projected on the IR brane through the
expanding `throat', thereby acquiring the much smaller value
M_H ∼ e^-π K R_c M_5
If we can now tune K R_c ≃ 11.6, we recover the correct ballpark
for the mass of the discovered scalar. This constitutes a neat solution to the hierarchy problem in terms of spacetime geometry, without having recourse to any parameters which are unnaturally large or small. In fact, the Planck scale
is the only fundamental mass scale in this theory.
It is fair to ask, however, whether the parameter K R_c is protected
against small dynamical fluctuations, for
δ M_H/M_H≈ 11.6π δ R_c/R_c
i.e. small fluctuations in the inter-brane distance would lead to magnified
fluctuations in the Higgs boson mass. As the latter is now known to an accuracy
of about 2%, it follows that the inter-brane distance must be stable to an
accuracy of about 5× 10^-4— for which the minimal RS model
has no provision.
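Both numbers quoted in this discussion follow from elementary arithmetic, and can be checked directly; in the sketch below we use the reduced Planck mass, ≈ 2.4 × 10^18 GeV, as a stand-in for M_5:

import numpy as np

KRc = 11.6
M5 = 2.4e18                        # GeV; reduced Planck mass as a proxy for M_5
print(np.exp(-np.pi * KRc) * M5)   # ~360 GeV: the electroweak ballpark for M_H
print(0.02 / (11.6 * np.pi))       # ~5.5e-4: required fractional stability of R_c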
A brilliant solution to this was devised by Goldberger and Wise
(1999)<cit.>. If one allows for fluctuations in the size of the extra dimension,
we can rewrite the metric in Eq. (<ref>) as
ds^2 = e^- 2T(x) ϕη_μν dx^μ dx^ν
- [ T(x)/ K]^2 dϕ^2
where the dynamic T(x) replacing KR_c is known as a modulus field. In the minimal RS model, this
is a free field and hence, as mentioned above, there is no constraint at all
on K R_c = ⟨ T(x) ⟩. Goldberger and Wise then augmented
the model by the introduction of a bulk scalar B(x,y), with a mass M_B
and quartic self-interactions on the IR and UV branes, with vacuum expectation values V_IR and V_UV respectively – all
these mass-dimension quantities being in the ballpark of the Planck mass. They were then able
to show that the scalar modulus field T(x) develops a potential
with a minimum at
⟨ T(x) ⟩ = K R_c ≃4/π( K/M_B)^2 lnV_UV/V_IR
which can be easily tuned to the required value 11.6 by varying the unknowns
M_B, V_IR and V_UV without having recourse to unnaturally large
or small numbers. This is consistent with the general philosophy of the RS model.
The modulus field T(x), which is like a dilaton in the fifth dimension, can
be parametrised as a radion
φ(x) = Λ_φ e^-π{T(x) - KR_c}
which has a vacuum
expectation value
Λ_φ = √(24M_5^3/ K) e^-π KR_c
and a mass
M^2_φ = 2 K^2/M_5^3(V_UV - V_IR)^2
e^-2π KR_c
Because of the warp factor e^-π KR_c , both the radion mass M_φ
and the radion vacuum
expectation value Λ_φ lie at or around the electroweak scale. Hence,
it is easier, for phenomenological purposes, to treat them as the free parameters
in the theory, rather than the set { K, M_5, V_UV, V_IR}.
It is also worth noting that if we let V_UV = V_IR, in which
case Eq. (<ref>) tells us that the radion is massless, we would also
have R_c = 0 from Eq. (<ref>), i.e. the two branes would coalesce
and M_H immediately shoot up to M_5— which takes us back to the Standard Model
and the hierarchy problem. We conclude, therefore, that V_UV > V_IR and hence
the radion must be massive.
The interactions of the radion with matter on the IR brane will naturally
follow those of the dilaton (which it is a variant of) and can be written as
L_ int(φ) = 1/Λ_φ φ( T_μ^μ
+ A_T )
where T_μν is the tree-level energy-momentum tensor and A_T
is the trace anomaly. For on-shell particles, the tree-level T_μ^μ has
the explicit form
T_μ^μ = ∑_f m_f f̅ f + M_H^2 H^2 - 2M_W^2 W^+μW^-_μ
- M_Z^2 Z^μ Z_μ
where the sum runs over all fermions f. This, apart from the
A_T term, is exactly like the coupling of
the Higgs boson, except that the SM vacuum expectation value v is replaced
by the radion vacuum expectation value Λ_φ. Not surprisingly,
radion phenomenology is very similar to Higgs boson phenomenology. It differs,
however, in the anomaly term
A_T = ∑_i β(g_i)/2g_i F^μν i F_μν^i
where β(g_i) is the beta function corresponding to the coupling g_i
of the gauge field A_i which has the field strength tensor F_μν^i.
The sum over i runs over all the gauge fields in the SM, including photons,
gluons and W^± and Z bosons. The A_T term induces
substantial couplings of the
radion to γγ and gg pairs, which are completely absent in
Eq. (<ref>). On the other hand, similar anomaly-induced
contributions to radion couplings with W^+W^- and ZZ pairs are
usually negligible
compared to the corresponding terms in Eq. (<ref>), because
of the large masses of these particles, and only become significant when their tree-level couplings to one of the scalars vanishes.
Like the Higgs boson, the tree-level radion couplings in Eqn. (<ref>) would
be subject, in addition to the trace anomaly contributions, to radiative corrections,
especially from loops involving the top quark. Moreover, it is worth mentioning that there could be large brane corrections to the above couplings if the mass of the radion
is comparable to the Kaluza-Klein scale <cit.>, determined by the mass of the
lightest graviton mode in the minimal RS construction. To avoid this, we require a radion
which is comparatively light, and this requires a modest level of fine tuning
<cit.>. The discussions in this article are, therefore, subject to this assumption.
As remarked above, the phenomenological behaviour of such a light radion is rather
similar to that of the Higgs boson. This naturally leads one to ask whether
these two low-lying elementary scalar states can mix, since they carry the
same set of conserved quantum numbers, once the electroweak symmetry has been broken. In fact, this is possible, as was first pointed out in
Ref. <cit.> and has been discussed by many others<cit.>.
Before proceeding further, it may be noted that there are several phenomenological models with fermions and gauge bosons accessing the bulk<cit.>, which have better control over the flavour problem. In these models, the top quark remains close to the TeV brane along with the Higgs field while the other fermions are close to the UV brane. This suppresses the
higher-dimensional operators contributing to flavour-changing neutral currents, since the effective interaction of fermions with the Higgs field is governed by the overlap of their profiles and hence this scenario
naturally generates the
pattern of fermion masses and mixings. These models predict heavy Kaluza-Klein particles on the TeV brane having masses in the range of a TeV. However, the radion and Higgs fields, being still close to the TeV brane,
mix more-or-less without bulk effects <cit.>.
Hence, the mixing can be understood fairly
accurately using a minimal model
where all the relevant particles are confined to the TeV brane[The only caveat to this is the fact that heavy Kaluza-Klein excitations of the
top quark may contribute to Higgs production at a hadron collider through loop diagrams. However, if these excitations are at the level of a TeV, the corresponding loop
contributions are not more than a few percent and may be safely neglected
— as we have done in this work.], for this is, after all, no more than
approximating a sharply-peaked function by a delta function.
In the following section, therefore, we briefly discuss, following Refs. <cit.> how the radion-Higgs field mixing may be
described in terms of a single
mixing parameter ξ. The next section then describes constraints on the
mixed Higgs-Radion scenario, as obtained using all experimental inputs
currently available, especially those from the LHC. For easy
comparison, we include projections of the discovery reach of the LHC alongside
the current constraints. Before concluding, we include a short section on
the so-called `conformal point' near ξ = 1/6, which has unique features.
While some of the observations in this paper echo previous ones<cit.>, the data
used are current, leading to new bounds, and, for ease of reading, we have
presented our findings in a manner such that this paper can be read as far
as possible independently of the preceding literature.
2. Radion-Higgs mixing
Mixing of the radion field φ(x) with the Higgs scalar h(x) of the SM
has been discussed by several authors <cit.>, with the same broad
features, but we choose to closely follow the formalism of
Ref. <cit.>.
The mixing occurs through the kinetic terms
L = 1/2∂^μ h ∂_μ h
- 1/2 M_h^2 h^2 + β/2∂^μφ ∂_μφ - 1/2 M_φ^2 φ^2
+ 6γξ ∂^μφ ∂_μ h
where γ≡ v/Λ_φ, v being the SM Higgs vacuum
expectation value. In this
formalism, the mixing parameter appears twice – once in the mixing term
6γξ ∂^μφ ∂_μ h, and once in the
non-canonical normalisation β≡ 1 + 6γ^2ξ of the radion
kinetic term. As is usual, the Higgs boson mass is given by
M_h^2 = 2λ v^2,
where λ is the Higgs quartic coupling and v is the Higgs vacuum
expectation value.
We note that the presence of the non-canonical normalisation β means that
the identification of physical states H and Φ will involve a scaling
as well as a rotation of states, i.e. a non-unitary transformation. Hence, we
write the unphysical states φ, h as linear combinations of the
physical ones Φ, H, with real coefficients A,B,C and D, thus
φ = A Φ + B H
h = C Φ + D H ,
where the coefficients A,B,C and D are given by
A = -1/Zcosθ
B = 1/Zsinθ
C = sinθ + 6γξ/Zcosθ
D = cosθ - 6γξ/Zsinθ
in terms of
Z^2 = β - (6γξ)^2
and a mixing angle θ, defined by
tan 2θ = 12γξ Z M_h^2 / [ M_φ^2 - M_h^2 (Z^2 - 36γ^2ξ^2) ]
The mixing parameter ξ is immediately constrained by the requirement that
Z^2 > 0 to get a real mixing angle.
The mass eigenvalues of the physical eigenstates Φ and H
are now given by
M^2_Φ,H = 1/(2Z^2) [ M^2_φ + β M^2_h ±√((M^2_φ + β M^2_h)^2 - 4Z^2 M^2_φ M^2_h) ]
where the sign is chosen to ensure that M_H < M_Φ. We identify the
lighter state H as the scalar state of mass around 125 GeV which was
discovered at the CERN LHC in 2012, while the other state Φ is
a heavier scalar state predicted in the model.
From these formulae, it is clear that the free parameters in question are
M_h, M_φ, Λ_φ and ξ, everything else being
computable in terms of them. We also note in passing that since M^2_h =
2λ v^2, this makes
the Higgs quartic coupling λ an unknown quantity in this model, just
as it used to
be in the Standard Model before the identification of the 125 GeV scalar
with the Higgs boson[This is a reflection of the fact that we
still do not have a direct measurement of λ. All that we have is the
estimate λ = (125 GeV)^2/2v^2 ≃ 0.129— which is true only if the 125 GeV state is purely a SM Higgs boson without any admixture of new states.].
Instead of the Lagrangian parameters M_h and M_φ, however, we
find it
more convenient to use the physical masses M_H and M_Φ, which can
be traded for the previous two by some simple algebra, leading to
M_φ^2 = (Z^2/2) [ M_Φ^2 + M_H^2 + √((M_Φ^2 + M_H^2)^2 - 4β M_Φ^2 M_H^2/Z^2) ]
M_h^2 = Z^2/(2β) [ M_Φ^2 + M_H^2 - √((M_Φ^2 + M_H^2)^2 - 4β M_Φ^2 M_H^2/Z^2) ]
Since we
identify M_H = 125 GeV, we are left with a set of only three independent
parameters, viz. M_Φ, Λ_φ and ξ. The rest of our
analysis will be presented in terms of these variables.
We now have another theoretical constraint, apart from Z^2 > 0. This
is the requirement that the parameters M_φ and M_h be real
(to keep the Lagrangian Hermitian), which automatically means that
(M_Φ^2 + M_H^2)^2 > 4β M_Φ^2 M_H^2/Z^2
Imposing both these constraints reduces the possible range of ξ,
for a given M_Φ and Λ_φ, quite significantly
(see below).
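For reference, the full chain from the physical inputs (M_Φ, Λ_φ, ξ) to the mixing quantities defined above fits in a few lines of Python. This is our own transcription of the formulae of this section (the function and variable names are ours); it takes v = 246 GeV and returns None wherever either of the two theoretical constraints fails:

import numpy as np

v = 246.0   # electroweak vacuum expectation value (GeV)

def mixing(M_Phi, Lam_phi, xi, M_H=125.0):
    # all masses in GeV; returns theta, (A, B, C, D), the Lagrangian mass
    # M_h^2, and the scaling factors c_Phi, c_H
    g = v / Lam_phi                          # gamma
    beta = 1.0 + 6.0 * g * g * xi
    Z2 = beta - (6.0 * g * xi)**2
    if Z2 <= 0.0:                            # real mixing angle needs Z^2 > 0
        return None
    S, P = M_Phi**2 + M_H**2, (M_Phi * M_H)**2
    disc = S * S - 4.0 * beta * P / Z2
    if disc < 0.0:                           # Hermiticity: M_phi, M_h must be real
        return None
    M_phi2 = 0.5 * Z2 * (S + np.sqrt(disc))
    M_h2 = 0.5 * Z2 / beta * (S - np.sqrt(disc))
    Z = np.sqrt(Z2)
    theta = 0.5 * np.arctan(12.0 * g * xi * Z * M_h2
                            / (M_phi2 - M_h2 * (Z2 - 36.0 * (g * xi)**2)))
    A, B = -np.cos(theta) / Z, np.sin(theta) / Z
    C = np.sin(theta) + 6.0 * g * xi / Z * np.cos(theta)
    D = np.cos(theta) - 6.0 * g * xi / Z * np.sin(theta)
    return dict(theta=theta, A=A, B=B, C=C, D=D, M_h2=M_h2,
                c_Phi=C + g * A, c_H=D + g * B)

print(mixing(500.0, 5000.0, 1.0))    # one of the benchmark points of the figures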
Since the mixing of the h and the φ to produce the physical H and
the Φ is non-unitary, we define two mixing indicators as follows.
We first invert Eq. (<ref>) to write
Φ = a φ + b h
H = c φ + d h ,
where
( [ a b; c d ])
= ( [ A B; C D ])^-1 .
In terms of this, we now define indicators
f_φ/H = |c|/|c|+|d|
f_h/Φ = |b|/|a|+|b|
which, in a sense, indicate the fraction of radion φ in the light state H, and the fraction of Higgs boson h in the heavy state Φ. These, together with the mixing angle θ defined in Eq. (<ref>), are plotted in Fig. <ref>, as a function of the mixing parameter ξ.
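Numerically, the indicators follow by inverting the 2 × 2 mixing matrix; continuing the sketch above (which supplies the mixing function):

import numpy as np

m = mixing(500.0, 5000.0, 1.0)            # from the sketch above
A, B, C, D = m['A'], m['B'], m['C'], m['D']
a, b, c, d = np.linalg.inv([[A, B], [C, D]]).ravel()
f_phi_H = abs(c) / (abs(c) + abs(d))      # radion content of the 125 GeV state
f_h_Phi = abs(b) / (abs(a) + abs(b))      # Higgs content of the heavy state
print(f_phi_H, f_h_Phi)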
In each of the three panels in Fig. <ref>, we have four boxes placed
one above the other, corresponding to choices of four
different values of the radion vacuum
expectation value, viz. Λ_φ = 1, 5, 10 and 20 TeV respectively (marked in the respective boxes). Within each box, the curves
are colour-coded, with black, green, red and blue indicating benchmark
choices of the
heavy scalar mass as M_Φ = 250 GeV, 500 GeV, 750 GeV and 1 TeV
respectively (indicated at the top of the figure). Each curve ends abruptly
at some maximum and minimum values of the mixing parameter ξ– this is
a reflection of the theoretical limitations (see above). As may be seen from the different plots, this restriction
is extremely stringent when Λ_φ is small, and even when we push
Λ_φ as high as 20 TeV, does not permit the value of |ξ| to exceed
15. If we consider the panel on the left, it is clear that we get significant
values of the mixing angle θ only when the heavy Φ state is as light
as around 250 GeV. For values of M_Φ of 500 GeV or greater, θ does
not exceed 10°. However, since the mixing is not unitary, the smallness of
θ is not necessarily an indicator of small mixing. This becomes clear
if we look at the central and right panels of Fig. <ref>, which
tell us the proportion of the radion in the 125 GeV state, and the proportion of
the Higgs boson in the heavier state respectively. In each case, as |ξ|
increases, the mixing becomes more, starting from zero when |ξ| = 0 to
about equal mixtures when |ξ| reaches its maximum theoretically-permitted value. The purpose of this paper is, as explained above, to
see how far such large mixings are allowed in the light of current experimental data.
We next consider the effect of mixing on the couplings of the two scalar
states to the SM fields. As shown in Ref. <cit.>, the tree-level
couplings
of the heavy Φ state to pairs of SM fields XX̅ (except
X = H) have the form
g_Φ XX̅ = g_h XX̅( C + γ A )
≡ c_Φ g_h XX̅
where g_h XX̅ are the corresponding SM Higgs couplings and c_Φ = C + γ A is a scaling factor; in the unmixed limit ξ = 0 this reduces to c_Φ = -γ, correctly reproducing the pure radion couplings of Eqs. (<ref>–<ref>).
Similarly, the couplings of the light 125 GeV state have the form
g_H XX̅ = g_h XX̅( D + γ B )
≡ c_H g_h XX̅
where g_h XX̅ are the SM couplings and c_H = D + γ B is
a scaling factor. Very different from these is the coupling of the heavy
scalar to a pair of light scalars, since all three fields
are mixed states, and this
can be written <cit.> for a Φ(p)-H(k_1)-H(k_2) vertex, as
g_Φ HH = 1/Λ_φ[ (k_1^2 + k_2^2 ){ AD^2 + 6ξ B( CD + γ AD + γ BC ) }
+ D{ 12γξ AB + 2BC + (6ξ - 1)AD } p^2
- 4M_h^2 D(AD + 2BC) - 3 M_h^2CD^2/γ]
The couplings of the scalars H and Φ with other particles are
conveniently listed in the Appendix of Ref. <cit.>.
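For the on-shell decay Φ → HH one sets p^2 = M_Φ^2 and k_1^2 = k_2^2 = M_H^2 in the vertex above. A sketch of the evaluation (our transcription of the formula, reusing the mixing() function from the sketch above, which also supplies the Lagrangian mass M_h^2):

def g_PhiHH(M_Phi, Lam_phi, xi, M_H=125.0):
    # Phi(p)-H(k1)-H(k2) vertex evaluated on-shell; needs mixing() from above
    m = mixing(M_Phi, Lam_phi, xi, M_H)
    if m is None:
        return None
    g = v / Lam_phi
    A, B, C, D, Mh2 = m['A'], m['B'], m['C'], m['D'], m['M_h2']
    p2, k2sum = M_Phi**2, 2.0 * M_H**2
    return (k2sum * (A*D**2 + 6.0*xi*B*(C*D + g*A*D + g*B*C))
            + D * (12.0*g*xi*A*B + 2.0*B*C + (6.0*xi - 1.0)*A*D) * p2
            - 4.0*Mh2*D*(A*D + 2.0*B*C) - 3.0*Mh2*C*D**2/g) / Lam_phi

print(g_PhiHH(500.0, 5000.0, 1.0))   # in GeV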
To get a feeling of how these couplings are affected by the variation
in the basic parameters ξ, Λ_φ and M_Φ, we plot
them in Fig. <ref> on a scheme similar to that in Fig. <ref>. The three panels show, from left to right, the
scaling factors c_Φ and c_H, and the coupling g_Φ HH respectively.
As in Fig. <ref> it is immediately clear that for ξ = 0,
c_Φ is very small (small enough to appear as zero on this scale),
as befits a radion with a small coupling to matter, whereas c_H = 1 indicating that the lighter scalar is the SM Higgs boson. Similarly,
for ξ = 0,
the g_Φ HH coupling is very small (small enough to appear as zero on
this scale), indicating that the heavy scalar couples only weakly to a
pair of light scalars. There are also genuine zeroes in the couplings, which
are discussed in more detail in Section 4.
An interesting feature of both Fig. <ref> and Fig. <ref> is the fact that the variation in parameters
is rather slow for smaller values of ξ, but is very sharp
for larger values just before the unphysical region. These larger
values of the scaling factor and Φ HH coupling are likely to
have phenomenological consequences at observable levels, and hence
are more likely to be constrained by experimental data. In the
next section, we shall see that this is indeed the case.
3. Experimental Constraints
We are now in a position to apply the experimental constraints to
this model. Since the two scalars H and Φ are the crucial elements,
the main constraints will come from
(a) the measured signal strengths μ_XX of the 125 GeV
scalar in its decay channels to XX̅ pairs – these are known to
match reasonably closely to the SM predictions, leaving only limited room
for a mixed state;
(b) the lack of signals for a heavy scalar in the range of a few hundred
GeV to about a TeV – by implication, any new scalar would be very heavy
and mix only marginally with the SM Higgs boson.
In principle, the scalars could also contribute as virtual states to
any neutral current processes. However, as most of these are suppressed
by the small masses of the initial states (either e^± or u and d
quarks), we do not really get any useful constraints from these processes.
Constraints from electroweak precision tests are not very strong
<cit.>.
In the rest of this section, therefore, we concentrate on the two
issues listed above.
We first take up the signal strengths of the 125 GeV scalar H. This decays
into several channels
H ⟶ X + X̅
where X = ℓ^-, u, d, s, c, b, W, Z, γ, g with one of X or
X̅
being off-shell in the case of W and Z. At the LHC, the H is produced
dominantly through gluon-gluon fusion[In our numerical analysis, we
have also included the vector boson fusion mode.]. Hence, we can define signal
strengths μ_XX as
μ_XX =
σ(pp → gg → H)_ exp ℬ(H → XX̅)_ exp/σ(pp → gg → H)_ SM ℬ(H → XX̅)_ SM
where σ and B stand for cross-section and branching
ratio respectively, and the subscripts `SM' and `exp' mean the SM
prediction and the experimental
value respectively. If we are making a theoretical prediction, then `exp' will
stand for the expected value in the theoretical model in question — in the
present case, the model with radion-Higgs mixing. Of course, in an experiment
only the entire numerator on the right side of Eq. (<ref>) can be
measured and not the individual factors. By this definition, then, all the
SM signal strengths are normalised to unity, and experimental deviations from
it constitute the leeway for new physics. These allowed experimental deviations
are given in Table <ref>.
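A quick first estimate of these signal strengths in the mixed model follows from the scaling factor c_H: if the anomaly contributions to the gg and γγ couplings of the H are neglected, every production and decay coupling scales by the same factor c_H, the total width scales as c_H^2, the branching ratios are unchanged, and hence μ_XX ≈ c_H^2. This is a rough approximation of our own (the full computation behind the figures retains the anomaly terms), e.g. using the sketch of Section 2:

m = mixing(500.0, 5000.0, 1.0)   # mixing() as defined in Section 2
print(m['c_H']**2)               # crude mu_XX, anomaly contributions neglected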
Obviously, for zero mixing, the signal strengths predicted for the H
scalar will be the same as the SM values, i.e. unity. As ξ increases,
we should expect deviations from unity, and indeed that is what happens,
as illustrated in Fig. <ref>. The three panels, from
left to right, correspond to choices of M_Φ = 250, 500 and 750 GeV
respectively. The graph for M_Φ = 1 TeV is very similar to that for
M_Φ = 750 GeV, and hence we do not show it explicitly. Likewise,
the actual graphs for μ_γγ are slightly different, but not
enough to show up on a plot at this scale. Each curve
in the panels corresponds to the value of Λ_φ, in TeV, written
alongside, i.e. 1, 2, 3, 5 and 10 TeV respectively. The steepness of
the curves decreases with increasing Λ_φ, for which we also
have larger permitted ranges in ξ, as we have earlier shown in
Fig. <ref>. Horizontal broken lines in
Fig. <ref> represent the useful 95% C.L. constraints from
the signal strengths in Table <ref>, and are marked on the
right side of the figure.
The behaviour of the predicted signal strengths with increasing ξ is
quite as expected, remaining close to the SM value for small ξ and
showing large deviations near the edge of the theoretically-allowed range.
This, as we have seen earlier, is due to the large deviations of the coupling
of the H from the SM coupling at such values of ξ. It is thus obvious
that the present constraints from signal strengths will only affect narrow
strips of the parameter space adjacent to the theoretically-disallowed region,
and this, in fact, is what we find (see below). It may be noted in passing
that a region of the parameter space where D + γ B ≃ 0 would be
very strongly constrained from the signal strengths, but this does not happen
anywhere inside the region allowed by theoretical considerations.
When we turn to the heavy Φ state, once again the main production mode
is through gluon-gluon fusion, but now there is no analogous SM prediction and hence one looks for the direct signals in
the various decay channels of the Φ. As in the case of the light scalar, the
potentially observable ones are Φ→γγ<cit.>,
WW<cit.>,
ZZ<cit.> and
τ^+τ^-<cit.> to which we can now add Φ→ tt̅ and Φ→ HH<cit.>.
The bb̅<cit.> signal would be difficult to distinguish from the QCD
background, unless the mass of the Φ scalar is very well known, as in the
case of the H scalar. The behaviour of all these branching ratios, as
functions of the scalar mass M_Φ is shown in Fig. <ref>,
where Λ_φ is fixed to 5 TeV and the panels, from left to right,
correspond to ξ = 0 (no mixing), and ξ = 1, 2 and 3 respectively.
The relevant decay channel is marked alongside each curve. These curves
terminate at the left end where they correspond to theoretically-disallowed
regions in the parameter space.
One feature which is immediately obvious from these curves is the fact
that the scalar Φ decays dominantly through the WW and ZZ channels.
When the mixing is low, the HH channel is also competitive, but as ξ
rises, it gets suppressed. In any case, the signals from the WW and ZZ
channels are leptonic and clean, whereas the signals arising from HH,
dominantly leading to 4b final states, are hadronic, as are those arising
from the direct decays of
the Φ into quark pairs. These hadronic channels are generally suppressed
compared to WW and ZZ, and, in any case, would be plagued by large QCD
backgrounds. It may be still possible to investigate the tt̅ and HH
channels, using jet substructure-based tagging methods for boosted particles,
but such experimental searches are still not competitive
<cit.>. Thus, in principle, we get
constraints from every decay channel of the Φ, but the most
useful ones will arise from the ATLAS and CMS search results for a heavy
scalar resonance decaying
to WW and ZZ pairs, which are equally applicable to the Φ scalar
in the model under consideration. As is well-known, the experimental results are all negative, and hence the 95% C.L. upper limits on the cross-section
are given in Table <ref>.
We are now in a position to compare these data with the predictions of our
theory. As in the case of the H state, the cross section for
pp →Φ→ VV, where V = W,Z, can be written
σ(pp →Φ→ VV) = σ(pp → gg →Φ)
ℬ(Φ→ VV)
where ℬ(Φ→ VV) is the branching ratio of the Φ to a
VV pair. These can be calculated in terms of the free parameters
ξ, M_Φ and Λ_φ respectively. Our results are shown
in Fig. <ref>.
The four upper panels of Fig. <ref> represent the cross-section, in pb, for the process
pp →Φ→ WW and the lower four panels represent the process
pp →Φ→ ZZ. In each row the panels correspond, from left to
right, to M_Φ = 250 GeV, 500 GeV, 750 GeV and 1 TeV, respectively.
Within each panel, the curves show the variation of the cross-section
with the mixing parameter ξ, for different values of the radion
vacuum
expectation value, corresponding to different colours, as marked in the legend above
the panels. The horizontal solid lines correspond to the CMS bounds from
the 13 TeV data, as shown in Table <ref>, while the broken lines
correspond to the ATLAS 13 TeV data.
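Schematically, these predictions amount to rescaling a tabulated SM-like gluon-fusion cross-section at mass M_Φ by c_Φ^2 and multiplying by the branching ratio. The sketch below shows the bookkeeping only, with loudly hypothetical placeholder numbers that must in practice be replaced by standard tabulations (e.g. from the LHC Higgs cross-section working group) and by the full width computation including the anomaly terms:

sigma_SM_ggF = {250.0: 10.0, 500.0: 4.0, 750.0: 0.8, 1000.0: 0.2}  # pb, PLACEHOLDERS
BR_WW = 0.5                                                        # placeholder
M_Phi, Lam_phi, xi = 500.0, 5000.0, 1.0
m = mixing(M_Phi, Lam_phi, xi)        # mixing() as defined in Section 2
print(m['c_Phi']**2 * sigma_SM_ggF[M_Phi] * BR_WW)   # pb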
All the curves have a distinct
minimum at a small value of ξ varying from 0.2 to 2 — this corresponds
to a minimum in the cross-section σ(pp → gg →Φ) where
there is maximal cancellation in the amplitude for gg →Φ
due to the top quark loop and the trace anomaly term. In this region, the heavy scalar can be produced in association with a W^±/Z and it further decays to WW or ZZ pairs,
leading to a final state with three gauge bosons or their decay products.
In view of the low production cross-sections for higher values of Λ_φ, one has to consider hadronic decays of one or more of these gauge
bosons, and this immediately invites a large QCD background at the LHC.
However, the region can be successfully probed at a high energy
e^+ e^- collider (such as the proposed ILC) with √(s) = 1 TeV <cit.>.
In addition to the dip described above, there is a very sharp minimum,
very close to the vertical axis, which corresponds to the so-called
`conformal' point, where c_Φ→ 0. We defer the discussion of
this point to the next section and focus here on the constraints obtainable
from the rest of the parameter space. Here, as in the case of signal
strengths the constraints rule out larger values of ξ, with the exact
bound depending on the other two parameters of the theory.
From Figs. <ref> and <ref> we can
draw some general conclusions. The first is that the effect of increasing
the mixing parameter ξ becomes weaker and weaker as the vacuum
expectation value Λ_φ keeps increasing. This is true both for the signal strengths in
Fig. <ref> as well the cross-section in Fig. <ref> and is easy to track down as due to the limiting
case γ→ 0. A similar argument may be made for the parameter
M_Φ– at least numerically – though the parameter dependence here is
much more complicated.
We may argue, therefore, that for a fixed ξ,
the region with small M_Φ and small Λ_φ is more constrained
— which also corresponds to the commonsense argument that if these
parameters are small, radion-mediated processes are large and vice versa.
These expectations are corroborated by our results shown in
Fig. <ref>. Here we show the Λ_φ–M_Φ plane
for four different values of ξ, viz. ξ = -0.5, 0, 1 and 1.5,
as marked
on each panel. As indicated in the key at the top, the region shaded grey
corresponds to the theoretically disallowed region, and includes all values of
Λ_φ < 1 TeV, except in the panel on the top left,
marked ξ = 0, which corresponds to the case of an un-mixed radion
of mass M_Φ.
Here, though values of Λ_φ < 1
TeV are theoretically permitted, the experimental constraints do not allow them,
as is apparent from the figure. In all the panels, the dark grey shaded region
is ruled out by the signal strengths at Runs 1 and 2 and the hatched regions by the
ATLAS and CMS searches for a heavy scalar at Run-2 of the LHC.
These are the strongest
constraints and represent the state of the art as far as current
experimental data are concerned[We have, in fact, considered
constraints from all the channels separately, but the others are
subsumed in the ones shown in the figure, and hence are not shown in
order to have uncluttered figures.]. The jagged shape of the curves reflects
the fact that the LHC has, till now, collected quite a small amount
of data for rare processes like the decay of a heavy scalar.
However, the LHC has the potential to
search much further, and this is shown by the red and yellow-shaded regions,
which represent, respectively, the expectations from the signal strength measurements if μ_XX = 1 ± 0.05 for all X, and the ATLAS and
CMS discovery limits at 95% C.L. for the heavy Φ if the
LHC were to run at 14 TeV and collect 3000 fb^-1 of data<cit.>— which
may not be too far from the reality. For the panel with ξ = 0, there
are no constraints from the signal strengths, since the H is completely
SM-like; but the constraints from the heavy scalar searches are quite strong
because that scalar is a pure radion. A comparative study of the four plots
indicates that the value ξ≈ 1 would permit the largest part of
the parameter space to survive consistently negative results from LHC, while
negative values of ξ are better suited to a discovery of the heavy scalar
predicted in this theory.
Coming to constraints on ξ, it is clear from Figs. <ref>
and <ref> that ξ = 0, which
corresponds to the 125 GeV scalar being the Standard Model Higgs boson —
not surprisingly — is always allowed by the signal strength data. For given
values of M_Φ and Λ_φ,
ξ can range on the positive and negative side, but when its magnitude
grows larger, all new physics effects grow and, at some point, higher
magnitudes of ξ get disallowed – first by the experimental constraints
and then by the requirement of theoretical consistency. For low values of
Λ_φ and M_Φ, we arrive at this point for fairly low values of
ξ. As both these parameters increase, however, the allowed range
grows, creating a funnel-like shape, which grows wider as Λ_φ and
M_Φ increase. This is illustrated in Fig. <ref>, where we
show the Λ_φ-ξ plane for the same choices of M_Φ as in
the earlier figures. The shading and hatching conventions of this figure
are exactly the same as those of Fig. <ref>. It is immediately
obvious that for low values of Λ_φ close to 1 TeV, the range of
ξ is severely constrained by theoretical consistency alone. A heavy
scalar of mass 250 GeV is also rather severely constrained, except for
a narrow cone, which will shrink further when the LHC finishes its run.
Constraints ease up for a heavier scalar, since that is much more difficult
to find. It is interesting that even if LHC completes its run without
finding any evidence for a heavy scalar up to 1 TeV, there will be a range
of parameter space where this model is still allowed. However, for these
parameters, the 125 GeV will be so similar to the SM Higgs boson, and the
interactions of the heavy scalar will be so heavily suppressed that the
model may no longer be interesting, at least from a phenomenological point
of view.
An interesting feature of all the plots in Fig. <ref> is the
needle-thin sliver of allowed parameter space which appears in every graph
close to the vertical axis. This corresponds, in every case, to the
`conformal point' mentioned above, where all constraints from a heavy
scalar search weaken considerably. This region – though extremely fine-tuned –
is interesting in its own right, and therefore we carry out a detailed study in
the next section.
4. The Conformal Point
As explained before, for every choice of M_Φ and Λ_φ, there is
a fixed value ξ = ξ_0 which satisfies the equation c_Φ = 0, and
hence
C(ξ) + γ A(ξ) = 0
and this is known as the `conformal' point[From this stage we drop
the quotes on `conformal'.]. It corresponds to the case when the tree-level
couplings g_Φ XX̅ of the heavy scalar Φ to both the fermions and gauge
bosons (generically denoted X) vanish. This is a curious situation,
arising when the mixing is fine-tuned such that the
parts of the coupling coming from the SM h and the radion φ cancel
each other. Like all fine-tuned situations, if this is realised in Nature, it
can hardly be a random effect, and must represent some deeper structure in the
theory, which is not addressed in our present formulation. Nevertheless, it is interesting to
explore the phenomenological implications of this scenario. In this section,
therefore, we investigate the conformal point and see how it can be constrained using current and projected data, just as the other points can.
It is important to note that though most of the tree-level couplings
of the Φ to pairs of SM particles vanish at the conformal point (except
for the coupling to HH pairs), there exist one-loop couplings to pairs
of gauge bosons through the trace anomaly. This makes the pattern of
branching ratios at the conformal point very different from that in
other regions of parameter space. The most important feature of this is
the fact that the decays Φ→ gg and Φ→γγ are
considerably enhanced with respect to the others – in fact the former
is the dominant decay mode. This behaviour is nicely illustrated in
Fig. <ref>, where we show the relevant
branching ratios in the immediate vicinity of the conformal point.
In Fig. <ref>, it is immediately apparent that for the particular
value ξ = ξ_0, the rates for the tree-level decay modes Φ→ XX̅,
where X is a massive gauge boson or a fermion, drop sharply by many orders of
magnitude. This is particularly true for the cases X = t, b and H,
with the minimum for the last case occurring at a slightly displaced
point from the others (best seen in the zoomed panel on the right).
On the other hand, the branching ratios for the purely
one-loop decays, viz. Φ→ gg and Φ→γγ, grow
at the same point, since their partial decay widths remain
finite while the others drop almost to zero. However, the decays to WW and
ZZ states do not disappear altogether because they too have anomaly
contributions. Naturally the decay Φ→ gg dominates the others because
of the appearance of the strong coupling as well as the colour factor. The
decay Φ→γγ also shows a gentle increase, but is intrinsically
much rarer than the digluon mode. At the conformal point, therefore, constraints on the model must be sought in a different
fashion. One obvious way is to consider the Higgs boson signal
strengths, for even though the couplings of the Φ vanish at this point,
the couplings of the H do not.
Accordingly, there will be contributions to the signal
strengths and these can be used to constrain the model. In fact, even the
heavy scalar searches, i.e. pp → S → VV, where V = W,Z can be used
to a limited extent, since the branching ratios Φ→ VV, though small
at ξ = ξ_0, are not absolutely negligible. However – and this is a
distinct feature of the conformal point – the strongest bounds come from
diphoton searches, which is not entirely surprising, given that this mode is
considerably enhanced at the conformal point.
In trying to understand how the conformal point is constrained by the data,
we need to recognise that the conformal point ξ_0 is not unique, but a
function of M_Φ and Λ_φ, with the dependence on the
former being much stronger than that on the latter. Its variation with M_Φ is shown
in the upper panel of Fig. <ref>, where the thickness
of the line corresponds to variation of Λ_φ
from 1 TeV to 20 TeV.
This plot shows that the variation flattens out as M_Φ
grows above 500 GeV, and has a very weak dependence on Λ_φ. Nevertheless, we have scanned a sizeable portion
of the M_Φ–Λ_φ plane
and calculated the values of ξ_0 at every point by solving
Eq. (<ref>).
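For concreteness, this scan reduces to a bracketed root-finding problem at each grid point. The sketch below is purely illustrative and is not the code used in our analysis: the true C(ξ) and A(ξ) are the mixing functions defined earlier in the paper (simple toy stand-ins are used here so that the snippet runs), and γ = v/Λ_φ is our assumption about the notation.

import numpy as np
from scipy.optimize import brentq

v = 246.0  # electroweak vev in GeV

def C(xi):                 # toy stand-in for the mixing function C(xi)
    return 1.0 - 3.0 * xi

def A(xi):                 # toy stand-in for the mixing function A(xi)
    return 1.0 + xi

def c_Phi(xi, Lam_phi):
    # conformal-point condition C(xi) + gamma*A(xi), with gamma = v/Lam_phi
    return C(xi) + (v / Lam_phi) * A(xi)

xi0 = {}
for Lam_phi in np.linspace(1.0e3, 2.0e4, 39):   # Lambda_phi in GeV
    # bracket chosen wide enough for the toy functions to change sign
    xi0[Lam_phi] = brentq(c_Phi, -5.0, 5.0, args=(Lam_phi,))

In the full analysis, the (stronger) M_Φ-dependence of ξ_0 enters through C(ξ) and A(ξ) themselves.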
Some improvement can be obtained if these
measurements yield results much closer to the SM prediction.
The shaded yellow region represents the predictions from the ZZ decay modes of
a heavy scalar for the LHC running at 14 TeV with 3000 fb^-1 of data <cit.>
(the only such projection currently available), and it does worse than the Run-2 data.
It may be expected that diphoton searches will provide better discovery limits once the Run-2 projections become available.
All in all, we can conclude that the
conformal point is somewhat less constrained than the rest of the parameter
space. It was this narrow window which had been used <cit.> to
explain the purported discovery of a heavy 750 GeV scalar during 2015-2016
<cit.>,
though that proto-signal did not survive the test of time <cit.>.
5. Summary and Outlook
The minimal Randall-Sundrum model continues to be one of the most elegant ways of
solving the hierarchy problem, and it works best with a Goldberger-Wise
stabilisation mechanism, which in turn favours a light radion state. Though
there are strong constraints on such a light radion per se, there remains room
for a light radion mixed with the SM Higgs boson to survive. In this article,
we have explored this possibility, using an existing formalism, in the light of
current data from the LHC Runs 1 and 2. Our findings are summarised below.
The possibility of a radion-Higgs mixing arises essentially because we have no
independent measurement of the Higgs boson self coupling λ, so
that the SM formula M_h^2 = 2λ v^2 is open to other interpretations.
One of these is the mixed radion-Higgs scenario, where the lighter eigenstate
is identified with the 125 GeV scalar discovered at the LHC.
In this model, there are three free parameters,
viz. the mixing parameter ξ, the mass M_Φ of the heavy scalar Φ,
and the radion vacuum expectation value Λ_φ. However,
self-consistency of the theory imposes fairly stringent constraints on the
choices of the mixing parameter ξ. These, as we show,
are further constrained by (a) the signal strengths measured for the decays
of the 125 GeV scalar at the LHC, and (b) the search for a heavy scalar
decaying into a pair of electroweak vector bosons, be they W's, Z's or
photons. These lead to further bounds on the parameter space, essentially
pushing Λ_φ above a TeV (and hence reducing all radion-mediated
effects) and M_Φ to values closer to a TeV, though here some avenues
for a lighter M_Φ remain.
In addition to the current data, we have tried to predict discovery limits
at the LHC in two ways. One way is to use the signal strengths, and assume
that they will eventually converge within 5% of the SM prediction. This
leads to modestly enhanced bounds on the radion-Higgs mixing scenario. The
other way is to use the projected discovery limits from the ATLAS and CMS
Collaborations for a heavy scalar in Run-2, where we identify that heavy
scalar with our heavier eigenstate Φ. This, in fact, is very effective
for most choices of the mixing parameter ξ and is sensitive to rather high
values of M_Φ and Λ_φ. The only exception is at the so-called
conformal point, which is a peculiar feature of this model, involving a value of
the mixing parameter where the heavy scalar essentially decouples from SM fields.
Even this is constrained, however, by the signal strengths and by the diphoton
decay mode, which, being generated by the trace anomaly, survives the vanishing
of the tree-level couplings. Nevertheless, the smallest values of M_Φ and
Λ_φ remain allowed if this scenario is realised.
It is interesting to ask how our results would be modified if we replace
the simplistic model used above with a more phenomenologically-relevant
model where the fields can access the bulk. As explained in the Introduction,
the radion and Higgs fields, being still close to the TeV brane, mix in the same manner <cit.>. The decay of the radion to the
light quarks is severely suppressed because of the small overlap <cit.> of their wavefunctions in the bulk. Decays of the radion to massive gauge bosons are governed by an additional coupling that can be safely neglected for Λ_φ≳ 1 TeV.
Radion decays to massless gauge boson pairs (especially to diphotons)
are significantly enhanced by a tree-level coupling present in the bulk
scenario; this, however, does not affect our region of interest <cit.>. We
feel, therefore, that the results of this work are robust against
more realistic variations of the minimal model and may be safely adopted
in such cases.
To conclude, then, we have shown that a mixed radion-Higgs scenario is quite
consistent with the current experimental data at the LHC, and there is every
possibility that the heavy scalar predicted in this model could be discovered
as the LHC continues to run at its present energy of 13 TeV. Discovery of this
would certainly be one of the most exciting things to happen in the near future,
and, if the branching ratios turn out to be consistent with this model, could
provide a powerful insight into the nature of spacetime itself. Such a happy
consummation is devoutly to be hoped for but, for the present, we must reconcile
ourselves to a fairly long wait as Run-2 of the LHC continues.
Acknowledgements: The authors acknowledge useful discussions with Debjyoti
Bardhan, Disha Bhatia and Abhishek Iyer. The work of SR was partly funded by the
Board of Research in Nuclear Sciences, Government of India, under project no.
2013/37C/37/BRNS.
Higgs
G. Aad et al. [ATLAS Collaboration], Phys. Lett. B 716, 1 (2012);
S. Chatrchyan et al. [CMS Collaboration], Phys. Lett. B 716, 30 (2012).
warped
L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 3370 (1999).
GWmechanism
W. D. Goldberger and M. B. Wise, Phys. Rev. Lett. 83, 4922 (1999).
Chacko
See, for example,
Z. Chacko, R. K. Mishra and D. Stolarski, JHEP 1309, 121 (2013);
B. Bellazzini et al., Eur. Phys. J. C 73, no. 2, 2333 (2013);
Z. Chacko, R. K. Mishra, D. Stolarski and C. B. Verhaaren, Phys. Rev. D 92, 056004 (2015);
D. Elander and M. Piai, arXiv:1703.09205 [hep-th].
R-H-mixing
G. F. Giudice, R. Rattazzi and J. D. Wells, Nucl. Phys. B 595, 250 (2001).
R-H-mixing1
C. Csaki, M. L. Graesser and G. D. Kribs, Phys. Rev. D 63, 065002 (2001).
R-H-mixing2
D. Dominici et al., Nucl. Phys. B 671, 243 (2003).
Huber:2000ie
S. J. Huber and Q. Shafi, Phys. Lett. B 498, 256 (2001).
Gherghetta:2000qt
T. Gherghetta and A. Pomarol, Nucl. Phys. B 586, 141 (2000).
Grossman:1999ra
Y. Grossman and M. Neubert, Phys. Lett. B 474, 361 (2000).
Agashe:2006at
K. Agashe, R. Contino, L. Da Rold and A. Pomarol, Phys. Lett. B 641, 62 (2006).
Iyer:2015ywa
A. M. Iyer, K. Sridhar and S. K. Vempati, Phys. Rev. D 93, 075008 (2016).
bulk_radion
C. Csaki, J. Hubisz and S. J. Lee, Phys. Rev. D 76, 125015 (2007).
R-H-pheno
K. m. Cheung, Phys. Rev. D 63, 056007 (2001);
M. Chaichian, A. Datta, K. Huitu and Z. h. Yu, Phys. Lett. B 524, 161 (2002);
A. Datta and K. Huitu, Phys. Lett. B 578, 376 (2004);
P. K. Das, S. K. Rai and S. Raychaudhuri, Phys. Lett. B 618, 221 (2005);
H. de Sandes and R. Rosenfeld, Phys. Rev. D 85, 053003 (2012);
V. Barger, M. Ishida and W. Y. Keung, Phys. Rev. Lett. 108, 101802 (2012);
H. Kubota and M. Nojiri, Phys. Rev. D 87, 076011 (2013);
G. C. Cho, D. Nomura and Y. Ohno, Mod. Phys. Lett. A 28, 1350148 (2013);
N. Desai, U. Maitra and B. Mukhopadhyaya, JHEP 1310, 093 (2013);
P. Cox et al., JHEP 1402, 032 (2014);
J. Cao et al., JHEP 1401, 150 (2014);
D. W. Jung and P. Ko, Phys. Lett. B 732, 364 (2014);
H. Kubota and M. Nojiri, Phys. Rev. D 90, no. 3, 035006 (2014);
E. Boos et al., Phys. Rev. D 90, no. 9, 095026 (2014);
S. Bhattacharya et al., Phys. Rev. D 91, 016008 (2015);
P. R. Archer et al., JHEP 1501, 060 (2015);
A. Efrati et al., Phys. Rev. D 91, no. 5, 055034 (2015);
E. E. Boos et al., Phys. Rev. D 92, no. 9, 095010 (2015).
precision
J. F. Gunion, M. Toharia and J. D. Wells, Phys. Lett. B 585, 295 (2004).
SS_ATLCMS_comb
G. Aad et al. [ATLAS and CMS Collaborations], JHEP 1608, 045 (2016).
SS_CMS_gmgm_13tev
CMS Collaboration [CMS Collaboration], CMS-PAS-HIG-16-020.
SS_ATL_gmgm_13tev
The ATLAS collaboration [ATLAS Collaboration], ATLAS-CONF-2016-067.
SS_CMS_ZZ_13tev
CMS Collaboration [CMS Collaboration], CMS-PAS-HIG-16-041.
HH_CMS_gmgm_8tev
V. Khachatryan et al. [CMS Collaboration], Phys. Lett. B 750, 494 (2015).
HH_ATL_gmgm_13tev
The ATLAS collaboration [ATLAS Collaboration], ATLAS-CONF-2016-059.
HH_CMS_gmgm_13tev
CMS Collaboration [CMS Collaboration], CMS-PAS-EXO-16-027.
HH_ATL_WW_8tev
G. Aad et al. [ATLAS Collaboration], JHEP 1601, 032 (2016).
HH_CMS_WWandZZ_8tev
V. Khachatryan et al. [CMS Collaboration], JHEP 1510, 144 (2015).
HH_ATL_WW_13tev
The ATLAS collaboration [ATLAS Collaboration], ATLAS-CONF-2016-062.
HH_CMS_WW_13tev
CMS Collaboration [CMS Collaboration], CMS-PAS-HIG-16-023.
HH_ATL_ZZ_8tev
G. Aad et al. [ATLAS Collaboration], Eur. Phys. J. C 76, no. 1, 45 (2016).
HH_ATL_ZZ_13tev
The ATLAS collaboration [ATLAS Collaboration], ATLAS-CONF-2016-079.
HH_CMS_ZZ_13tev
CMS Collaboration [CMS Collaboration], CMS-PAS-HIG-16-033.
HH_ATL_tautau_8tev
G. Aad et al. [ATLAS Collaboration], JHEP 1411, 056 (2014).
HH_CMS_tautau_8tev
CMS Collaboration [CMS Collaboration], CMS-PAS-HIG-14-029.
HH_ATL_tautau_13tev
The ATLAS collaboration [ATLAS Collaboration], ATLAS-CONF-2016-085.
HH_CMS_tautau_13tev
CMS Collaboration [CMS Collaboration], CMS-PAS-HIG-16-006.
HH_ATL_hh_8tev
G. Aad et al. [ATLAS Collaboration], Phys. Rev. Lett. 114, no. 8, 081802 (2015);
G. Aad et al. [ATLAS Collaboration], Eur. Phys. J. C 75, no. 9, 412 (2015);
G. Aad et al. [ATLAS Collaboration], Phys. Rev. D 92, 092004 (2015).
HH_CMS_hh_8tev
V. Khachatryan et al. [CMS Collaboration], Phys. Lett. B 749, 560 (2015);
V. Khachatryan et al. [CMS Collaboration], [arXiv:1603.06896 [hep-ex]];
CMS Collaboration [CMS Collaboration], CMS-PAS-HIG-15-013.
HH_ATL_hh_13tev
The ATLAS collaboration, ATLAS-CONF-2016-004;
The ATLAS collaboration, ATLAS-CONF-2016-017.
HH_CMS_hh_13tev
CMS Collaboration [CMS Collaboration], CMS-PAS-HIG-16-002;
CMS Collaboration [CMS Collaboration], CMS-PAS-HIG-16-032;
CMS Collaboration [CMS Collaboration], CMS-PAS-HIG-17-002.
HH_CMS_bb
CMS Collaboration [CMS Collaboration], CMS-PAS-HIG-16-025.
ATLAS:2016ixk
The ATLAS collaboration [ATLAS Collaboration], ATLAS-CONF-2016-049.
Frank:2016oqi
M. Frank et al., Phys. Rev. D 94, no. 5, 055016 (2016).
HH_ZZ_ATL_14tev
The ATLAS Collaboration, ATL-PHYS-PUB-2013-016.
HH_ZZ_CMS_14tev
CMS Collaboration [CMS Collaboration], CMS-PAS-FTR-13-024.
radion_conf
A. Ahmed et al., arXiv:1512.05771 [hep-ph];
D. Bardhan et al., arXiv:1512.06674 [hep-ph].
750GeV
The ATLAS collaboration, ATLAS-CONF-2015-081;
CMS Collaboration, CMS-PAS-EXO-15-004.
Toharia:2008tm
M. Toharia, Phys. Rev. D 79, 015009 (2009).
|
http://arxiv.org/abs/1701.07884v1 | 20170126214458 | Band-structure-dependence of renormalization-group prediction on pairing channels | ["Yi-Ting Hsu", "Alejandro Federico Rebola", "Craig J. Fennie", "Eun-Ah Kim"] | cond-mat.supr-con | ["cond-mat.supr-con", "cond-mat.str-el"] |
Department of Physics, Cornell University, Ithaca, New York 14853, USA
School of Applied and Engineering Physics, Cornell University, Ithaca, New York 14853, USA
School of Applied and Engineering Physics, Cornell University, Ithaca, New York 14853, USA
Department of Physics, Cornell University, Ithaca, New York 14853, USA
Kavli Institute for Theoretical Physics
Recent experimental advances in using strain engineering to significantly alter the band structure of moderately correlated systems offer both opportunities and challenges to weak-coupling renormalization group (RG) approaches for predicting superconducting instabilities. On the one hand, the RG approach can provide theoretical guidance. On the other hand, it is now imperative to better understand
how the predictions of the RG approach depend on microscopic and non-universal model details. Here we focus on the effect of the band-selective mass renormalization often observed in angle-resolved photoemission spectroscopy. Taking uniaxially strained Sr_2RuO_4 as a specific example, we carry out the weak-coupling RG analysis from two sets of band structures as starting points: one based on density functional theory (DFT) calculations and the other based on angle-resolved photoemission spectroscopy (ARPES) measurements.
Despite good agreement between the Fermi surfaces of the two band structures,
we find them to predict qualitatively different trends in the strain dependence of the superconducting transition temperature T_c, as well as in the dominant pairing channel.
Band-structure-dependence of renormalization-group prediction on pairing channels
Eun-Ah Kim
December 30, 2023
=================================================================================
§ INTRODUCTION
Strain of a magnitude that can significantly alter the band structure of correlated materials, recently achieved via bulk strain <cit.> or epitaxial strain <cit.>, now offers a new axis of control beyond traditional means.
This new dimension presents both an opportunity and a challenge for theory.
On the bright side, weak-coupling renormalization group (RG) approaches <cit.> that take band structures based on microscopic information as starting points can now aspire to guide experimental efforts <cit.>. Nonetheless, such proximity to reality sets a higher bar for the theory. In particular, the importance of better understanding
the sensitivity of the RG predictions to microscopic details cannot be overstated.
Here, we focus on the impact of band-specific mass renormalization, which can affect the RG prediction for the dominant pairing channel in a qualitative manner. It is well known that band structures obtained using DFT predict bandwidths inaccurately. In a single-band system, this is often remedied through an overall rescaling. The growing interest in multi-band systems has presented a new challenge: the discrepancy in the band mass, often referred to as “mass renormalization”, is frequently band selective<cit.>. Under such band-specific mass renormalization there is no simple way to reconcile the discrepancy between the dispersion of a density functional theory (DFT) based band structure and that of angle-resolved photoemission spectroscopy (ARPES) measurements, even when the two band structures exhibit virtually identical Fermi surfaces (FSs). The critical question is then the possible impact of such mass renormalization on a superconducting instability. We investigate this issue in the context of the strain dependence of the superconducting instability in Sr_2RuO_4.
Partially driven by the fact that Sr_2RuO_4 is the leading candidate material for a two-dimensional (2D) topological superconductor <cit.>, recent strain-engineering efforts and careful studies of the experimental band structure have focused on this material. In particular, Burganov et al. <cit.> reported band-specific mass renormalization, and also showed that ruthenate films can undergo a Lifshitz transition upon an epitaxial biaxial strain of order 1.6% <cit.>. More recently, Steppke et al. <cit.> found the superconducting transition temperature (T_c) of Sr_2RuO_4 to change under a bulk compressive uniaxial strain and to peak at a certain strain magnitude [see Fig. <ref>(a)].
They further found that the upper critical field shows a decreased anisotropy at the T_c maximum, which indicates that the pairing becomes spin-singlet at that point.
Nonetheless, the large mass renormalization found in the ARPES data of biaxially strained ruthenates suggests possible discrepancies between DFT-based band structures and the actual band structure under uniaxial strain.
In particular, the band-selective mass renormalization in Sr_2RuO_4 was found to be of the order of 30%, larger in the quasi-2D band than in the quasi-1D bands, depending on the strain magnitude <cit.>.
The purpose of this article is to point out a fact that has so far been under-appreciated by the community: perturbative RG results can be very sensitive to details of the input band structures. This implies that extra attention is necessary in attempts to connect RG predictions with experiments.
To demonstrate our point, we take uniaxially strained Sr_2RuO_4 as an example and contrast the RG results based on two sets of tight-binding parameters: one obtained from a DFT calculation, and the other from extrapolating the ARPES data taken in the absence of strain.
We then compare the two sets of results to the measured strain-dependent T_c <cit.>.
§ THE MODEL AND APPROACH
Our model for the uniaxially strained Sr_2RuO_4 is a three-band Hubbard model derived from the Ru t_2g orbitals d_xz, d_yz, and d_xy:
H(ϵ)= ∑_k⃗ασE^α_k⃗(ϵ)c^†_k⃗,α,σc_k⃗,α,σ
+U∑_iαn_i,α,↑n_i,α,↓,
where ϵ<0 denotes the compressive uniaxial strain along the [100] direction. Here, k⃗=(k_x,k_y), α=xz,yz,xy, and σ=↑,↓ denote the crystal momentum, the orbital index, and the spin, respectively, and n_i,α,σ≡ c^†_i,α,σc_i,α,σ.
We employ the following tight-binding parameterization for intra-orbital kinetic energies:
E^xz_k⃗(ϵ)=-2t_x(ϵ)cos k_x-2t^⊥_y(ϵ)cos k_y-μ_1(ϵ)
E^yz_k⃗(ϵ)=-2t_y(ϵ)cos k_y-2t^⊥_x(ϵ)cos k_x-μ_1(ϵ)
E^xy_k⃗(ϵ)=-2t'_x(ϵ)cos k_x-2t'_y(ϵ)cos k_y
-4t”(ϵ)cos k_xcos k_y-μ_2(ϵ),
where we neglect the orbital-mixing terms [Although
Scaffidi et al. <cit.> found the spin-orbit coupling to significantly affect the nature and mechanism of pairing in the unstrained system, the van Hove singularities, which sit at the points X=(π,0) and Y=(0,π), lie in the region of the FS where the orbital characters are well-defined<cit.>.
Hence, we expect that the absence of orbital-mixing terms in our model will not affect our conclusions in a qualitative manner.].
The dispersions of the three bands in Eq. (<ref>) lead to two quasi-1D FSs comprising the Ru orbitals d_xz and d_yz, and
one quasi-2D FS comprising the
Ru orbital d_xy.
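For reference, these intra-orbital dispersions are straightforward to evaluate on a k-grid. The following minimal sketch (our own illustration; the dictionary keys are hypothetical names for the hopping parameters in Eq. (<ref>)) transcribes the three bands directly:

import numpy as np

def bands(kx, ky, t, mu1, mu2):
    """t maps 'tx','ty','tpx','tpy','t1x','t1y','t2' to the hoppings
    t_x, t_y, t^perp_x, t^perp_y, t'_x, t'_y, t'' of Eq. (<ref>)."""
    E_xz = -2.0 * t["tx"] * np.cos(kx) - 2.0 * t["tpy"] * np.cos(ky) - mu1
    E_yz = -2.0 * t["ty"] * np.cos(ky) - 2.0 * t["tpx"] * np.cos(kx) - mu1
    E_xy = (-2.0 * t["t1x"] * np.cos(kx) - 2.0 * t["t1y"] * np.cos(ky)
            - 4.0 * t["t2"] * np.cos(kx) * np.cos(ky) - mu2)
    return E_xz, E_yz, E_xy

# e.g. the Fermi surfaces follow from the zero contours of each band:
k = np.linspace(-np.pi, np.pi, 200)
kx, ky = np.meshgrid(k, k)

Evaluating the zero contours of E^xz and E^yz on such a grid yields the two quasi-1D sheets, while that of E^xy yields the quasi-2D sheet.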
For the bare interaction, we focus on the
repulsive intra-orbital on-site repulsion U>0<cit.> given the experimentally observed unconventional pairing in as-grown Sr_2RuO_4 <cit.>.
Our model differs from that of Ref. <cit.> in that
the latter emphasized the inter-band coupling. Starting from their DFT-based band structure, the authors of Ref. <cit.> found the quasi-1D bands to be the leading pairing channels, although it is the 2D band that goes through the Lifshitz transition. An inter-band repulsion U' of the significant magnitude 0.84U was therefore crucial in their model for the predominantly 1D-band-driven superconductivity to nevertheless show the experimentally observed peak in T_c as a function of uniaxial strain, even though it is the 2D band that approaches the van Hove singularity. Here, we focus on the results in the absence of
the inter-orbital repulsion U'. Nevertheless, we have checked that an inter-orbital U'≤ 0.5U makes no qualitative difference to the results we report in this paper.
Our main concern is the effect on the pairing instability of the mass renormalization that is often present in measured band structures, even though the calculated and measured FSs can be qualitatively similar [see Fig. <ref>]. Thus, in the following,
we contrast and compare the RG predictions starting from two sets of tight-binding parameters E^α_k⃗(ϵ) in Eq. (<ref>): 1) parameters fitted to DFT calculations with varying degrees of strain, and 2) parameters fitted to the available unstrained ARPES data and strained appropriately. For the first set of parameters, we performed DFT calculations by fixing the [100] lattice constant to the desired strain value and letting all internal parameters, as well as the transverse lattice constants, fully relax. All our DFT calculations were performed with VASP <cit.>, using the PBEsol exchange-correlation functional, a plane-wave basis cutoff of 520 eV and a 12×12×12 k-point sampling of the Brillouin zone. The band structure thus obtained was then used to fit the tight-binding model in Eq. (<ref>).
For the second set of parameters,
we use the parameters extracted from the ARPES data of unstrained Sr_2RuO_4 <cit.> at ϵ=0. As no ARPES data are currently available under uniaxial strain ϵ<0, we determine the tight-binding parameters under strain by extrapolating the unstrained ARPES-extracted parameters. For this, we determine the percentage change of each parameter under strain from the first set of DFT-extracted parameters, p(ϵ)≡ t_x^DFT(ϵ)/t_x^DFT(0)-1, and then estimate each strained parameter starting from the ARPES-measured unstrained value as t_x(ϵ)=t_x(0)[1+p(ϵ)].
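In code form, the extrapolation is a one-line rescaling applied to each hopping and chemical potential (a sketch of the procedure just described, with hypothetical variable names):

def strained_parameter(t_arpes_0, t_dft_eps, t_dft_0):
    # fractional change of this parameter under strain, taken from DFT
    p = t_dft_eps / t_dft_0 - 1.0
    # ARPES-measured unstrained value scaled by the DFT-derived change
    return t_arpes_0 * (1.0 + p)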
The key difference between the first and the second sets of parameters is the band-selective mass renormalization that has been measured in the ARPES data of Ref. <cit.> in the absence of strain.
Although it is well known that DFT often underestimates band masses,
band-selective mass renormalization in multi-band systems poses challenges that have been under-appreciated.
It turns out that the mass renormalization in Sr_2RuO_4 is concentrated in the 2D band, which substantially enhances the density of states of that band at the Fermi level <cit.>[
Here, for the second set of parameters, we have effectively assumed that the bias in the mass renormalization stays constant as the uniaxial strain increases, since ARPES data under strain are not yet available.]
To study the dominant pairing channels under strain, we then carry out a two-step perturbative RG analysis with the microscopic model given by Eqs. (<ref>) and (<ref>) and the two sets of tight-binding parameters.
For completeness, we now briefly review the perturbative two-step RG approach<cit.> we adopt.
In the first step, we numerically evaluate the effective pairing vertices in different channels at some intermediate energy scale E=Λ_0 near the Fermi level by integrating out the higher-energy modes down to Λ_0.
Up to the one-loop order, the singlet and triplet effective pairing vertices Γ_s/t^α(k̂,k̂'̂) at energy Λ_0 are related to the repulsive bare interaction U and the non-interacting static particle-hole susceptibilities Π_ph^α(q⃗) for band α at momentum q⃗ through
Γ_s^α(k̂,k̂'̂)= U+U^2Π_ph^α(k̂+k̂'̂),
and
Γ_t^α(k̂,k̂'̂)= -U^2Π_ph^α(k̂-k̂'̂),
where k̂^(') are the outgoing (incoming) momenta on the FS of band α.
Now, the pairing tendency of band α in the singlet and triplet channels can be quantified by the most negative eigenvalue λ̃_s/t^α≡λ_s/t^α(E=Λ_0) of a dimensionless matrix g_s/t^α(k̂,k̂'̂), which is a
product of the density of states ρ^α
on the Fermi surface of the band α
and the normalized effective pairing vertices at the energy scale Λ_0:
g_s/t^α(k̂,k̂'̂)=ρ^α√(v̅_̅F̅^α/v_F^α(k̂))Γ_s/t^α(k̂,k̂'̂)√(v̅_̅F̅^α/v_F^α(k̂'̂)).
Here, v_F^α(k̂) is the magnitude of Fermi velocity at k̂, and 1/v̅_̅F̅^α≡∫dp̂/S_f^α1/v_F^α(p̂) with S_F^α≡∫ dp̂ being the FS `area' of band α.
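As an illustration of this step, a schematic numpy implementation (ours, not an established library routine) that builds g_s/t^α on a discretized Fermi surface and extracts its most negative eigenvalue could read:

import numpy as np

def most_negative_eigenvalue(Gamma, vF, rho):
    """Gamma: (N, N) symmetric effective vertex on N Fermi-surface points;
    vF: (N,) Fermi velocities |v_F(k)|; rho: density of states of the band."""
    vF_bar = 1.0 / np.mean(1.0 / vF)           # harmonic FS average of v_F
    w = np.sqrt(vF_bar / vF)                    # velocity weights
    g = rho * (w[:, None] * Gamma * w[None, :])
    return np.linalg.eigvalsh(g).min()

# toy usage: an attractive separable vertex on a 64-point 'Fermi surface'
N = 64
phi = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
Gamma = -0.1 * np.cos(phi[:, None] - phi[None, :])
lam = most_negative_eigenvalue(Gamma, vF=np.ones(N), rho=0.5)

Here a uniform discretization of the Fermi surface is assumed, so that the average 1/v̅_F reduces to a simple mean over the sampled points.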
In the second step, we study the evolution of the most negative eigenvalues λ_s/t^α(E) for the different channels (α, s/t) as the energy E is lowered from Λ_0 towards 0. Given the well-known RG flow for the Cooper instability,
dλ_s/t^α/dy=-(λ_s/t^α)^2, with the RG running parameter y≡log(Λ_0/E) <cit.>,
we can relate T_c to the critical energy scale at which the most negative λ_s/t^α(y) among all channels diverges:
T_c∼ W^αe^-1/|λ̃|,
where W^α is the bandwidth of the dominant band α, and λ̃ is the most negative λ̃_s/t^α among all channels.
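For completeness, we spell out the step connecting the flow to this estimate. With the initial condition λ_s/t^α(y=0)=λ̃_s/t^α<0, the flow equation integrates to

λ_s/t^α(y) = λ̃_s/t^α/(1+λ̃_s/t^α y),

which diverges at y^*=1/|λ̃|, i.e., at the energy scale E_c=Λ_0 e^-1/|λ̃|; identifying T_c with E_c up to a prefactor of order the bandwidth reproduces the estimate above.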
§ RESULTS
Using the first set of tight-binding parameters obtained from DFT, we find the critical energy scale defined in Eq. (<ref>) to increase monotonically with the compressive
uniaxial strain ϵ<0 in the [100] direction [see Fig. <ref>(b)].
This is because the active band is the yz-orbital-based 1D band, whose density of states increases monotonically with the compressive strain, as opposed to that of the xy-orbital-based 2D band, which peaks at the strain ϵ_VHS where the 2D-band FS goes through the Lifshitz transition.
The 1D band dominates over the 2D band despite the fact that the 2D-band density of states ρ^xy is slightly larger than that of the 1D band, ρ^yz. This is
because the particle-hole susceptibility of the 1D band peaks sharply at q⃗∼(π,2k_F) due to the high degree of nesting. This feature is shared between our DFT-based band structure and the DFT-based band structure used in Ref. <cit.>. Similarly, the authors of Ref. <cit.> also found the 1D band
to dominate the pairing instability. It was only with a substantial inter-orbital coupling U' that they found the T_c scaling to peak, riding on the van Hove singularity touched by the 2D-band FS.
By contrast, the T_c calculated using the second set of parameters, based on ARPES data, peaks as a function of strain even in the absence of any inter-orbital coupling. This is because the 2D band is now the active band, owing to the mass renormalization that is substantially more severe for the 2D band<cit.>. Hence, T_c peaks when the 2D FS goes through the Lifshitz transition at the strain ϵ_VHS [see the dashed line in Fig. <ref>(c)].
Note that in the close vicinity of the van Hove singularity at (±π,0), the parity-even singlet dominates over the parity-odd triplet pairing tendency, as the latter is expected to be suppressed by symmetry<cit.>[The triplet tendency is expected to vanish right at ϵ_VHS, which is not captured in Fig. <ref>(c) because the perturbative RG analysis cannot access the non-perturbative regime.].
Interestingly, the predictions of a peak in T_c and of the dominance of singlet pairing in the close vicinity of the peak agree with what was observed in the experiment [see Fig. <ref>(a)]. The fact that key experimental features are robustly reproduced by the RG prediction with a simple model for the interactions is rather appealing.
To summarize, we investigated how perturbative RG predictions for superconducting instabilities depend on often-understated aspects of the band structure beyond the Fermi surface. We found that, in a multi-band model, the balance between the mass renormalizations of different bands can change the balance between different pairing channels, and thus the qualitative trends of the pairing properties under external knobs.
Motivated by the recent experimental findings of (1) T_c peaking at a finite uniaxial strain and (2) singlet pairing near the peak, we investigated the specific example of uniaxially strained Sr_2RuO_4 with two sets of band structures. We found the two band structures to yield qualitatively different trends in T_c as a function of strain: while the DFT-based band structure fails to reproduce the observed peak in T_c in the absence of a strong inter-orbital interaction<cit.>, the ARPES-based band structure reproduces the observed peak even with a simple Hubbard-type model. This shows that band-selective mass renormalizations can affect the balance between different superconducting channels, and hence calls for realistic band-structure information to accompany strain-engineering studies.
Acknowledgement –
Y-TH was supported by the Cornell Center for Materials Research with funding from the NSF MRSEC program (DMR-1120296) and E-AK was supported by U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Science and Engineering under Award DE-SC0010313. E.-A.K. acknowledges Simons Fellow in Theoretical Physics Award #392182 and thanks hospitality of KITP supported by Grant No. NSF PHY11- 25915. AFR and CJF acknowledge support from the NSF grant no. DMR-1056441.
[Hicks et al.(2014)Hicks,
Brodsky, Yelland, Gibbs,
Bruin, Barber, Edkins,
Nishimura, Yonezawa, Maeno, and Mackenzie]Hicks18042014
author author C. W. Hicks, author D. O. Brodsky,
author E. A. Yelland, author A. S. Gibbs, author
J. A. N. Bruin, author
M. E. Barber, author
S. D. Edkins, author
K. Nishimura, author
S. Yonezawa, author
Y. Maeno, and author
A. P. Mackenzie, http://www.sciencemag.org/content/344/6181/283.abstract journal journal Science volume
344, pages 283 (year 2014)NoStop
[Steppke et al.(2017)Steppke, Zhao, Barber, Scaffidi, Jerzembeck, Rosner, Gibbs, Maeno, Simon, Mackenzie, and Hicks]HicksVHS
author author A. Steppke, author L. Zhao,
author M. E. Barber, author T. Scaffidi, author
F. Jerzembeck, author
H. Rosner, author A. S. Gibbs, author Y. Maeno, author S. H. Simon, author A. P. Mackenzie, and author C. W. Hicks, 10.1126/science.aaf9398 journal
journal Science volume 355 (year 2017), 10.1126/science.aaf9398NoStop
[Burganov et al.(2016)Burganov, Adamo, Mulder, Uchida, King, Harter, Shai,
Gibbs, Mackenzie, Uecker,
Bruetzam, Beasley, Fennie,
Schlom, and Shen]bulatBRO
author author B. Burganov, author C. Adamo,
author A. Mulder, author M. Uchida, author
P. D. C. King, author
J. W. Harter, author
D. E. Shai, author
A. S. Gibbs, author
A. P. Mackenzie, author
R. Uecker, author M. Bruetzam, author M. R. Beasley, author C. J. Fennie, author D. G. Schlom, and author K. M. Shen, 10.1103/PhysRevLett.116.197003
journal journal Phys. Rev. Lett. volume 116, pages 197003 (year
2016)NoStop
[Raghu et al.(2010a)Raghu, Kivelson, and Scalapino]Twostep
author author S. Raghu, author S. A. Kivelson,
and author D. J. Scalapino, 10.1103/PhysRevB.81.224505 journal journal Phys. Rev. B volume 81, pages 224505 (year 2010a)NoStop
[Raghu et al.(2010b)Raghu, Kapitulnik, and Kivelson]SriSRO
author author S. Raghu, author A. Kapitulnik, and author S. A. Kivelson, 10.1103/PhysRevLett.105.136401 journal
journal Phys. Rev. Lett. volume 105, pages 136401 (year 2010b)NoStop
[Scaffidi et al.(2014)Scaffidi, Romers, and Simon]ScaffidiSROV
author author T. Scaffidi, author J. C. Romers, and author S. H. Simon, 10.1103/PhysRevB.89.220510 journal
journal Phys. Rev. B volume 89, pages 220510 (year 2014)NoStop
[Chubukov et al.(2008)Chubukov, Efremov, and Eremin]IronScChubukov
author author A. V. Chubukov, author D. V. Efremov, and author I. Eremin, 10.1103/PhysRevB.78.134512 journal journal Phys. Rev. B volume
78, pages 134512 (year 2008)NoStop
[Hsu et al.(2016)Hsu,
Cho, Rebola, Burganov,
Adamo, Shen, Schlom,
Fennie, and Kim]BROsc
author author Y.-T. Hsu, author W. Cho, author A. F. Rebola, author
B. Burganov, author
C. Adamo, author K. M. Shen, author D. G. Schlom, author C. J. Fennie, and author E.-A. Kim, 10.1103/PhysRevB.94.045118 journal journal Phys. Rev. B volume
94, pages 045118 (year 2016)NoStop
[Kallin(2012)]SROreviewKallin
author author C. Kallin, http://stacks.iop.org/0034-4885/75/i=4/a=042501
journal journal Reports on Progress in Physics volume 75, pages 042501 (year 2012)NoStop
[Ishida et al.(1998)Ishida,
Mukuda, Kitaoka, Asayama,
Mao, Mori, and Maeno]Ishida1998
author author K. Ishida, author H. Mukuda,
author Y. Kitaoka, author K. Asayama, author
Z. Q. Mao, author Y. Mori, and author Y. Maeno, http://dx.doi.org/10.1038/25315
journal journal Nature volume 396, pages 658 (year
1998)NoStop
[Nelson et al.(2004)Nelson,
Mao, Maeno, and Liu]Nelson12112004
author author K. D. Nelson, author Z. Q. Mao,
author Y. Maeno, and author Y. Liu, http://www.sciencemag.org/content/306/5699/1151.abstract journal journal Science volume
306, pages 1151 (year 2004)NoStop
[Kidwingira et al.(2006)Kidwingira, Strand, Van Harlingen, and Maeno]Kidwingira24112006
author author F. Kidwingira, author J. D. Strand, author D. J. Van Harlingen, and author Y. Maeno, http://www.sciencemag.org/content/314/5803/1267.abstract
journal journal Science volume 314, pages 1267 (year
2006)NoStop
[Kirtley et al.(2007)Kirtley, Kallin, Hicks, Kim, Liu, Moler, Maeno, and Nelson]PhysRevB.76.014526
author author J. R. Kirtley, author C. Kallin,
author C. W. Hicks, author E.-A. Kim, author
Y. Liu, author K. A. Moler, author Y. Maeno, and author K. D. Nelson, 10.1103/PhysRevB.76.014526
journal journal Phys. Rev. B volume 76, pages 014526 (year
2007)NoStop
[Curran et al.(2014)Curran,
Bending, Desoky, Gibbs,
Lee, and Mackenzie]PhysRevB.89.144504
author author P. J. Curran, author S. J. Bending,
author W. M. Desoky, author A. S. Gibbs, author
S. L. Lee, and author
A. P. Mackenzie, 10.1103/PhysRevB.89.144504 journal journal Phys.
Rev. B volume 89, pages 144504
(year 2014)NoStop
[Jang et al.(2011)Jang,
Ferguson, Vakaryuk, Budakian,
Chung, Goldbart, and Maeno]Jang14012011
author author J. Jang, author D. G. Ferguson,
author V. Vakaryuk, author R. Budakian, author
S. B. Chung, author
P. M. Goldbart, and author
Y. Maeno, http://www.sciencemag.org/content/331/6014/186.abstract journal journal Science volume
331, pages 186 (year 2011)NoStop
[Note1()]Note1
note Although Scaffidi et al. (Ref. ScaffidiSROV) found the
spin-orbit coupling to significantly affect the nature and mechanism of
pairing in the unstrained system, the van Hove singularities which sit at
point X=(π ,0) and Y=(0,π ) lie in the region of the FS where orbital
characters are well-defined<cit.>. Hence, we
expect the absence of orbital-mixing terms in our model would not affect our
conclusions in a qualitative manner.Stop
[Kresse and Furthmüller(1996)]DFT1
author author G. Kresse and author J. Furthmüller, 10.1103/PhysRevB.54.11169 journal journal Phys. Rev. B volume
54, pages 11169 (year 1996)NoStop
[Blöchl(1994)]DFT2
author author P. E. Blöchl, 10.1103/PhysRevB.50.17953 journal journal Phys. Rev. B volume
50, pages 17953 (year 1994)NoStop
[Note2()]Note2
note Here for the second set of parameters, we have effectively
assumed the bias in mass renormalization stays constant as the uniaxial
strain increases though the ARPES data is yet not available.Stop
[Shankar(1994)]ShankarRG
author author R. Shankar, 10.1103/RevModPhys.66.129 journal journal Rev. Mod. Phys. volume
66, pages 129 (year 1994)NoStop
[Yao and Yang(2015)]YaoVHSII
author author H. Yao and author F. Yang, 10.1103/PhysRevB.92.035132 journal journal Phys. Rev. B volume 92, pages 035132 (year 2015)NoStop
[Nandkishore et al.(2014)Nandkishore, Thomale, and Chubukov]ChubukovHexSC
author author R. Nandkishore, author R. Thomale, and author A. V. Chubukov, 10.1103/PhysRevB.89.144501 journal journal Phys. Rev. B volume
89, pages 144501 (year 2014)NoStop
[Note3()]Note3
note The triplet tendency is expected to vanish right at
ϵ _VHS which is not captured in Fig. <ref>(c)
because the perturbative RG analysis cannot access the non-perturbative
regime.Stop
|
http://arxiv.org/abs/1701.07556v2 | 20170126025946 | First-principles calculations of the magnetic and electronic structures of MnP under pressure | ["Yuanji Xu", "Min Liu", "Ping Zheng", "Xiangrong Chen", "Jin-guang Cheng", "Jianlin Luo", "Wenhui Xie", "Yi-feng Yang"] | cond-mat.supr-con | ["cond-mat.supr-con", "cond-mat.str-el"] |
First-principles calculations of the magnetic and electronic structures of MnP under pressure
^1Beijing National Laboratory for Condensed Matter Physics and Institute of Physics, Chinese Academy of Sciences, Beijing 100190, China
^2School of Physical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China
^3College of Physical Science and Technology, Sichuan University, Chengdu 610065, China
^4Collaborative Innovation Center of Quantum Matter, Beijing 100190, China
^5Department of Physics, Engineering Research Center for Nanophotonics and Advanced Instrument, East China Normal University, Shanghai 20062, China
yifeng@iphy.ac.cn
Manganese monophosphide (MnP) shows complicated magnetic states varying with both temperature and pressure. We calculate the magnetic and electronic structures of MnP at different pressures using first-principles methods and obtain spiral ground states whose propagation vector changes from the c-axis at low pressure to the b-axis at high pressure. In between, we find a ferromagnetic state, as observed in the experimental phase diagram. The propagation vector of the spiral states is found to vary nonmonotonically with pressure, consistent with neutron measurements. Our results indicate that the complicated magnetic phase diagram originates from a delicate competition between neighboring exchange interactions between the Mn-ions. At all pressures, the electronic structures indicate the existence of quasi-one-dimensional charge carriers, which appear in the ferromagnetic state and become gapped in the spiral state, and anisotropic three-dimensional charge carriers. We argue that this two-fluid behavior originates from the special crystal structure of MnP and may be relevant for understanding the pairing mechanism of the superconductivity at the border of the high pressure spiral phase.
71.15.Mb, 71.20.-b, 75.10.-b
Keywords: density functional theory, non-collinear magnetism, electronic structure
Yuanji Xu^1,2, Min Liu^1,3, Ping Zheng^1, Xiangrong Chen^3, Jin-guang Cheng^1, Jianlin Luo^1,2,4, Wenhui Xie^5 and Yi-feng Yang^1,2,4,*
===========================================================================================================================================
§ INTRODUCTION
Recently, MnP was found to display superconductivity with T_c≈ 1 K at pressures around 7-8 GPa, near the border of a long-range magnetic phase <cit.>. As the first Mn-based superconductor, it immediately raises the question of its pairing mechanism <cit.>. The fact that superconductivity emerges at the border of a long-range magnetic order points to the possibility of magnetic glues due to spin fluctuations <cit.>, as has been proposed for the CrAs superconductivity <cit.>, but first-principles calculations in the framework of the weak-coupling BCS (Bardeen-Cooper-Schrieffer) theory could also yield the correct transition temperature T_c based solely on the electron-phonon coupling <cit.>. The issue therefore remains controversial. Moreover, we still lack a good theoretical understanding of the associated magnetic and electronic structures, possibly due to the complicated phase diagram involving multiple magnetic orders varying with pressure and temperature, as shown in figure <ref>(a) <cit.>.
The magnetic orders in MnP have been a subject of many experimental and theoretical studies since the 1960s <cit.>. At ambient pressure, it has an orthorhombic structure with the Pnma space group and lattice constants a = 5.236, b = 3.181 and c = 5.896 Å <cit.>. Neutron diffraction experiments have provided a very precise determination of its magnetic structure and revealed a paramagnetic (PM) to ferromagnetic (FM) phase transition at about 292 K and a ferromagnetic to spiral phase transition at about 47 K <cit.>. In the ferromagnetic phase, all spins align in parallel to the b-axis, whereas in the spiral phase, the spins rotate within the ab plane with a propagation vector Q=(0, 0, 0.117), indicating a periodicity of about nine lattice units along the c-axis (hereafter called Spi-c), as shown in figure <ref>(c) <cit.>. Inelastic neutron scattering experiments on spin-wave excitations <cit.> suggest that the spiral order may originate from the competition between different types of exchange interactions, as proposed in earlier theoretical work <cit.>. Under pressure, the spiral phase is first suppressed and replaced by a ferromagnetic ground state in a narrow pressure window around 1.2 GPa. Above 1.5 GPa, a new antiferromagnetic-like order appears, which is then suppressed at very high pressure (7-8 GPa). Superconductivity emerges near the magnetic quantum critical point <cit.>.
The magnetic structure of the high-pressure phase has been extensively studied using neutron powder diffraction (NPD) <cit.>, magnetic X-ray diffraction (XRD) <cit.>, nuclear magnetic resonance (NMR) <cit.>, and muon-spin rotation (μSR) <cit.>, but the results remain controversial. While NPD and μSR indicate that the propagation vector changes from the c-axis at ambient pressure to the b-axis (Spi-b) at high pressure, XRD suggests that it remains along the c-axis but with a short periodicity. On the other hand, the NMR results imply a spiral structure at 2 GPa, but the exact magnetic structure cannot be resolved. The spin structure of the candidate high-pressure Spi-b phase is shown in figure <ref>(d) <cit.>. The complicated magnetic phase diagram is probably related to the change of the lattice parameters with pressure. As shown in figure <ref>(b), while both the a- and c-axes change only slightly, the b-axis lattice parameter was found to decrease dramatically with pressure <cit.>. However, the crystal symmetry remains the same, without a structural phase transition. Previous numerical calculations have yielded spiral phases at low and high pressures, but failed to reproduce the experimental Q-vector as well as the ferromagnetic phase at intermediate pressures <cit.>. Moreover, the calculated energy differences between both spiral states and the ferromagnetic state are very small. More elaborate studies are needed in order to establish a systematic understanding of the variation of the magnetic orders in the experimental phase diagram.
The electronic structures of MnP have also been investigated recently at ambient pressure <cit.>. Detailed analysis of the optical spectra in comparison with first-principles calculations suggests the existence of two different types of charge carriers with distinct lifetimes. The short-lifetime carriers originate from the d_y^2-orbital, exhibit quasi-one-dimensional character due to hybridization with the P p-orbitals, and are apt to order magnetically, whereas the long-lifetime carriers consist of other Mn orbitals and are mainly responsible for the charge transport. It has been speculated that the interplay between these two types of carriers may be crucial if superconductivity emerges from the magnetic instability. However, it is not clear if this two-fluid property holds true at high pressures, although it is expected to be a property of the crystal structure, which remains unchanged with pressure. A pressure-dependent investigation of the electronic structures is therefore called for.
In this work, we study the pressure evolution of the magnetic and electronic structures of MnP using first-principles density functional theory (DFT) with both the conventional collinear WIEN2k code <cit.> and the non-collinear WIENNCM code <cit.>. We derive the exchange interactions from collinear calculations and correctly predict the spiral-ferromagnetic-spiral phase transitions with pressure. The results are further compared with non-collinear calculations, which yield all three magnetic phases as a function of pressure. The Q-vector is found to decrease with pressure in the Spi-c phase until it becomes zero (ferromagnetic) and then to increase with pressure in the high-pressure Spi-b phase, in good agreement with the overall trend observed in neutron experiments <cit.>. We further show the coexistence of anisotropic three-dimensional (3D) Fermi surfaces and quasi-one-dimensional (1D) Fermi surfaces, supporting the existence of two types of charge carriers at all pressures. The quasi-1D Fermi surfaces only exist in the ferromagnetic state and become gapped in the spiral states. Their interplay with the more itinerant 3D charge carriers may be the key to understanding the electron pairing of the superconductivity, in resemblance to heavy-fermion superconductors <cit.>.
§ COMPUTATIONAL METHODS
The electronic structure calculations were carried out with the WIEN2k <cit.> and WIENNCM <cit.> packages using full-potential linearized augmented plane-wave and local-orbital methods. We took the experimental lattice parameters under pressure, as shown in figure <ref>(b), but with relaxed internal coordinates <cit.>. The Perdew-Burke-Ernzerhof generalized gradient approximation (GGA) was used for the exchange-correlation functional, with a 1500 k-point mesh for the whole Brillouin zone <cit.>. The muffin-tin radii are set to 2.17 a.u. for Mn and 1.93 a.u. for P according to the high-pressure structure. For non-collinear calculations using the WIENNCM code, the generalized Bloch wave function of the spiral spin structure takes the form:
ψ_k(𝐫) = e^i𝐤·𝐫( e^-i𝐐·𝐫/2 u_k^↑(𝐫), e^i𝐐·𝐫/2 u_k^↓(𝐫) )^T,
which takes into account the periodicity of both the crystal and spin structures. The computational time is therefore greatly increased compared to the collinear calculations.
§ THE COLLINEAR CALCULATIONS
A spiral spin structure typically arises from magnetic frustration <cit.>. In MnP-type compounds, it involves three major exchange interactions between neighboring Mn-ions, as shown in figure <ref>(a) <cit.>, where J_1 and J^'_1 are the inter-chain exchange interactions and J_2 is the intra-chain exchange interaction along the Mn zigzag chain. The fourth-nearest-neighbor exchange interaction J_3 is found to be two orders of magnitude smaller in our calculations, due to the relatively longer Mn-Mn distance, and is therefore neglected in the following discussion. The magnetic ground state is then determined by two dimensionless ratios, R=J_1/J_2 and R^'=J^'_1/J_2, which gives the theoretical phase diagram in figure <ref>(b) <cit.>. In this simple model, the ferromagnetic (or anti-ferromagnetic) phase and the spiral phase are separated by two hyperbolic curves given by 4RR^'+R+R^'=0 and -4RR^'+R+R^'=0.
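In the regime realized below (R>0 from the ferromagnetic J_1 and R^'<0 from the antiferromagnetic J^'_1), the first curve is the operative boundary, and the classification can be encoded in a few lines (our sketch; the sign convention follows the discussion of figure <ref>(c) below):

def ground_state(J1, J1p, J2):
    # assumes R > 0 and Rp < 0, as found for MnP below
    R, Rp = J1 / J2, J1p / J2
    return "FM" if 4.0 * R * Rp + R + Rp > 0.0 else "spiral"

print(ground_state(J1=1.0, J1p=-0.10, J2=2.0))   # FM-like: weak frustration
print(ground_state(J1=1.0, J1p=-1.00, J2=2.0))   # spiral: strong frustration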
To determine the nature of the magnetic ground state, we therefore need all three exchange interactions <cit.>. We consider four different magnetic structures, as shown in table <ref>, and use DFT to calculate their respective energies <cit.>. The calculations involve a 1×1×2 supercell in order to obtain the two inter-chain exchange interactions. Using the Heisenberg model H=-∑_i,jJ_ijS_i·S_j, we have
E(FF)= -4J_1S^2 - 4J^'_1S^2 - 4J_2S^2 + E_0
E(AF)= -4J_1S^2 - 4J^'_1S^2 + 4J_2S^2 + E_0
E(FA)= 4J_1S^2 + 4J^'_1S^2 - 4J_2S^2 + E_0
E(AC)= -4J_1S^2 + 4J^'_1S^2 + E_0
where E_0 is the reference energy and S is the calculated magnitude of the Mn spins for each configuration.
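Inverting these four linear relations, a step we make explicit here, gives the couplings directly in terms of the calculated total energies:

J_2 = [E(AF)-E(FF)]/(8S^2),
J^'_1 = [2E(AC)-E(AF)-E(FF)]/(16S^2),
J_1 = [E(FA)-E(FF)]/(8S^2) - J^'_1,

with the reference energy E_0 dropping out of all differences.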
The resulting values of J_1, J^'_1 and J_2 are listed in table <ref> for different pressures. We see that J_1 and J_2 are both ferromagnetic, whereas J^'_1 is antiferromagnetic. The mean-field transition temperature for the ferromagnetic phase is then estimated to be ∼830 K <cit.>, which is, as expected, 2-3 times higher than the experimental value of about 292 K. In the literature, the origin of the exchange interactions has been ascribed to either the double exchange, the superexchange or the RKKY interactions <cit.>. While we cannot provide a decisive answer to this issue, our DFT calculations do seem to suggest a correlation between the values of these exchange interactions and their corresponding Mn-P-Mn bond angles. We find that for the ferromagnetic J_1 and J_2 the angles lie between 73.4^∘-74.0^∘ and 69.6^∘-70.8^∘, respectively, both smaller than 90^∘, whereas for the antiferromagnetic J^'_1, the Mn-P-Mn bond angle lies between 111.6^∘-112.9^∘ and is larger than 90^∘. This seems to accord well with the Goodenough-Kanamori rule <cit.>, which states a competition between the ferromagnetic double-exchange mechanism and the antiferromagnetic superexchange interactions mediated by the P p-orbitals. The nonmonotonic pressure dependence of J_2 seems to be correlated with the nonmonotonic variation of its Mn-P-Mn bond angle with increasing pressure. In fact, the local minimum of J_2 occurs at around 2.0-2.5 GPa, where the corresponding Mn-P-Mn bond angle takes its local maximum (69.8^∘). On the other hand, J_1 increases as its Mn-P-Mn bond angle decreases with increasing pressure below 5 GPa.
The dimensionless ratios R and R^' are then calculated and compared with the theoretical phase diagram in figure <ref>(c). We find a spiral ground state below 1.0 GPa and above 2.0 GPa, and a ferromagnetic ground state between 1.0 and 2.0 GPa. Above 6 GPa, the magnetic moments of the Mn-ions in all states are found to be suppressed rapidly with pressure, suggesting a transition to a nonmagnetic state <cit.>. The obtained ranges correspond well with the experimental phase diagram shown in figure <ref>(a) <cit.> and seem to be correlated with the nonmonotonic variation of J^'_1 with pressure. This is a very delicate change. Both the low-pressure spiral state and the intermediate ferromagnetic state are close to the phase boundary. The ferromagnetic state becomes the ground state when the ratio |R^'|=-J^'_1/J_2 takes its minimum and crosses slightly over the phase boundary given by 4RR^'+R+R^'=0. This suggests that both states can be very sensitive to external perturbations <cit.>. As a matter of fact, μSR experiments have observed the coexistence of the ferromagnetic and spiral states at intermediate pressures <cit.>. On the other hand, the high-pressure spiral phase seems stable and locates away from the phase boundary in the R-R^' phase diagram, consistent with the large pressure range of the Spi-b phase <cit.>.
We conclude that the complicated magnetic phase diagram of MnP originates from a delicate competition of the magnetic exchange interactions between neighboring chains. However, we would also like to point out that the simple Heisenberg-type localized spin model is specially designed for the study of the spiral magnetic structure. While it may be valid for describing the basic features of the spiral state, some important factors are obviously missing, including non-Heisenberg-like terms such as the Dzyaloshinsky-Moriya (DM) interaction <cit.> and the itinerant part of the Mn d-electrons <cit.>. The interplay of these terms may become important at a more delicate level and cause some interesting physics such as a topological Hall effect <cit.>. One should therefore be cautious when using the simplified spin model to interpret complicated experimental data. A better model that contains both the localized and itinerant behavior of the Mn d-electrons will be needed in pursuit of a fully satisfactory solution of the MnP physics.
§ THE NON-COLLINEAR CALCULATIONS
The simplified model used in the collinear calculations may not apply in the high-pressure phase. To determine the detailed structure of the spiral phases with pressure, we performed non-collinear calculations using the WIENNCM code. At ambient pressure, we find that the spin rotation between Mn1 and Mn2 (or between Mn3 and Mn4) is about 21^∘ and that between Mn2 and Mn3 is about 2^∘, both in good agreement with experiment <cit.>. Our pressure-dependent results are plotted in figure <ref>, where we compare the energies as a function of q=|Q| for three spiral structures with different propagation vectors: Spi-a with Q=(q,0,0), Spi-b with Q=(0,q,0), and Spi-c with Q=(0,0,q). We find that the energy of the Spi-c state increases systematically with increasing pressure, whereas that of the Spi-b state decreases with increasing pressure, possibly owing to the rapidly decreasing lattice parameter along the b-axis <cit.>. Hence, below 1.2 GPa the Spi-c state has the lowest energy, whereas above 4 GPa the Spi-b state has the lowest energy. At intermediate pressures, q reduces to zero and we find a ferromagnetic ground state.
Figure <ref>(f) plots the pressure dependence of q for the ground states. We see that q decreases with increasing pressure in the Spi-c phase, remains zero in a finite pressure range, and then increases with pressure in the Spi-b phase. In neutron experiments, Q is found to be (0, 0, 0.117) at ambient pressure, (0, 0, 0) at 1.2 GPa, (0, 0.091, 0) at 1.8 GPa and (0, 0.141, 0) at 3.8 GPa <cit.>. While the overall trend agrees well with our theoretical prediction, there seems to be a systematic mismatch in the pressure range, which may be attributed to the numerical inaccuracies of the non-collinear calculations. We also note that the high-pressure Spi-b state has a much lower energy than the ferromagnetic state, in contrast to the small energy difference (of the order of only 1 meV) between the Spi-c state and the ferromagnetic state at ambient pressure.
As shown by our collinear calculations, the largest inter- and intra-chain couplings, J_1 and J_2, are both ferromagnetic and much larger in magnitude than the other exchange couplings. They connect all the Mn-spins through zigzag paths and tend to form a ferromagnetic network covering the whole lattice. Therefore, the spiral magnetic structure can only result from the frustration introduced by higher-order antiferromagnetic exchange couplings such as J_1^'. The evolution of the q-dependent energy profile at 5 and 6 GPa reflects the effect of this increasing frustration with pressure.
We briefly discuss the possible role of the spin-orbit coupling and the DM interaction in determining the magnetic structures of MnP. Recent neutron scattering experiments at ambient pressure have shown that the DM interaction may indeed play a role, causing a small tilt of the Mn spins from the ab-plane towards the c-axis <cit.> and yielding an unconventional Hall effect of possibly topological nature <cit.>. However, this is a very delicate issue and goes beyond our simple calculations of the basic spiral structures. We have examined the results by including the spin-orbit coupling in our DFT calculations and obtained a qualitatively similar pressure dependence of the Q-vector. The energy differences between E(q) and E(-q) for all spiral states are found to be only of the order of 0.5 meV. While this might be the expected magnitude for the DM interaction, it may also arise from numerical errors, as we do not find the simple q-dependence expected for the DM energy. Altogether, our results suggest that the spin-orbit coupling will not change the basic spiral structure of MnP. However, more accurate numerical calculations are needed in order to quantitatively understand its effect on the detailed magnetic structures suggested by the neutron scattering data.
§ THE ELECTRONIC STRUCTURES
The band structures and Fermi surfaces are calculated for both the spiral and ferromagnetic spin structures and compared in figures <ref> and <ref> for both 0 GPa and 6 GPa <cit.>. In the ferromagnetic case, the Fermi surfaces consist of both flat Fermi sheets and anisotropic 3D-cylindric Fermi surfaces. The flat Fermi sheets come from the Mn-d_y^2 orbital in hybridization with the P 3p-orbitals <cit.>. In the band structures, this corresponds to the two intersecting bands along the Y-S direction. The anisotropic 3D-cylindric Fermi surfaces come from other Mn 3d-orbitals. As discussed previously <cit.>, this special topology of the Fermi surfaces gives rise to two different types of charge carriers which coexist in the ferromagnetic state. In the spiral state, the quasi-1D Fermi surfaces are gapped, giving rise to several hole pockets scattered in the Brillouin zone, and the associated charge carriers contribute a major portion to the ordered moments. The gap opening in the spiral phase originates from the hybridization between the spin-up and spin-down channels caused by the magnetic scattering, or the folding of the Brillouin zone associated with the Q-vector. No qualitative change is seen at 6 GPa except that the gap lies slightly above the Fermi energy, which indicates that the quasi-1D charge carriers are only partially gapped in the Spi-b phase.
As proposed in previous theoretical studies <cit.>, the special band crossing along the Y-S direction turns out to be particularly susceptible to non-collinear instabilities. This suggests a close connection between the electronic band structures and the spiral magnetic ground state, indicating that the Mn-d_y^2 orbitals are largely responsible for the spiral instability. For small q=|Q|, the hybridization gap between the spin up and spin down channels is determined roughly by <cit.>
Δ=√((v_k·Q)^2+4V^2),
where v_k is the average velocity of the two spin channels at the wave vector k in the Brillouin zone and V is the hybridization strength given by the off-diagonal elements of the Hamiltonian between the two components of the generalized spinor state in equation (<ref>). The exact form of V may depend on k and Q. For the quasi-1D band along the Y-S line, we have v_k·Q≈ 0 in the Spi-c phase, hence the hybridization gap is roughly given by 2V. The hybridization strength can therefore be estimated to be V ∼ 0.14eV, based on the gap opening at the crossing point as shown in figure <ref>(b). This is the same order of magnitude as the exchange interactions. On the other hand, for the Spi-b phase at 6 GPa, while the gap opening is small at the crossing point slightly above the Fermi energy, there is an overall band shift along the Z-T-Y line if we compare the band structures for the ferromagnetic state in figure <ref>(c) and the Spi-b state in figure <ref>(d). This should also be understood to arise from the hybridization effect. The overall magnitude of the band shift is about 0.2 eV, similar to that estimated for V at 0 GPa.
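As a quick numerical illustration, the sketch below evaluates this gap formula. The gap of about 0.28 eV at the crossing point is simply twice the V estimated above, and the nonzero values of v_k·Q are hypothetical, meant only to show how the kinematic term widens the gap away from the v_k·Q≈0 condition.

```python
import numpy as np

# Back-of-the-envelope check of the hybridization gap
# Delta = sqrt((v_k.Q)^2 + 4 V^2) quoted above; all numbers illustrative.
def gap(vkq, v):
    return np.sqrt(vkq ** 2 + 4.0 * v ** 2)

delta_obs = 0.28                 # eV, gap opening at the band-crossing point
v_est = delta_obs / 2.0          # since v_k.Q ~ 0 on the Y-S line (Spi-c)
print(f"estimated hybridization strength V = {v_est:.2f} eV")

for vkq in (0.0, 0.1, 0.3):      # eV, hypothetical kinematic terms
    print(f"v_k.Q = {vkq:.1f} eV -> Delta = {gap(vkq, v_est):.3f} eV")
```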
To see the change of carriers at higher pressures, we also calculate the band structures and Fermi surfaces for the paramagnetic state at 7 GPa, and the results are plotted in figure <ref>. It is seen that the quasi-1D bands along the Y-S direction are both shifted away from the Fermi energy, so that the quasi-1D carriers will not contribute to the charge transport, as is the case in the spiral states. Neither will they contribute directly to the superconducting condensation. We speculate that the localized d_y^2-electrons will produce magnetic quantum critical fluctuations responsible for the superconducting pairing of the other, more itinerant 3D-like carriers, since they provide a major contribution to the magnetic orders at low pressures. This might resemble the paramagnetic state in the localized regime of the two-fluid model in heavy fermion systems.
We would like to further comment on the origin of the quasi-1D character of the Mn d_y^2-orbital. Our magnetic calculations show that the Mn-ions have a magnetic moment of 1.4 μ_B with a d^4↑2↓ configuration (four spin-up and two spin-down d-electrons), indicating a Mn^1+P^1- valence state rather than Mn^3+P^3- <cit.>. The former is in agreement with recent X-ray photoelectron spectroscopy measurements <cit.>. In such a case, the P-ions should be treated as zigzag P-P clustering chains containing [P_2]^2- units with strong P-P interaction, instead of isolated P^3- anions. We find that it is the P-P anti-bonding states that hybridize with the Mn-Mn bonding states along the b-axis and help to establish the quasi-1D dispersion of the Mn 3d_y^2-bands around the Fermi energy.
§ CONCLUSION
We have investigated the magnetic ground states of MnP using both collinear and non-collinear DFT calculations. We obtain both the spiral phases and the ferromagnetic phase observed in the neutron experiments and correctly predict the pressure dependence of the propagation vector. Our results indicate that the complicated magnetic phase diagram of MnP may be explained by the delicate competition among neighboring exchange interactions. The resulting electronic structures show characteristic quasi-1D Fermi sheets that become partially gapped in the spiral phase. This is a stable feature of the MnP-type structure and remains robust at high pressures. It suggests a two-fluid scenario which may provide an electronic basis for understanding the pairing mechanism of superconductivity at the border of non-collinear magnetism.
Y.X. thanks R. Laskowski for providing the non-collinear WIENNCM code. This work was supported by the National Natural Science Foundation of China (Nos. 11522435, 11574377), the State Key Development Program for Basic Research of China (2015CB921303, 2014CB921500), and the Strategic Priority Research Program (B) of the Chinese Academy of Sciences (XDB07020200, XDB07020100). Y.Y. was supported by the Youth Innovation Promotion Association CAS. X.C. was supported by the Science Challenge Project (Grant No. JCKY2016212A501) and the NSAF (Grant No. U1430117).
§ REFERENCES
Cheng2015
Cheng J G, Matsubayashi K, Wu W, Sun J P, Lin F K, Luo J L and Uwatoko Y 2015 Phys. Rev. Lett. 114 117001
Rice2015
Norman M R 2015 Physics 8 24
Scalapino2012
Scalapino D J 2012 Rev. Mod. Phys. 84 1383
Wu2014
Wu W, Cheng J G, Matsubayashi K, Kong P P, Lin F K, Jin C Q, Wang N L, Uwatoko Y and Luo J L 2014 Nature Commun. 5 5508
Kotegawa2014
Kotegawa H, Nakahara S, Tou H and Sugawara H 2014 J. Phys. Soc. Jpn. 83 093702
Chong2016
Chong X Y, Jiang Y H, Zhou R and Feng J 2016 Sci. Rep. 6 21821
Matsuda2016
Matsuda M et al 2016 Phys. Rev. B 93 100405(R)
Wang2016
Wang Y S, Feng Y J, Cheng J G, Wu W, Luo J L and Rosenbaum T F 2016 Nature Commun. 7 13037
Huber1964
Huber E E, JR. and Ridgley D H 1964 Phys. Rev. A 135 1033
Forsyth1966
Forsyth J B, Pickart S J and Brown P J 1966 Proc. Phys. Soc. 88 333
Komatsubara1970
Komatsubara T, Suzuki T and Hirahara E 1970 J. Phys. Soc. Jpn. 28 317
Yamazaki2014
Yamazaki T, Tabata Y, Waki T, Sato T J, Matsuura M, Ohoyama K, Yokoyama M and Nakamura H 2014 J. Phys. Soc. Jpn. 83 054711
Felcher1966
Felcher G P 1966 J. Appl. Phys. 37 1056
Yano2013
Yano S, Itoh S, Yokoo T, Satoh S, Kawana D, Kousaka Y, Akimitsu J and Endoh Y 2013 J. Magn. Magn. Mater. 347 33
Itoh2014
Itoh S, Yano S, Yokoo T, Satoh S, Kawana D, Kousaka Y, Akimitsu J and Yasuo E 2014 J. Phys.: Conf. Ser. 502 012044
Takeuchi1967
Takeuchi S and Motizuki K 1967 J. Phys. Soc. Jpn. 24 742
Fan2016
Fan G Z, Zhao B, Wu W, Zheng P and Luo J L 2016 Sci. China Phys. Mech. Astron. 59 657403
Khasanov2016
Khasanov R, Amato A, Bonfa P, Guguchia Z, Luetkens H, Morenzoni E, Renzi R De and Zhigadlo N D 2016 Phys. Rev. B 93 180509(R)
Selte1976
Selte K, Kjekshus A, Valde G and Andresen A F 1976 Acta Chem. Scand A30 468
Bonfa2016
Bonfa P, Onuorah I J and Renzi R De 2016 arXiv:1603.08891
Zheng2016
Zheng P, Xu Y J, Wu W, Xu G, Lv J L, Lin F K, Wang P, Yang Y F and Luo J L 2016 arXiv:1607.02853
Blaha
Blaha P, Schwarz K, Madsen G K H, Kvasnicka D and Luitz J 2001 Wien2k, An Augmented Plane Wave plus Local orbital Program for Calculating the Crystal Properties (Austria: Technical University of Wien, ISBN3-9501031-1-2)
Laskowski2004
Laskowski R, Madsen G K H, Blaha P and Schwarz K 2004 Phys. Rev. B 69 140408
Yang2014
Yang Y-F and Pines D 2014 Proc. Natl. Acad. Sci. USA 111 18178
Perdew1996
Perdew J P, Burke K and Ernzerhof M 1996 Phys. Rev. Lett. 77 3865
Kubler2000
Kubler J 2000 Theory of Itinerant Magnetism (Oxford: Oxford University Press)
Elliot1963
Elliott R and Wedgwood F A 1963 Proc. Phys. Soc. 81 846
Moriya1960
Moriya T 1960 Phys. Rev. 120 91
Dzayloshinskii1958
Dzyaloshinsky I 1958 J. Phys. Chem. Solids 4 241
Kallel1974
Kallel A, Boller H and Bertaut E F 1974 J. Phys. Chem. Solids 35 1139
Bertaut1962
Bertaut E F 1962 J. Appl. Phys. 33 1138
Gercsi2010
Gercsi Z and Sandeman K G 2010 Phys. Rev. B 81 224426
Anderson1963
Anderson P W 1963 Theory of Magnetic Exchange Interactions: Exchange in Insulators and Semiconductors (New York: Academic Press)
Gribanov1983
Gribanov I F and Zavadskii E A 1983 J. Magn. Magn. Mater. 37 51
Dobrzynski1989
Dobrzynski L and Andresen A F 1989 J. Magn. Magn. Mater. 82 67
Takase1979
Takase A and Kasuya T 1979 J. Phys. Soc. Jpn. 47 491
Goodenough1958
Goodenough J B 1958 J. Phys. Chem. Solids 6 287
Kanamori1959
Kanamori J 1959 J. Phys. Chem. Solids 10 87
Shiomi2012
Shiomi Y, Iguchi S and Tokura Y 2012 Phys. Rev. B 86 180404(R)
Yanase1980
Yanase A and Hasegawa A 1980 J. Phys. C 13 1989
Grosvenor2005
Grosvenor A P, Wik S D, Cavell R G and Mar A 2005 Inorg. Chem. 44 8988
Lizarraga2004
Lizarraga R, Nordstrom L, Bergqvist L, Bergman A, Sjostedt E, Mohn P and Eriksson O 2004 Phys. Rev. Lett. 93 107205
Tremel1986
Tremel W, Hoffmann R and Silvestre J 1986 J. Am. Chem. Soc. 108 5174
|
http://arxiv.org/abs/1701.07964v8 | 20170127080002 | On the Performance of Practical Ultra-Dense Networks: The Major and Minor Factors | [
"Ming Ding",
"David Lopez-Perez"
] | cs.NI | [
"cs.NI",
"cs.IT",
"math.IT"
] |
On the Performance of Practical Ultra-Dense Networks: The Major and Minor Factors
Ming Ding^^Ming Ding is with Data61, CSIRO, Australia (e-mail:
Ming.Ding@data61.csiro.au). , Member, IEEE,
David López-Pérez^†^†David López-Pérez
is with Nokia Bell Labs, Ireland (email: david.lopez-perez@nokia.com). , Member, IEEE
December 30, 2023
============================================================================================================================================================================================================================================
In this paper,
we conduct a performance evaluation of Ultra-Dense Networks (UDNs),
and identify which modelling factors play major roles and which play minor ones.
From our study, we draw the following conclusions.
First, there are 3 factors/models that have a major impact on the performance of UDNs,
and they should be considered when performing theoretical analyses:
i) a multi-piece path loss model with line-of-sight (LoS) and non-line-of-sight (NLoS) transmissions;
ii) a non-zero antenna height difference between base stations (BSs) and user equipments (UEs);
iii) a finite BS/UE density.
Second, there are 4 factors/models that have a minor impact on the performance of UDNs,
i.e., changing the results quantitatively but not qualitatively,
and thus their incorporation into theoretical analyses is less urgent:
i) a general multi-path fading model based on Rician fading;
ii) a correlated shadow fading model;
iii) a BS density dependent transmission power;
iv) a deterministic BS/user density.
Finally, there are 5 factors/models for future study:
i) a BS vertical antenna pattern;
ii) multi-antenna and/or multi-BS joint transmissions;
iii) a proportional fair BS scheduler;
iv) a non-uniform distribution of BSs;
v) a dynamic time division duplex (TDD) or full duplex (FD) network.
Our conclusions can guide researchers to down-select the assumptions in their theoretical analyses,
so as to avoid unnecessarily complicated results,
while still capturing the fundamentals of UDNs in a meaningful way.
[1536-1276 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. Please find the final version in IEEE from the link: http://ieeexplore.ieee.org/document/7959926/. Digital Object Identifier: 10.23919/WIOPT.2017.7959926]
key words: stochastic geometry, homogeneous Poisson point process
(HPPP), antenna height, dense small cell networks (SCNs), coverage
probability, area spectral efficiency (ASE).
§ INTRODUCTION
Recent market forecasts predict that the mobile data traffic volume will keep growing towards 2030 and beyond, surpassing the so-called 1000× wireless capacity demand <cit.>.
This increase is expected to be fuelled by the growth of mobile broadband services,
where high-quality videos, e.g., ultra-high definition and 4K resolution videos, are becoming an integral part of today's media contents.
Moreover, new emerging services such as machine type communications (MTC) and the internet of things (IoT) will further contribute to this massive increase in data traffic.
This poses an ultimate challenge to the wireless industry,
which must carry exponentially increasing traffic in a profitable and energy-efficient manner.
To make things even more complex,
the current economic situation around the globe aggravates the pressure for mobile operators and vendors to stay competitive,
rendering the decision on how to increase network capacity in a cost-effective manner even more critical.
§.§ The Role of Ultra-Dense Networks in 5G
Previous practice in the wireless industry shows that the wireless network capacity has increased around one million fold from 1950 to 2000,
in which an astounding 2700× gain was achieved through network densification using smaller cells <cit.>.
In the first decade of 2000,
network densification continued to serve the 3rd Generation Partnership Project (3GPP) 4th-generation (4G) Long Term Evolution (LTE) networks,
and is expected to remain as one of the main forces to drive the 5th-generation (5G) networks onward <cit.>.
Indeed, in the first deployment phase of 5G,
the orthogonal deployment
of ultra-dense (UD) small cell networks (SCNs), or simply ultra-dense networks (UDNs), within the existing macrocell network at sub-6 GHz frequencies,
is envisaged as one of the workhorses for capacity enhancement in 5G,
due to its large spatial reuse of spectrum and its easy management.
The latter one arises from its low interaction with the macrocell tier,
e.g., no inter-tier interference <cit.>.
Here, the orthogonal deployment means that small cells and macrocells are operating on different frequency spectrum,
i.e., 3GPP Small Cell Scenario #2a <cit.>.
In contrast,
another way of deploying small cells and macrocells is the co-channel deployment,
where they are operating on the same frequency spectrum,
i.e., 3GPP Small Cell Scenario #1 <cit.>.
§.§ The Performance Analysis of Ultra-Dense Networks
The performance analysis of UDNs, however, is challenging because UDNs are fundamentally different from the current 4G sparse/dense networks,
and thus it is difficult to identify the essential factors that have a key impact on UDN performance.
To elaborate on this,
Table <ref> provides a list of key factors/models/parameters related to the performance analysis of SCNs,
along with their assumptions adopted in the 3GPP.
This list is far from exhaustive,
but it includes those assumptions that are essential in any SCN performance evaluation campaign in the 3GPP <cit.>.
For clarity,
the assumptions in Table <ref> are classified into two categories,
i.e., network scenario (NS) and wireless system (WS).
More specifically,
* The assumptions on the NS
characterise the deployments of base stations (BSs) and user equipments (UEs).
* The assumptions on the WS
characterise the channel models and the transmit/receive capabilities.
Considering the 12 factors listed in Table <ref>,
a straightforward methodology to understand the fundamental differences between UDNs and sparse/dense networks would be to investigate the performance impact of those factors and their combinations one by one,
thus drawing useful conclusions on which factors should define the fundamental behaviours of UDNs.
Since the 3GPP assumptions on those factors were agreed upon by major companies in the wireless industry all over the world,
the more of these assumptions an analysis can consider,
the more practical the analysis will be.
The theoretical community has already started to explore this approach,
and some of those assumptions in Table <ref> have already been considered in various works to derive the performance of UDNs,
e.g., through stochastic geometry (SG) analyses.
In more detail, in SG analyses,
BS positions are typically modelled as a Homogeneous Poisson Point Process (HPPP) on the plane,
and closed-form expressions of coverage probability can be found for some scenarios in single-tier cellular networks <cit.> and multi-tier cellular networks <cit.>.
Using a simple modelling,
the major conclusion in <cit.> is that neither the number of cells nor the number of cell tiers changes the coverage probability in interference-limited fully-loaded wireless networks.
Recently, a few noteworthy studies have been carried out to revisit the network performance analysis of UDNs
under more practical assumptions <cit.>.
These new studies include the following assumptions in Table <ref>:
i) a multi-piece path loss model with line-of-sight (LoS) and non-line-of-sight (NLoS) transmissions (WS 1),
ii) a non-zero antenna height difference between BSs and UEs (WS 2), and
iii) a finite BS/UE density (NS 1).
The inclusion of these assumptions significantly changed the previous conclusion,
indicating that the coverage probability performance of UDNs is neither a convex nor a concave function with respect to the BS density.
The position of our work is marked in Table <ref> and our main conclusions are summarised as follows,
* The performance impact of the 3GPP assumption of WS 1,
i.e., a multi-piece path loss model with line-of-sight (LoS)/non-line-of-sight (NLoS) transmissions:
UDNs imply high probabilities of LoS transmissions between BSs and UEs,
which leads to a performance degradation caused by a faster growth of the interference power compared with the signal power in UDNs <cit.>.
This is due to the transition of a large number of interference paths from NLoS (usually with a large path loss exponent) to LoS (usually with a small path loss exponent).
* The performance impact of the 3GPP assumption of WS 2,
i.e., a non-zero antenna height difference between BSs and UEs:
UDNs make the antenna height difference between BSs and UEs non-negligible,
which gives rise to another performance degradation due to a cap on the signal power,
resulting from the bounded minimum distance between a UE and its serving BS <cit.>.
* The performance impact of the 3GPP assumption of NS 1,
i.e., a finite BS/UE density:
UDNs provide a surplus of BSs with respect to UEs, which promises a performance improvement,
thanks to the BS diversity gain and the BS idle mode operation to mitigate unnecessary inter-cell interference <cit.>.
In more detail,
since the UE density is finite in practical networks,
a large number of BSs in UDNs could switch off their transmission modules and thus enter idle modes, if there is no active UE within their coverage areas.
Setting those BSs to idle modes can mitigate unnecessary inter-cell interference and reduce energy consumption.
This idle mode feature at BSs is referred to as the idle mode capability (IMC) in <cit.>.
§.§ Our Contributions and Paper Structure
In light of such drastic change of conclusions,
one may ask the following intriguing question:
What if we further consider more assumptions in SG analyses?
Will it qualitatively change the recently obtained conclusions in <cit.>?
In this paper, we will address this fundamental question
by investigating the performance impact of the 3GPP assumptions of WS 3, WS 4, WS 5 and NS 2 in Table <ref>.
It is very important to note that we have already excluded two popular NS assumptions from Table <ref>,
i.e., millimetre wave communications and heterogeneous network scenarios.
Our reasons are as follows,
* UDNs do not necessarily have to work with the millimetre wave spectrum,
as such topic stands on its own and requires a completely new analysis <cit.>.
Hence,
in this paper,
we focus on the spatial reuse gain of UDNs at sub-6GHz frequencies,
and we do not consider the new features of millimetre wave communications,
such as short-range coverage,
very low inter-cell interference,
the blockage effect,
a very high Doppler shift <cit.>.
* As briefly discussed in Subsection <ref>,
there are two ways of deploying UDNs within the existing macrocell networks: the orthogonal and the co-channel deployments.
* In the former case,
UDNs and the macrocell networks can be studied separately,
making the network scenario for UDNs a homogeneous one.
* In the latter case,
the strong inter-tier interference <cit.> from the macrocell networks to UDNs should be mitigated by some time domain inter-cell interference coordination techniques,
e.g., the almost blank subframe (ABS) mechanism <cit.>.
Different from a normal subframe,
in an ABS,
no control or data signals but only reference signals are transmitted in the macrocell tier,
thus significantly reducing the inter-tier interference since reference signals only occupy a very limited portion of a time-frequency resource block.
However,
when the small cell network goes ultra-dense,
the macrocell network needs to convert all subframes to ABSs,
so as to decrease its interference to the large number of small cells in its vicinity.
In other words,
the macrocell network would basically mute itself to clear the way for UDNs,
and thus again making the network scenario for UDNs a homogeneous one.
Therefore,
we come to the conclusion that there is no strong motivation to study UDNs in a heterogeneous network scenario,
which is thus absent in Table <ref>.
More specifically, the 4 additional 3GPP assumptions under investigation are the following:
* The performance impact of the 3GPP assumption of WS 3, i.e.,
a general multi-path fading model based on Rician fading:
In the SG analysis,
the multi-path fading is usually modelled as Rayleigh fading for simplicity.
However, in the 3GPP,
a more practical model based on generalised Rician fading is widely adopted <cit.>.
* The performance impact of the 3GPP assumption of WS 4, i.e.,
a correlated shadow fading model:
In the SG analysis,
the shadow fading is usually not considered or simply modelled as independent and identically distributed (i.i.d.) RVs.
However, in the 3GPP,
a more practical correlated shadow fading is often used <cit.>.
* The performance impact of the 3GPP assumption of WS 5, i.e.,
a BS density dependent BS transmission power:
In the SG analysis,
the BS transmission power is usually assumed to be a constant.
However, in the 3GPP,
it is generally agreed that the BS transmission power should decrease as the SCN densifies because the per-cell coverage area shrinks <cit.>.
* The performance impact of the 3GPP assumption of NS 2, i.e.,
a deterministic BS/UE density:
In the SG analysis,
the BS/UE number is usually modelled as a Poisson distributed random variable (RV).
However, in the 3GPP,
a deterministic BS/UE number is commonly used for a given BS/UE density <cit.>.
Due to the page limitation and the urgent need to speed up the research progress,
so as to quickly identify the major and minor factors in the performance analysis of UDNs,
in this paper we only conduct simulations to show the performance impact of the 3GPP assumptions of the above 4 factors.
Our analytical results will be relegated to the journal version of this work.
Finally, the remaining 3 factors in Table <ref> will be left as our future work.
They are:
* The performance impact of the 3GPP assumption of WS 6, i.e.,
a BS vertical antenna pattern:
In the SG analysis,
the vertical antenna pattern at each BS is usually ignored for simplicity.
However, in the 3GPP,
each BS antenna has a three-dimensional (3D) beam pattern,
and such beam will be electrically tilted downward to improve the signal power as well as reduce the inter-cell interference <cit.>.
* The performance impact of the 3GPP assumption of WS 7, i.e.,
multi-antenna and/or multi-BS joint transmissions:
In the SG analysis,
each BS/UE is usually equipped with one omni-directional antenna for simplicity.
However, in the 3GPP,
it is more than often to conduct multi-antenna transmissions <cit.>,
even with an enhancement of multi-BS cooperation <cit.>.
* The performance impact of the 3GPP assumption of NS 3, i.e.,
a uniform distribution of BSs with some constraints on the minimum BS-to-BS distance:
In the SG analysis,
BSs are usually assumed to be uniformly deployed in the interested network area.
However, in the 3GPP,
it is forbidden to place any two BSs too close to each other <cit.>.
Such assumption is in line with the realistic network planning to avoid strong inter-cell interference.
Although we will not provide results to investigate the above 3 factors in the context of UDNs,
we will discuss the way forward for their future study at the end of this paper.
The rest of this paper is structured as follows:
* Section <ref> describes the network scenario and the wireless system model used in our SG analysis,
mostly recommended by the 3GPP.
* Section <ref> presents our previous theoretical results in terms of the coverage probability,
and discusses our main results on the performance impact of the 3GPP assumptions of WS 1, WS 2 and NS 1 in Table <ref>.
* Section <ref> describes in more detail the newly considered assumptions addressed in this paper,
i.e., the 3GPP assumptions of WS 3, WS 4, WS 5 and NS 2.
* Section <ref> discloses our simulation results on the performance impact of these newly considered assumptions.
* Section <ref> provides discussion on the performance impact of the remaining 3GPP assumptions of WS 6, WS 7, WS 8, NS 3 and NS 4,
which are left as our future work.
* The conclusions are drawn in Section <ref>.
§ NETWORK SCENARIO AND WIRELESS SYSTEM MODEL
In this section, we present the network scenario and the wireless system model considered in this paper.
Note that most of our assumptions on cellular networks are in line with the recommendations by the 3GPP <cit.>.
§.§ Network Scenario
We consider a cellular network with BSs deployed on a plane according to a homogeneous Poisson point process (HPPP) Φ with a density of λ BSs/km^2. Active UEs are also Poisson distributed in the considered network with a density of ρ UEs/km^2.
We only consider active UEs in the network because non-active UEs do not trigger data transmission,
and thus they are ignored in our analysis.
Note that the total UE number in cellular networks should be much higher than the number of the active UEs,
but at a certain time slot and on a certain frequency band,
the active UEs with data traffic demands may not be too many.
A typical density of the active UEs in 5G should be around 300UEs/km^2 <cit.>.
In practice, a BS should mute its transmission if there is no UE connected to it,
which reduces unnecessary inter-cell interference and energy consumption <cit.>.
Since UEs are randomly and uniformly distributed in the network,
it can be assumed that the active BSs also follow an HPPP distribution Φ̃ <cit.>,
the density of which is denoted by λ̃ BSs/km^2.
Note that 0≤λ̃≤λ,
and a larger ρ leads to a larger λ̃.
Details on the computation of λ̃ can be found in <cit.>.
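As an aside, a widely used closed-form approximation from the cited literature expresses the active-BS density as λ̃=λ[1-(1+ρ/(qλ))^-q], where q≈3.5 is a fitting parameter for the size distribution of Poisson-Voronoi cells. The following minimal sketch, which assumes this approximation, illustrates how λ̃ saturates at the UE density ρ as the network densifies.

```python
import numpy as np

# Common closed-form approximation for the active-BS density lambda_tilde
# (an assumption borrowed from the literature cited above, with the usual
# Poisson-Voronoi fitting parameter q = 3.5).
def active_bs_density(lam, rho, q=3.5):
    return lam * (1.0 - (1.0 + rho / (q * lam)) ** (-q))

rho = 300.0                                        # UEs/km^2, typical in 5G
for lam in (10.0, 100.0, 300.0, 1000.0, 10000.0):  # BSs/km^2
    print(f"lambda = {lam:7.0f} BSs/km^2 -> "
          f"lambda_tilde = {active_bs_density(lam, rho):6.1f} BSs/km^2")
```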
§.§ Wireless System Model
We denote by r the two-dimensional (2D) distance between a BS and a UE,
and by L the absolute antenna height difference between a BS and a UE.
Hence, the three-dimensional (3D) distance between a BS and a UE can be expressed as
w=√(r^2+L^2),
where L is in the order of several meters for the current 4G networks.
For example, according to the 3GPP assumptions for small cell networks,
L equals 8.5 m,
as the BS antenna height and the UE antenna height are assumed to be 10 m and 1.5 m, respectively <cit.>.
Following the 3GPP recommendations <cit.>,
we consider practical line-of-sight (LoS) and non-line-of-sight (NLoS) transmissions,
and treat them as probabilistic events.
Specifically, we adopt a very general path loss model,
in which the path loss ζ(w) is segmented into N pieces, i.e.,
ζ(w)=ζ_1(w), when 0≤ w≤ d_1
ζ_2(w), when d_1<w≤ d_2
⋮ ⋮
ζ_N(w), when w>d_N-1,
where each piece ζ_n(w),n∈{ 1,2,…,N}
is modelled as
ζ_n(w)=ζ_n^L(w)=A_n^Lw^-α_n^L with LoS probability Pr_n^L(w), and ζ_n(w)=ζ_n^NL(w)=A_n^NLw^-α_n^NL with NLoS probability 1-Pr_n^L(w),
where
* ζ_n^L(w) and ζ_n^NL(w),n∈{ 1,2,…,N}
are the n-th piece path loss functions for the LoS transmission and the NLoS transmission, respectively,
* A_n^L and A_n^NL
are the path losses at a reference distance r=1 for the LoS and the NLoS cases, respectively,
* α_n^L and α_n^NL
are the path loss exponents for the LoS and the NLoS cases, respectively.
* In practice, A_n^L, A_n^NL, α_n^L and α_n^NL
are constants obtainable from field tests <cit.>.
* Pr_n^L(w) is the n-th piece LoS probability function
that a transmitter and a receiver separated by a distance w has a LoS path,
which is assumed to be a monotonically decreasing function with regard to w.
Such assumption has been confirmed by <cit.>.
For convenience, {ζ_n^L(w)} and {ζ_n^NL(w)} are further stacked into piece-wise functions written as
ζ^Path(w)=ζ_1^Path(w), when 0≤ w≤ d_1
ζ_2^Path(w), when d_1<w≤ d_2
⋮ ⋮
ζ_N^Path(w), when w>d_N-1,
where the string variable Path takes the value of “L” and “NL” for the LoS and the NLoS cases, respectively.
Besides, {Pr_n^L(w)} is also stacked into a piece-wise function as
Pr^L(w)=Pr_1^L(w), when 0≤ w≤ d_1
Pr_2^L(w), when d_1<w≤ d_2
⋮ ⋮
Pr_N^L(w), when w>d_N-1.
As a special case,
in the following subsections,
we consider a two-piece path loss function and a LoS probability function defined by the 3GPP <cit.>.
Specifically,
we use the following path loss function,
ζ(w)=A^Lw^-α^L with LoS probability Pr^L(w), and ζ(w)=A^NLw^-α^NL with NLoS probability 1-Pr^L(w),
together with the following LoS probability function,
Pr^L(w)=1-5exp(-R_1/w) for 0<w≤ d_1, and Pr^L(w)=5exp(-w/R_2) for w>d_1,
where R_1=156 m, R_2=30 m, and d_1=R_1/ln10 <cit.>.
The combination of the path loss function in (<ref>) and the LoS probability function in (<ref>)
can be deemed as a special case of the proposed path loss model in (<ref>) with the following substitutions:
N=2,
ζ_1^L(w)=ζ_2^L(w)=A^Lw^-α^L,
ζ_1^NL(w)=ζ_2^NL(w)=A^NLw^-α^NL,
Pr_1^L(w)=1-5exp(-R_1/w),
and Pr_2^L(w)=5exp(-w/R_2).
For clarity, this model is referred to as the 3GPP Path Loss Model hereafter.
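For concreteness, the following minimal Python sketch implements the 3GPP Path Loss Model above, i.e., the path loss function ζ(w) together with the LoS probability function Pr^L(w). Distances are taken in km so as to match the attenuation constants A^L=10^-10.38 and A^NL=10^-14.54 used later in this paper; this unit convention is our assumption, and a different choice would only rescale the reference path losses.

```python
import numpy as np

# Sketch of the 3GPP Path Loss Model above; distances in km (our assumed
# unit convention), so R_1 = 156 m = 0.156 km and R_2 = 30 m = 0.030 km.
R1, R2 = 0.156, 0.030
D1 = R1 / np.log(10.0)                    # d_1 = R_1/ln(10)
A_L, ALPHA_L = 10.0 ** -10.38, 2.09
A_NL, ALPHA_NL = 10.0 ** -14.54, 3.75

def p_los(w):
    """Piecewise LoS probability Pr^L(w)."""
    return 1.0 - 5.0 * np.exp(-R1 / w) if w <= D1 else 5.0 * np.exp(-w / R2)

def sample_path_loss(w, rng):
    """Draw a LoS/NLoS state and return the linear path loss zeta(w)."""
    if rng.random() < p_los(w):
        return A_L * w ** -ALPHA_L        # LoS branch
    return A_NL * w ** -ALPHA_NL          # NLoS branch

rng = np.random.default_rng(0)
for w in (0.02, 0.05, 0.15, 0.50):        # km
    print(f"w = {1e3 * w:4.0f} m: Pr_LoS = {p_los(w):.3f}, "
          f"sampled zeta = {sample_path_loss(w, rng):.3e}")
```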
Moreover,
in this paper, we also assume a practical user association strategy (UAS),
in which each UE is connected to the BS with the smallest path loss
(i.e., with the largest ζ(r)) to the UE <cit.>.
We also assume that each BS/UE is equipped with an isotropic antenna,
and that the multi-path fading between a BS and a UE is modelled as independent and identically distributed (i.i.d.) Rayleigh fading <cit.>.
§ DISCUSSION ON THE STATE-OF-THE-ART RESULTS ON THE PERFORMANCE ANALYSIS OF UDNS
Using the theory of stochastic geometry (SG) and the presented assumptions in previous subsections,
we investigated the coverage probability performance of SCNs
by considering the performance of a typical UE located at the origin o.
In such studies <cit.>,
we analysed in detail the performance impact of the 3GPP assumptions of WS 1, WS 2 and NS 1 in Table <ref>.
The concept of coverage probability and a summary of our results are presented in the following.
§.§ The Coverage Probability
The coverage probability is defined as the probability that the signal-to-interference-plus-noise ratio (SINR) of the typical UE is above a designated threshold γ:
p^cov(λ,γ)=Pr[SINR>γ],
where the SINR is computed by
SINR=Pζ(r)h/(I_agg+P_N),
where h is the channel gain,
which is modelled as an exponentially distributed random variable (RV) with a mean of one
(due to our consideration on Rayleigh fading presented before),
P is the transmission power at each BS,
P_N is the additive white Gaussian noise (AWGN) power at the typical UE,
and I_agg is the cumulative interference given by
I_agg=∑_i: b_i∈Φ̃∖ b_oPβ_ig_i,
where b_o is the BS serving the typical UE,
b_i is the i-th interfering BS,
β_i is the path loss from b_i to the typical UE,
and g_i is the multi-path fading channel gain associated with b_i.
Note that when all BSs are assumed to be active,
the set of all BSs Φ should be used in the expression of I_agg <cit.>.
However,
in our system model with idle mode at the small cell BSs and a finite UE density (see Subsection <ref>),
only the active BSs in Φ̃∖ b_o inject effective interference into the network,
where Φ̃ denotes the set of the active BSs.
Hence,
the BSs in idle mode are not taken into account in the analysis of I_agg shown in (<ref>),
due to their muted transmissions.
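To make this computation concrete, the following Monte-Carlo sketch estimates p^cov(λ,γ) from the SINR expression above for the fully loaded case, i.e., with all BSs active and the IMC ignored for brevity. The system parameters follow the paper (distances in km, by our unit assumption), while the SINR threshold, disc radius, and trial count are simulation choices.

```python
import numpy as np

# Monte-Carlo sketch of the coverage probability: HPPP BSs in a disc around
# the typical UE at the origin, smallest-path-loss association, Rayleigh
# fading, and all BSs active (fully loaded case, no IMC).
RNG = np.random.default_rng(1)
A_L, AL_EXP, A_NL, ANL_EXP = 10**-10.38, 2.09, 10**-14.54, 3.75
R1, R2 = 0.156, 0.030
D1 = R1 / np.log(10.0)
P_TX = 10.0 ** (24.0 / 10.0)      # mW, 24 dBm BS transmission power
P_NOISE = 10.0 ** (-95.0 / 10.0)  # mW, -95 dBm AWGN power
L_KM = 8.5e-3                     # km, BS-UE antenna height difference

def p_los(w):
    return np.where(w <= D1, 1.0 - 5.0 * np.exp(-R1 / w),
                    5.0 * np.exp(-w / R2))

def coverage(lam, gamma_db=0.0, radius=1.5, trials=500):
    gamma, hits = 10.0 ** (gamma_db / 10.0), 0
    for _ in range(trials):
        n = RNG.poisson(lam * np.pi * radius ** 2)
        if n == 0:
            continue                              # no BS at all: outage
        r = radius * np.sqrt(RNG.random(n))       # uniform in the disc
        w = np.sqrt(r ** 2 + L_KM ** 2)           # 3D BS-UE distances
        los = RNG.random(n) < p_los(w)
        zeta = np.where(los, A_L * w ** -AL_EXP, A_NL * w ** -ANL_EXP)
        k = int(np.argmax(zeta))                  # serving BS (smallest PL)
        h = RNG.exponential(size=n)               # Rayleigh power gains
        signal = P_TX * zeta[k] * h[k]
        interference = P_TX * float(zeta @ h) - signal
        hits += signal / (interference + P_NOISE) > gamma
    return hits / trials

for lam in (10.0, 100.0, 1000.0):                 # BSs/km^2
    print(f"lambda = {lam:6.0f} BSs/km^2 -> p_cov ~ {coverage(lam):.3f}")
```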
§.§ Summary of Previous Findings
As a summary,
to illustrate our findings in <cit.>,
we plot the SCN performance results in terms of the coverage probability in Fig. <ref>.
The results in Fig. <ref> are analytical ones
validated by simulations in <cit.>.
Note that in Fig. <ref>,
L denotes the absolute antenna height difference between BSs and UEs.
As indicated in Subsection <ref>,
L=8.5 m in the current 3GPP assumption for small cell scenarios,
while L=0 m is a futuristic assumption
where BS antennas are installed at the UE height.
Besides, the other parameters used to obtain the results in Fig. <ref> are
α^L=2.09, α^NL=3.75, A^L=10^-10.38, A^NL=10^-14.54, P=24 dBm, P_N=-95 dBm <cit.>.
From Fig. <ref>,
we can draw the following observations:
* Performance impact of WS 1:
The curve with plus markers represents the results in <cit.>,
where we consider the 3GPP multi-piece path loss function with LoS/NLoS transmissions <cit.> and ignore the 3GPP assumptions of WS 2 and NS 1,
i.e., setting L=0 m and deploying an infinite number of UEs.
From those results,
it can be seen that when the BS density is larger than a threshold around 10^2 BSs/km^2,
the coverage probability will continuously decrease as the SCN becomes denser.
This is because
UDNs imply high probabilities of LoS transmissions between BSs and UEs,
which leads to a performance degradation caused by a faster growth of the interference power compared with the signal power <cit.>.
This is due to the transition of a large number of interference paths from NLoS (usually with a large path loss exponent α^NL) to LoS (usually with a small path loss exponent α^L).
* Performance impact of WS 2:
The curve with square markers represents the results in <cit.>,
where we consider the 3GPP assumptions of multi-piece path loss function with LoS/NLoS transmissions and L=8.5 m <cit.>,
while still keeping the assumption of an infinite number of UEs.
From those results,
it can be seen that the coverage probability shows a concerning trajectory toward zero when the BS density is larger than 10^3 BSs/km^2.
This is because
UDNs make the antenna height difference between BSs and UEs non-negligible,
which gives rise to another performance degradation due to a cap on the signal power,
resulting from the bounded minimum distance between a UE and its serving BS <cit.>.
* Performance impact of NS 1:
The curves with circle and triangle markers represent the results in <cit.>,
where we consider the 3GPP assumptions of multi-piece path loss function with LoS/NLoS transmissions and a finite UE density of 300UEs/km^2 (a typical
UE density in 5G <cit.>).
Moreover, both the assumptions of L=0 m and L=8.5 m are investigated in Fig. <ref>.
As we can observe,
when the BS density surpasses the UE density, i.e., 300BSs/km^2,
thus creating a surplus of BSs,
the coverage probability will continuously increase.
Such performance behaviour of the coverage probability increasing in UDNs is referred to as the Coverage Probability Takeoff in <cit.>.
The intuition behind the Coverage Probability Takeoff is that
UDNs provide a surplus of BSs with respect to UEs,
which provides a performance improvement,
thanks to the BS diversity gain and the BS idle mode operation.
In more detail,
as discussed in <ref>,
since the UE density is finite in practical networks,
a large number of BSs could switch off their transmission modules in a UDN,
thus entering idle mode,
if there is no active UE within their coverage areas.
This helps to mitigate unnecessary inter-cell interference and reduce energy consumption.
§ DISCUSSION ON THE INCORPORATION OF MORE 3GPP ASSUMPTIONS INTO THE MODELING
To investigate whether the previous conclusions still hold in more practical network scenarios,
additional assumptions <cit.> will also be considered in our analysis through simulations.
The results will be discussed in Section <ref>.
Such additional practical assumptions are the 3GPP assumptions of WS 3, WS 4, WS 5 and NS 2 in Table <ref>,
which are presented in the sequel.
Note that the authors of <cit.> have recently proposed a new approach of network performance analysis based on HPPP intensity matching,
which facilitates the theoretical study of some of these additional 3GPP assumptions.
§.§ A general multi-path fading model based on Rician fading (WS 3 in Table <ref>)
In SG analyses,
the multi-path fading is usually modelled as Rayleigh fading for simplicity.
However, in the 3GPP,
a more practical model based on generalised Rician fading is widely adopted <cit.>.
Hence,
we consider the practical multi-path Rician fading model defined in the 3GPP <cit.>,
where the K factor in dB scale
(the ratio between the power in the direct path and the power in the other scattered paths)
is modelled as K[dB]=13-0.03w,
where w is defined in (<ref>).
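A minimal sketch of this fading model is given below. It assumes that the distance w in the K-factor formula is expressed in metres, and it normalizes the channel power gain to unit mean so that it can directly replace the Rayleigh gain h in the SINR expression given earlier.

```python
import numpy as np

# Distance-dependent Rician fading sketch for WS 3: K[dB] = 13 - 0.03*w
# (w in metres is our unit assumption), with the power gain normalized so
# that E[gain] = 1 for any K, matching the unit-mean Rayleigh convention.
def rician_gain(w_m, size, rng):
    k = 10.0 ** ((13.0 - 0.03 * w_m) / 10.0)       # linear K factor
    scatter = (rng.standard_normal(size)
               + 1j * rng.standard_normal(size)) / np.sqrt(2.0)
    x = np.sqrt(k / (k + 1.0)) + np.sqrt(1.0 / (k + 1.0)) * scatter
    return np.abs(x) ** 2

rng = np.random.default_rng(2)
for w in (10.0, 100.0, 400.0):                     # metres
    g = rician_gain(w, 200000, rng)
    print(f"w = {w:5.0f} m: K = {13 - 0.03 * w:5.1f} dB, "
          f"mean gain = {g.mean():.3f}, variance = {g.var():.3f}")
```

The shrinking variance at short distances reflects how a strong LoS component makes the channel less random than Rayleigh fading, which is precisely the regime UDNs operate in.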
§.§ A correlated shadow fading model (WS 4 in Table <ref>)
In SG analyses,
the shadow fading is usually not considered or simply modelled as independent and identically distributed (i.i.d.) RVs.
However, in the 3GPP,
a more practical correlated shadow fading is often used <cit.>.
Hence,
we consider the practical correlated shadow fading model defined in 3GPP <cit.>,
where the shadow fading in dB is modelled as a zero-mean Gaussian random variable,
e.g., with a standard deviation of 10dB <cit.>.
The correlation coefficient between the shadow fading values associated with two different BSs is denoted by τ,
where τ=0.5 in <cit.>.
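The following sketch implements a standard construction of such correlated shadowing, splitting the log-normal term of each link into a common UE-side component and an independent BS-side component; this particular decomposition is a common modelling choice rather than something specified in this paper.

```python
import numpy as np

# Correlated shadow fading sketch for WS 4: each BS-UE link gets
# S_i = sigma * (sqrt(tau)*S_common + sqrt(1-tau)*S_i_independent) in dB,
# which yields variance sigma^2 per link and pairwise correlation tau.
def shadow_fading_db(n_bs, sigma_db=10.0, tau=0.5, rng=None):
    rng = rng or np.random.default_rng()
    common = rng.standard_normal()            # shared by all links of a UE
    per_bs = rng.standard_normal(n_bs)        # independent per BS
    return sigma_db * (np.sqrt(tau) * common + np.sqrt(1.0 - tau) * per_bs)

rng = np.random.default_rng(3)
samples = np.array([shadow_fading_db(2, rng=rng) for _ in range(100000)])
rho_emp = np.corrcoef(samples[:, 0], samples[:, 1])[0, 1]
print(f"empirical inter-link correlation: {rho_emp:.3f} (target tau = 0.5)")
```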
§.§ A BS density dependent BS transmission power (WS 5 in Table <ref>)
In SG analyses,
the BS transmission power is usually assumed to be a constant.
However, in the 3GPP,
it is generally agreed that the BS transmission power should decrease as the SCN densifies because the per-cell coverage area shrinks <cit.>.
Hence,
we embrace the practical self-organising BS transmission power framework presented in <cit.>,
in which P varies with the BS density λ.
Specifically, the transmit power of each BS is configured such that it provides a signal-to-noise-ratio (SNR) of η_0=15 dB at the edge of the average coverage area for a UE with NLoS transmissions.
The distance from a cell-edge UE to its serving BS with an average coverage area is calculated by r_0=√(1/(λπ)),
which is the radius of an equivalent disk-shaped coverage area with an area size of 1/λ.
Therefore, the worst-case path loss is given by A^NLr_0^-α^NL
and the required transmission power to enable a η_0 dB SNR in this case can be computed as <cit.>
P(λ) = 10^(η_0/10) P_N/(A^NL r_0^-α^NL).
In Fig. <ref>,
we plot the BS density dependent transmission power in dBm to illustrate this realistic power configuration when η_0=15 dB.
Note that our modelling of P is practical,
covering the cases of macrocells and picocells recommended in the LTE networks.
More specifically, the typical BS densities of LTE macrocells and picocells are several BSs/km^2 and around 50 BSs/km^2 <cit.>, respectively.
As a result,
the typical P of macrocell BSs and picocell BSs is assumed to be 46 dBm and 24 dBm, respectively, in the 3GPP standards <cit.>,
which match well with our modelling.
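The following sketch simply evaluates the above transmission power formula with the parameters used in this paper (η_0=15 dB, P_N=-95 dBm, A^NL=10^-14.54, α^NL=3.75, and distances in km, by our unit assumption); its output can be checked against the typical LTE values quoted above.

```python
import numpy as np

# BS-density-dependent transmission power P(lambda) of WS 5, evaluated in
# dBm; r_0 is the radius of the equivalent disc-shaped coverage area.
ETA0_DB, PN_DBM = 15.0, -95.0
A_NL, ALPHA_NL = 10.0 ** -14.54, 3.75

def tx_power_dbm(lam):
    r0 = np.sqrt(1.0 / (lam * np.pi))               # km
    path_loss_db = -10.0 * np.log10(A_NL * r0 ** -ALPHA_NL)
    return ETA0_DB + PN_DBM + path_loss_db

for lam in (1.0, 3.5, 50.0, 1000.0):                # BSs/km^2
    print(f"lambda = {lam:7.1f} BSs/km^2 -> P = {tx_power_dbm(lam):5.1f} dBm")
```

For a few BSs/km^2 this yields roughly 46 dBm, and around 50 BSs/km^2 it yields roughly 24 dBm, consistent with the macrocell and picocell values mentioned above.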
§.§ A deterministic BS/UE density (NS 2 in Table <ref>)
In SG analyses,
the BS/UE number is usually modelled as a Poisson distributed random variable (RV).
However, in the 3GPP,
a deterministic BS/UE number is commonly used for a given BS/UE density <cit.>.
Hence,
we use deterministic densities λ BSs/km^2 and ρ UEs/km^2 to characterize the BS and UE deployments, respectively,
instead of modelling their numbers as Poisson distributed RVs.
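The difference between the two conventions is illustrated by the following minimal sketch, in which only the fluctuation of the BS number distinguishes the Poisson deployment from the deterministic one; given the number, both place BSs uniformly at random.

```python
import numpy as np

# NS 2 sketch: for a density lambda over a region of area A, the SG analysis
# draws N ~ Poisson(lambda*A), while the 3GPP-style evaluation uses the
# deterministic N = round(lambda*A); positions are uniform in both cases.
def deploy(lam, area_km2, deterministic, rng):
    n = int(round(lam * area_km2)) if deterministic \
        else int(rng.poisson(lam * area_km2))
    side = np.sqrt(area_km2)
    return rng.uniform(0.0, side, size=(n, 2))      # BS positions (km)

rng = np.random.default_rng(4)
lam, area = 100.0, 4.0
counts = [len(deploy(lam, area, False, rng)) for _ in range(10000)]
print(f"Poisson:       mean N = {np.mean(counts):6.1f}, "
      f"std = {np.std(counts):5.1f}")
print(f"deterministic: N = {len(deploy(lam, area, True, rng))} in every drop")
```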
§ MAIN RESULTS ON THE PERFORMANCE IMPACT OF THE ADDITIONAL 3GPP ASSUMPTIONS
On top of the 3GPP assumptions discussed in Subsection <ref>,
in this section,
we consider the 4 additional 3GPP assumptions described in Section <ref>,
and study their performance impacts on UDNs.
More specifically,
using the same parameter values for Fig. <ref>,
we conduct simulations to investigate the coverage probability performance of SCNs while also considering the 3GPP assumptions of WS 3, WS 4, WS 5 and NS 2 in Table <ref>.
The results are plotted in Fig. <ref>.
Comparing Fig. <ref> with Fig. <ref>,
we can draw the following observations:
* Those 4 additional 3GPP assumptions introduced in Section <ref>
do not change the fundamental behaviours of UDNs shown in Fig. <ref>,
i.e.,
* the performance degradation
due to the transition of a large number of interfering links from NLoS to LoS,
when λ∈[10^2,10^3] BSs/km^2,
* the further performance degradation,
due to the cap on the signal power caused by the non-zero antenna height difference between BSs and UEs,
when L=8.5 m and λ is larger than 10^3 BSs/km^2, and
* the performance improvement
when λ is larger than ρ,
thereby creating a surplus of BSs and allowing for idle mode operation to mitigate unnecessary inter-cell interference.
* The performance behaviour of sparse networks (λ∈[10^-1,10^0] BSs/km^2) is different in Fig. <ref> compared with
that in Fig. <ref>.
This is mainly due to the larger BS transmission power used in Fig. <ref> for sparse networks,
as displayed in Fig. <ref>,
which is helpful to remove coverage holes in the noise-limited region.
Based on our knowledge of the successful operation of the existing 2G/3G systems,
the results in Fig. <ref> make more sense than those in Fig. <ref>,
since the macrocell BS transmission power in the 2G/3G systems is indeed much larger than 24 dBm,
which is the case for Fig. <ref>.
Nevertheless, such BS density dependent transmission power has a minor impact on UDNs,
because BSs in UDNs usually work in an interference-limited region,
and thus the BS transmission power in the signal power and that in the aggregate interference power cancel each other out.
This is obvious from the SINR expression in (<ref>).
§ FUTURE WORK
The performance impact of the remaining 5 assumptions in Table <ref>,
i.e., the 3GPP assumptions of WS 6, WS 7, WS 8, NS 3 and NS 4,
are left to our future work, but briefly discussed in the following subsections.
§.§ Discussion on The Remaining 5 Assumptions
* A 3D antenna pattern (WS 6 in Table <ref>):
In SG analyses,
the vertical antenna pattern at each BS is usually ignored for simplicity.
However, in the 3GPP performance evaluations,
it is of good practice to consider 3D antenna patterns,
where the main beam is mechanically and/or electrically tilted downwards to improve the signal power as well as to reduce inter-cell interference <cit.>.
* Multi-antenna and/or multi-BS joint transmissions (WS 7 in Table <ref>):
In SG analyses,
each BS/UE is usually equipped with one omni-directional antenna for simplicity.
However, in the 3GPP performance evaluations,
it is usual to consider multi-antenna transmissions and receptions <cit.>,
even with an enhancement of multi-BS cooperation <cit.>.
The consideration of multi-antenna technologies in small cell networks expands the realm of UDNs,
which opens up new avenues of research topics for further study.
* A proportional fair scheduler (WS 8 in Table <ref>):
In SG analyses,
usually a typical UE is randomly chosen for the performance analysis,
which implies that a round-robin (RR) scheduler is employed in each BS.
However,
in the 3GPP performance evaluations,
a proportional fair (PF) scheduler is often used as an appealing scheduling technique that can offer a better system throughput than the RR scheduler,
while maintaining the fairness among UEs with diverse channel conditions <cit.>.
Some preliminary simulation results on the performance of small cell networks considering the RR scheduler can be found in <cit.>.
* A non-uniform distribution of BSs with some constraints on the minimum BS-to-BS distance (NS 3 in Table <ref>):
In SG analyses,
BSs are usually assumed to be uniformly deployed in the interested network area.
However, in the 3GPP performance evaluations,
small cell clusters are often considered,
and it is forbidden to place any two BSs too close to each other <cit.>.
Such assumption is in line with the realistic network planning to avoid strong inter-cell interference.
It is interesting to note that several recent studies are looking at this aspect from different angles <cit.>.
In particular,
a deterministic hexagonal grid network model <cit.> might be useful for the analysis.
More specifically,
we can construct an idealistic BS deployment on a perfect hexagonal lattice,
and then we can perform a network analysis on such BS deployment to extract an upper-bound of the SINR performance.
Note that the BS deployment on a hexagonal lattice leads to an upper-bound performance because BSs are evenly distributed in the network scenario,
and thus very strong interference due to close proximity is precluded in the analysis <cit.>.
An illustration of such a hexagonal grid network is provided in Fig. <ref>.
* A dynamic time division duplex (TDD) or full duplex (FD) network (NS 4 in Table <ref>):
In SG analyses,
most studies focus on the downlink (DL) network scenario as in <cit.>.
It is of great interest to see whether new conclusions can be drawn for the uplink (UL) network scenario.
Different from the DL,
a fractional power control mechanism is commonly used at the UE side <cit.>.
Moreover,
a new technology,
referred to as dynamic TDD,
has been standardized in the 3GPP <cit.>.
In dynamic TDD,
the DL/UL subframe number in each cell or a cluster of cells can be dynamically changed on a per-frame basis,
i.e., once every 10 milliseconds <cit.>.
Thus,
dynamic TDD can provide a tailored configuration of DL/UL subframe resources for each cell or a cluster of cells at the expense of allowing inter-cell inter-link interference,
e.g., DL transmissions of a cell may interfere with UL ones of a neighboring cell,
and vice versa.
The study of dynamic TDD is particularly important for UDNs because dynamic TDD is the predecessor of full duplex (FD) <cit.> technology,
which has been identified as one of the candidate technologies for 5G.
In more detail,
* In an FD system, a BS can simultaneously transmit to and receive from different UEs,
thus enhancing spectrum reuse, but creating both inter-cell inter-link interference and intra-cell inter-link interference,
a.k.a., self-interference <cit.>.
* The main difference between an FD system and a dynamic TDD one is that self-interference does not exist in dynamic TDD <cit.>.
The inter-link interference is expected to have a non-negligible impact on the performance of UDNs in the context of a dynamic TDD or FD network,
because the DL generally overpowers the UL,
which creates high imbalance among interference links in UDNs.
§.§ Qualitative Results vs. Quantitative Results
Finally,
it is very important to point out that even if an analysis can treat all the assumptions in Table <ref>,
a non-negligible gap may still exist between the analytical results and the performance results in reality,
because of the following non-tractable factors <cit.>:
* non-full-buffer traffic,
* hybrid automatic repeat request (HARQ) processes,
* non-linear channel measurement errors,
* quantized channel state information (CSI),
* UE misreading of control signalling,
* discrete modulation and coding schemes,
* UE mobility and handover procedures,
* imperfect backhaul links, and so on.
Hence,
in the context of the performance analysis for UDNs,
a high priority should be given to identifying the performance trends qualitatively,
rather than improving the numerical results quantitatively.
§ CONCLUSION
In this paper,
we have conducted a performance evaluation of UDNs,
and identified which modelling factors really matter in theoretical analyses.
From our study,
we have identified that 3 factors/models have a major impact on the performance of UDNs,
and they should be considered when performing theoretical analyses:
* a multi-piece path loss model with LoS/NLoS transmissions;
* a non-zero antenna height difference between BSs and UEs;
* a finite BS/UE density.
In contrast,
we have found that the following 4 factors/models have a minor impact on the performance of UDNs,
i.e., change the results quantitatively but not qualitatively,
and thus their incorporation into theoretical analyses is less urgent:
* a general multi-path fading model based on Rician fading;
* a correlated shadow fading model;
* a BS density dependent transmission power;
* a deterministic BS/user density.
Finally, there are 5 factors/models for future study:
* a BS vertical antenna pattern;
* multi-antenna and/or multi-BS joint transmissions;
* a proportional fair BS scheduler;
* a non-uniform distribution of BSs;
* a dynamic TDD or FD network.
Our conclusions can guide researchers to down-select the assumptions in their theoretical analyses,
so as to avoid unnecessarily complicated results,
while still capturing the fundamentals of UDNs in a meaningful way.
|
http://arxiv.org/abs/1701.07703v1 | 20170126135355 | 3D Printing of Polymer Bonded Rare-Earth Magnets With a Variable Magnetic Compound Density for a Predefined Stray Field | [
"Christian Huber",
"Claas Abert",
"Florian Bruckner",
"Martin Groenefeld",
"Stephan Schuschnigg",
"Iulian Teliban",
"Christoph Vogler",
"Gregor Wautischer",
"Roman Windl",
"Dieter Suess"
] | physics.ins-det | [
"physics.ins-det",
"cond-mat.mtrl-sci",
"physics.comp-ph",
"physics.pop-ph"
] |
Correspondence to: christian.huber@tuwien.ac.at
Institute of Solid State Physics, Vienna University of Technology, 1040 Vienna, Austria
Christian Doppler Laboratory for Advanced Magnetic Sensing and Materials, 1040 Vienna, Austria
Institute of Solid State Physics, Vienna University of Technology, 1040 Vienna, Austria
Christian Doppler Laboratory for Advanced Magnetic Sensing and Materials, 1040 Vienna, Austria
Institute of Solid State Physics, Vienna University of Technology, 1040 Vienna, Austria
Christian Doppler Laboratory for Advanced Magnetic Sensing and Materials, 1040 Vienna, Austria
Magnetfabrik Bonn GmbH, 53119 Bonn, Germany
Department of Polymer Engineering and Science, Montanuniversitaet Leoben, 8700 Leoben, Austria
Magnetfabrik Bonn GmbH, 53119 Bonn, Germany
Institute of Solid State Physics, Vienna University of Technology, 1040 Vienna, Austria
Institute of Solid State Physics, Vienna University of Technology, 1040 Vienna, Austria
Christian Doppler Laboratory for Advanced Magnetic Sensing and Materials, 1040 Vienna, Austria
Institute of Solid State Physics, Vienna University of Technology, 1040 Vienna, Austria
Christian Doppler Laboratory for Advanced Magnetic Sensing and Materials, 1040 Vienna, Austria
Institute of Solid State Physics, Vienna University of Technology, 1040 Vienna, Austria
Christian Doppler Laboratory for Advanced Magnetic Sensing and Materials, 1040 Vienna, Austria
Additive manufacturing of polymer bonded magnets is a recently developed technique for single-unit production and for structures that have been impossible to manufacture previously. It also opens up new possibilities for creating a specific stray field around the magnet. The current work presents a method to 3D print polymer bonded magnets with a variable magnetic compound density distribution. A low-cost, end-user 3D printer with a mixing extruder is used to mix permanent magnetic filaments with pure PA12 filaments. The magnetic filaments are compounded, extruded, and characterized for the printing process. To deduce the quality of the manufactured magnets with a variable compound density, an inverse stray field framework is used. The effectiveness of the printing process and the simulation method is shown. It can also be used to manufacture magnets that produce a predefined stray field in a given region. Examples for sensor applications are presented. This setup and simulation framework allows the design and manufacturing of polymer bonded permanent magnets which are impossible to create with conventional methods.
3D Printing of Polymer Bonded Rare-Earth Magnets With a Variable Magnetic Compound Density for a Predefined Stray Field
D. Suess
December 30, 2023
=======================================================================================================================
§ INTRODUCTION
Additive manufacturing is an affordable, rapid technique to manufacture models, tools, prototypes, or end products. The production is carried out directly from formless (liquids, powders, etc.) or form-neutral (tape, wire) material, mostly by means of thermal or chemical processes. No specific tools are required for a specific object with a possibly complex shape. A well established additive manufacturing method is the fused deposition modeling (FDM) technology. FDM, also referred to as 3D printing, is a process that uses wire-shaped thermoplastic filaments. The filament is heated just above its softening point with the aid of a moving heated extruder. Molten thermoplastic is pressed out of the printer head nozzle and builds up the object layer by layer on the already solidified material on the building platform <cit.>. Since 3D printers are nowadays affordable for end-users, a boom of new possibilities has been triggered <cit.>. 3D print technology is a fast growing field for single-unit production, and it allows the production of structures that have been difficult or impossible to build before.
NdFeB magnets are mainly divided into sintered and polymer bonded magnets. On the one hand, sintered magnets have the highest maximum energy product (BH)_max, on the other hand polymer bonded magnets enable the manufacturing of complex shapes and magnetization structures, but with a lower (BH)_max <cit.>. Therefore, they are widely used wherever product cost is a major consideration over magnetic performance <cit.>. Bonded magnets offer a wide application range from sensor to actuator applications <cit.>.
Polymer bonded magnets are composites with permanent-magnet powder embedded in a polymer binder matrix. Hard magnetic particles, ferrite (e.g. Sr, Ba), and rare-earth materials (e.g. NdFeB) with a volume filler content between 40 and 65 vol.% are inserted. These compounds can be further processed with injection molding or extrusion <cit.>. The NdFeB particles for the compounds are produced by a melt spinning process. To achieve better rheological behavior, spherical particles are preferred, which can be produced by an inert gas atomization process. To reduce assembling costs and reach more flexibility, magnetically isotropic powder is preferred. The high filler content increases the viscosity of the melted compound <cit.>. To avoid clogging of the nozzle, the matrix polymer should be a highly flowable material; good mechanical properties are an important aspect, too. Polyamides (PA6, PA11, and PA12) offer a good combination of these qualities.
Recently it was shown that an end-user 3D printer can be used to print polymer bonded rare-earth magnets with a complex shape <cit.>. A prefabricated magnetic compound (Neofer ® 25/60p) from Magnetfabrik Bonn GmbH has been used. It consists of 90 wt.% NdFeB grains in a PA11 matrix. The effectiveness of this printing method is demonstrated by the fabrication of a magnet with a complex shape that is known to produce a specific stray field above the printed magnet. Structures with a size of under 0.8 mm, and a layer height of under 0.1 mm, are possible. Contrary to the well established, affordable, accurate, and high resolution end-user 3D printing technology, big area additive manufacturing (BAAM) of large scale NdFeB magnets is presented in <cit.>. The BAAM method operates on the same principle as a conventional 3D printer. An advantage of this method is the possibility to manufacture large scale objects; disadvantages are the high system costs and the impossibility of printing fine structures due to the large printer nozzle size.
However, at the moment no other single-unit manufacturing technology is available for the production of magnets with complex shapes that also offers the opportunity to fabricate objects without material waste and with a minimum amount of source material. This can be an important aspect for the reduction of rare-earth elements in permanent magnets <cit.>.
In this work, a method to produce polymer bonded permanent magnets with a variable magnetic compound density along the printing direction is presented. The magnetic powder filler fraction ϱ_m is proportional to the remanence B_r. This can be used to shape the magnetic field without changing the topology of the object. First, the effectiveness of the method is shown. Furthermore, an inverse stray field method based on finite elements is developed, which allows us to deduce the compound density and magnetization distribution of the magnet from stray field measurements. This method can be used to evaluate the quality of the printed magnets. Moreover, the inverse method allows us to find an optimal magnetization density distribution for a given target field.
§ RESULTS
§.§ Predefined Magnetic Compound Density
The mixing extruder of an end-user 3D printer can mix two or more materials during the printing process. In this article, the mixing extruder is used to mix magnetic compound material with pure commercial PA12. The magnetic compound consists of 85 wt.% NdFeB particles inside a PA12 matrix. The commercial magnetically isotropic powder MQP-S-11-9 with the chemical composition NdPrFeCoTiZrB from Magnequench Corporation is used. The powder is produced by an atomization process followed by heat treatment. The particles are of spherical morphology, with a diameter of approximately 45±20 µm (Supplementary Fig. 1). The magnetic compound is extruded into suitable filaments with a diameter of 1.75±0.1 mm and a magnetic filler content of 85 wt.% (43 vol.%).
The mixing extruder can continuously change the ratio between both materials. The magnetic compound density is thus a function of the layer number and, correspondingly, of the y-coordinate r_y. To determine the magnetic properties of prints with different magnetic filler fractions, hysteresis measurements are performed and pictured in Fig. <ref>(a). Volumetric mass density measurements yield ϱ=3.2 g/cm^3 for the maximum magnetic filler fraction of ϱ_m=100 %. This is 15 % lower than the theoretical density of the compound. The compound exhibits a remanence B_r=314 mT and a coercivity H_cj=745 kA/m. The remanence B_r decreases linearly with the magnetic compound density ϱ_m (Fig. <ref>(b)). This means that the maximum energy product ((BH)_max ∼ B^2) is proportional to ϱ_m^2. To benchmark the variable magnetic compound printing method, a cuboid of size 10×40×10 mm^3 (L×W×H) with an absolute value magnetic density function (ϱ_m=100 %·|r_y|/(W/2)) is printed (Fig. <ref>(c)). The sample is magnetized inside an electromagnet with 1.9 T along the z-axis. A volume scan of the produced stray field above and under the magnetized cuboid is pictured in Fig. <ref>(d) <cit.>. This measurement will be used to reconstruct the magnetization distribution inside the magnet and therefore to deduce the quality of the printed magnet.
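For illustration, the linear remanence model and the printed density profile can be written down in a few lines. The following minimal Python sketch takes only B_r = 314 mT at ϱ_m = 100 % and the cuboid width W = 40 mm from the text; everything else is illustrative.

```python
# Minimal sketch of the linear remanence model B_r(rho_m) and the resulting
# (BH)_max ~ rho_m^2 scaling, together with the absolute-value density
# profile of the benchmark cuboid along y.
import numpy as np

B_r_max = 314.0  # mT, remanence of the pure compound (rho_m = 100 %)

def remanence(rho_m):
    """Remanence in mT for a magnetic filler fraction rho_m in percent."""
    return B_r_max * rho_m / 100.0

def energy_product_scaling(rho_m):
    """(BH)_max scales with B^2, i.e. quadratically in rho_m."""
    return (rho_m / 100.0) ** 2

W = 40.0                                 # mm, width of the cuboid
r_y = np.linspace(-W / 2, W / 2, 101)    # mm
rho_m = 100.0 / (W / 2) * np.abs(r_y)    # percent: 0 % in the middle, 100 % at the edges

print(remanence(rho_m[:3]))              # remanence near the cuboid edge
```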
§.§ Inverse Problem
The forward stray field computation problem is defined by finding the stray field for a given magnetization. Well established finite element method (FEM) algorithms exist for the stray field calculation of permanent magnets <cit.>. In contrast to the forward problem, the inverse problem, where the magnetization within the magnet is reconstructed from a given magnetic field outside the magnet, is much harder to solve (Supplementary Fig. 2). The inherent difficulty of this inverse problem is due to the facts that (i) the inverse problem is not unique, and (ii) the underlying system of equations is ill-conditioned.
In most cases, no unique solution is available for this kind of problem. A method to solve the inverse problem by using an adjoint approach exists <cit.>. In this article, a pure FEM method is used, based on the FEM library FEniCS <cit.> and the library dolfin-adjoint <cit.>, which automatically derives the adjoint equation of a given forward problem. Dolfin-adjoint contains a framework to solve partial differential equation (PDE) constrained optimization problems.
The forward problem is a well-posed problem: a solution exists and it is unique. As mentioned above, the inverse problem is ill-posed. To provide an approximate solution of the inverse problem, additional information is necessary. Different methods exist to find reasonable results <cit.>. Here, Tikhonov regularization is implemented in the inverse stray field computation framework. Solving the following minimization problem yields the unknown magnetization M⃗ for each finite element of the model in the region Ω_m (Supplementary Fig. 3):
min_M⃗ ( ∫_Ω_h ‖h⃗_sim - h⃗_exp‖^2 dr⃗ + α ∫_Ω_m ‖∇M⃗‖^2 dr⃗ )
where h⃗_sim is the stray field calculated by the forward problem in a defined region Ω_h, with the magnetic potential u, and h⃗_exp is the measured or target stray field in the same region Ω_h. α ⩾ 0 is the Tikhonov regularization parameter weighting the second (regularization) integral; in this case α has units of m^2.
The main challenge for this regularization is the proper choice of a suitable parameter α. If α is too small, the solution will be dominated by the contributions from the data errors; if α is too large, the solution is a poor approximation of the original problem. A well-known method to find an optimal α is the so-called L-curve method <cit.>. For this method the solution norm ‖∇M⃗‖_2^2 is plotted over the residual norm ‖h⃗_sim - h⃗_exp‖_2^2 on a log-log scale for varying α ∈ [0,∞). The optimal regularization parameter α is where the curve has the maximum curvature (the corner of the L-curve). This α value gives a good compromise between the change of the residual norm and the reduction of the solution norm (Supplementary Fig. 4). To solve the minimization problem in Eq. <ref>, the IPOPT software library for large scale nonlinear optimization is used <cit.>.
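To make the procedure concrete, the sketch below solves a toy, fully discretized version of the Tikhonov problem and locates the L-curve corner by maximum curvature. The dense matrix K is a stand-in for the FEM forward operator, D is a first-difference approximation of the gradient in the regularization term, and all sizes and ranges are illustrative.

```python
# Toy discretization of the Tikhonov problem and the L-curve method.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_m = 120, 80
K = rng.standard_normal((n_obs, n_m)) / np.sqrt(n_obs)   # stand-in forward operator
m_true = np.abs(np.linspace(-1, 1, n_m))                 # |r_y|-shaped profile
h_exp = K @ m_true + 0.01 * rng.standard_normal(n_obs)   # noisy "measurement"

D = np.diff(np.eye(n_m), axis=0)                         # discrete gradient

def solve_tikhonov(alpha):
    # min_m ||K m - h_exp||^2 + alpha ||D m||^2  via the normal equations
    A = K.T @ K + alpha * D.T @ D
    m = np.linalg.solve(A, K.T @ h_exp)
    return m, np.linalg.norm(K @ m - h_exp), np.linalg.norm(D @ m)

alphas = np.logspace(-8, 2, 60)
res, sol = np.empty(len(alphas)), np.empty(len(alphas))
for i, a in enumerate(alphas):
    _, res[i], sol[i] = solve_tikhonov(a)

# L-curve corner: point of maximum curvature in log-log coordinates
x, y = np.log(res), np.log(sol)
dx, dy = np.gradient(x), np.gradient(y)
ddx, ddy = np.gradient(dx), np.gradient(dy)
curvature = np.abs(dx * ddy - dy * ddx) / ((dx**2 + dy**2) ** 1.5 + 1e-12)
alpha_opt = alphas[np.argmax(curvature)]
print(f"alpha_opt ~ {alpha_opt:.2e}")
```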
§.§ Reconstructed Magnetization
To benchmark the inverse stray field framework and deduce the quality of the 3D printed magnetic cuboid with an absolute value magnetic density distribution along the y-axis, the printed and magnetized magnet is scanned on both sides in a volume of 40×12×2 mm^3 (L×W×H) with a spatial resolution of 0.2 mm in the magnetization direction r_z (Fig. <ref>(d)). The measured h⃗_exp is the input for the inverse stray field calculation. The simulation is performed for a range of Tikhonov regularization parameters α=10^x m^2 with x ∈ [-9.4,3] and a step size of 0.4. Fig. <ref>(a) shows the L-curve with the different α values, and the optimal solution with α_opt=6.4·10^-3 m^2. Owing to the error of the measurement, the L-curve looks different from the ideal one (Supplementary Fig. 4); however, the criterion for an optimal α is clearly fulfilled. Fig. <ref>(b) illustrates the magnetization distribution M_z, which is proportional to the magnetic density distribution inside the magnet. A line scan 1.5 mm above the magnet compared with the simulation results is shown in Fig. <ref>(c). It shows good agreement between the measurement and the results from the inverse stray field calculation. The distribution along the y-axis in the middle of the magnet is plotted in Fig. <ref>(d). The reconstructed magnetization M_z_inv fits very well the ideal magnetization distribution M_ideal=M_max·|r_y|/(W/2) mT, or ϱ_m=100 %·|r_y|/(W/2) for the magnetic density distribution, where M_max is the maximum magnetization and W is the width of the magnet. The reconstructed components M_x_inv and M_y_inv are small compared to the z component. This complies with the expectations for the printed permanent magnet. A supplementary animation shows the change of the magnetization distribution and the resulting stray field for different Tikhonov regularization parameters α. If α→ 0, the magnetic compound density distribution ϱ_m is unphysical, but the stray field fits the measurement data. If α→∞, M_z_inv=1 mT for the whole magnetic region and, therefore, the stray field above the magnet does not match the measurement data.
§.§ Predefined Stray Field
Instead of using the inverse stray field method to investigate already printed magnets, the method can also be used to design magnets with specific stray field properties. As examples, we compute optimal magnetization distributions for a hollow cylinder geometry and different target fields inside the cylinder. The hollow cylinders have the dimensions (in mm) ∅25, ∅20, 50 (d_outer, d_inner, L), with a linear and a constant stray field distribution inside the hollow magnet. Fig. <ref>(a) shows the model of the magnet with the magnetic region Ω_m and the region Ω_h for the predefined stray field. The printing direction is along the z-axis. In this case, the variable h⃗_exp in Eq. <ref> does not represent measurement data but rather the desired stray field distribution in Ω_h. M_x and M_y are fixed to zero, and the maximum of M_z is limited to that of the magnetic material used.
Otherwise the real printed magnet cannot reach the desired magnetization. Two different stray field distributions are tested. The first is a constant magnetic flux density of B_z=5.5 mT along the z-axis for r_z ∈ [10,40] mm; the second is a linearly increasing field of B_z=2+0.15r_z mT/mm along the z-axis for r_z ∈ [10,40] mm. A constant magnetic field inside a hollow cylinder can be used to calibrate sensors where the sensor position is changing. A linearly increasing field can be used to realize a linear positioning system; in this case, a 1D sensor is enough for an accurate position detection system <cit.>. The inverse stray field simulations for both examples are performed for various Tikhonov regularization parameters α=10^x m^2 with x ∈ [-10,1]. The L-curves for both simulations are presented in Fig. <ref>(e). α_opt is clearly visible and is marked in green (α_opt=2.5·10^-7 m^2 for both designs). The resulting magnetic density distribution along the z-axis is plotted in Fig. <ref>(b). Fig. <ref>(c) shows the comparison between simulations and measurements in the middle of the hollow cylinders. Inside the field boxes with the dimensions ∅2 mm, 30 mm (d, L), good conformity between printed and simulated magnets is given.
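For concreteness, the two target fields can be specified on a grid of evaluation points as follows; the axial sampling of the field box is illustrative, and these arrays play the role of h⃗_exp in the minimization.

```python
# Sketch of the two prescribed target fields inside the hollow cylinder,
# sampled along the axis for r_z in [10, 40] mm as in the text.
import numpy as np

r_z = np.linspace(10.0, 40.0, 61)      # mm, axial positions inside the bore

B_constant = np.full_like(r_z, 5.5)    # mT, constant design
B_linear = 2.0 + 0.15 * r_z            # mT, linear design, 0.15 mT/mm slope

# The inverse solver is asked to find M_z whose simulated field matches
# these targets in Omega_h instead of matching measurement data.
```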
The error between measurements and simulations is plotted in Fig. <ref>(a). The error changes along the z-axis and is around 6 % for the constant and 4 % for the linear design. Another important feature of this magnetic design is its insensitivity to eccentric measurements along the z-axis. Fig. <ref>(b) shows a plot of the error of eccentric measurements (r=2.5 mm) of B_z at three different planes (r_z=15, 25, 35 mm) inside the hollow cylinder. The error is lower than 2 %. A picture of one of the printed magnets is presented in Fig. <ref>(d).
§ DISCUSSION
Additive manufacturing of polymer bonded magnets has the advantage of producing magnets with a minimum of cost and time. This article presents a method to 3D print permanent magnets with a variable compound density distribution along the printing direction. With a commercially available end-user 3D printer and a mixing extruder, polymer bonded magnetic filament can be mixed with pure PA12 filament. Hysteresis measurements with different filler fractions are performed to obtain the relation between remanence and compound density. The remanence decreases linearly with the compound density.
To assess the quality of the prints, an inverse stray field simulation framework is developed. No unique solution exists for this kind of inverse problem; therefore, Tikhonov regularization is used to find reasonable solutions of the optimization problem. A cuboid with an absolute value function of the magnetic density is printed, and the inverse stray field code is benchmarked against measurements of this sample.
The inverse stray field code can also be used to design magnets with a predefined target stray field in a given region outside of the magnet. The optimal magnetization density of a hollow cylinder for a targeted constant or linear stray field is computed. The magnetic compound density distribution along the z-axis is optimized and printed with our setup. Detailed stray field measurements show excellent agreement between simulation and measurement. Eccentric stray field measurements show a low dependence on the sensor position. This is an important aspect for linear position measurements.
With this setup and simulation framework, the manufacturing of magnets is possible that cannot be created with conventional methods. It can be used to create magnets with a specific stray field distribution for various applications. The simulation method can also be used to improve the performance of multipolar polymer bonded magnets produced by injection molding.
At the moment, only prints with a variable magnetic compound density along the z-axis are possible. This restriction could be lifted by an improved slicing program. The filament diameter has a big influence on the quality of the printed structures, because with a constant feeding rate a varying diameter changes the volume flow through the nozzle, which leads to patchy printing results. There is potential here for improvement to reduce the error between simulation and measurement. Studies of how the volumetric mass density of the printed magnets can be improved are also a subject for further research.
§ METHODS
§.§ Printing/Simulation Models
The models for the 3D printing process and the simulations are created in Salome 7.6. Meshing of the simulation models is performed in Salome 7.6 with the Netgen algorithm and tetrahedral elements (Supplementary Fig. 3) <cit.>. Conversion of the .med Salome output file to the FEniCS .xml format is performed with Gmsh 2.10.1. For the manufacturing of the objects, the STL data from Salome are sliced using the Slic3r software. The resulting G-code is further modified by a customized Python script to print objects with a layer-dependent magnetic compound density distribution.
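To make this post-processing step concrete, a minimal sketch of such a script is given below. It assumes Marlin-style mixing commands (M163 to set a virtual tool's mix weights, M164 to commit them) and Slic3r/Cura-style ";LAYER:" comments for layer detection; the actual firmware dialect of our printer and the density profile used for the printed samples may differ.

```python
# Minimal sketch of the G-code post-processing step: insert a mixing-ratio
# command at every layer change so the magnetic fraction follows a
# prescribed profile along the printing direction.
def density_profile(layer, n_layers):
    """Magnetic compound fraction in [0, 1] as a function of layer number."""
    return layer / max(n_layers - 1, 1)   # example: linear ramp along z

def insert_mixing_commands(lines, n_layers):
    out = []
    for line in lines:
        out.append(line)
        if line.startswith(";LAYER:"):
            layer = int(line.split(":")[1])
            frac = density_profile(layer, n_layers)
            out.append(f"M163 S0 P{frac:.3f}")        # magnetic filament weight
            out.append(f"M163 S1 P{1.0 - frac:.3f}")  # pure PA12 weight
            out.append("M164 S0")                     # commit virtual tool 0
    return out

with open("magnet.gcode") as f:                       # hypothetical input file
    gcode = f.read().splitlines()
new_gcode = insert_mixing_commands(gcode, n_layers=100)
with open("magnet_mixed.gcode", "w") as f:
    f.write("\n".join(new_gcode))
```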
§.§ Stray Field Simulation
In a simply connected domain without currents, the stray field h⃗_m of a magnetic body is <cit.>

h⃗_m = -∇ u

with the magnetic scalar potential u:

Δ u = ∇·M⃗ in Ω

and the Dirichlet boundary condition

u = 0 on ∂Ω,

where M⃗ is the magnetization, and Ω=Ω_m ∪Ω_h ∪Ω_a is the union of the different regions (Supplementary Fig. 3).
This PDE is solved using a FEM implementation based on FEniCS 2016.1 <cit.>. The outer air region Ω_a is necessary to approximate the far field of the potential u in Ω. An air region Ω_a around five times larger than the magnetic region Ω_m gives a good compromise between the accuracy of the stray field calculation and computing time. To reduce computing time, the mesh is coarser at the edge of the outer air region.
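A minimal sketch of this forward solve in legacy FEniCS syntax is given below. The mesh file name and the uniform magnetization are placeholders; in the real model M⃗ is nonzero only in Ω_m, obtained from the printed density profile, and the air-region mesh grading described above is omitted for brevity.

```python
# Minimal legacy-FEniCS sketch of the forward problem: solve Delta u = div(M)
# with u = 0 on the outer boundary, then recover h_m = -grad(u).
from dolfin import *

mesh = Mesh("magnet_with_air.xml")                 # hypothetical mesh file
V = FunctionSpace(mesh, "CG", 1)                   # P1 space for the potential u
VM = VectorFunctionSpace(mesh, "DG", 0)            # piecewise-constant magnetization

M = interpolate(Constant((0.0, 0.0, 0.25)), VM)    # placeholder: uniform M_z

u = TrialFunction(V)
v = TestFunction(V)
bc = DirichletBC(V, Constant(0.0), "on_boundary")  # u = 0 on the outer air boundary

# Weak form of Delta u = div(M), integrated by parts so M enters directly:
a = inner(grad(u), grad(v)) * dx
L = inner(M, grad(v)) * dx

u_sol = Function(V)
solve(a == L, u_sol, bc)

h_m = project(-grad(u_sol), VM)                    # stray field h_m = -grad(u)
```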
§.§ Printer
For the printing process, the conventional end-user 3D printer Builder from Code P is chosen. This printer works on the fused deposition modeling (FDM) principle: the system creates the object layer by layer from a meltable thermoplastic. It has a maximum building size of 220×210×164 mm^3 (L×W×H). Structures with a layer height resolution between 0.05 and 0.3 mm can be printed. The printing speed ranges from 10 to 80 mm/s, and the traveling speed from 10 to 200 mm/s. The nozzle diameter is 0.4 mm, and by means of a dual feed extruder two different compound materials can be mixed, or defined regions of the object can be printed with different materials. The maximum nozzle temperature is 260 ^∘C. For better adhesion of the printed objects, the printer bed can be heated up to 80 ^∘C. The optimal printing parameters for our setup and our magnetic compound filaments are listed in Tab. <ref>.
§.§ Filament Manufacturing
The polymer bonded magnetic compound consists of polyamide 12 (PA12, also known as Nylon 12) from Polyking (221-TR) and the magnetically isotropic powder MQP-S-11-9 with the chemical composition NdPrFeCoTiZrB from Magnequench Corporation. These source materials are compounded and extruded into suitable filaments with the desired ratio of 85 wt.% MQP-S-11-9 powder and 15 wt.% PA12. The extrusion is performed at the University of Leoben with a Leistritz ZSE 18 HPe-48D twin-screw extruder. The materials are dried at 80 ^∘C for 8 hours. The four heating zones of the twin-screw extruder are individually temperature controlled; the feed section is the coolest with 80 ^∘C, and the temperature increases up to the shaping die, which has a temperature of 260 ^∘C. The round orifice of the die has a diameter of 1.75 mm. The hot extrudate is hauled off and cooled on a cooled conveyor belt. The diameter and tolerances of the filament are monitored by a Sikora Laser Series 2000 diameter measuring system. The extrusion speed is adjusted to obtain a filament with a diameter of 1.75 mm. The manufactured filament is spooled with a diameter of around 0.5 m to avoid breaking the brittle magnetic filament.
§.§ Material Characterization
The fraction of NdFeB particles in the PA12 matrix is measured by thermogravimetric analysis (TGA). The model TGA 2050 from TA-Instruments has a resolution of 0.2 µg and a temperature range of 25-1000 ^∘C. In our case, a heating rate of 10 K/min and a nitrogen atmosphere are used to avoid oxidation of the particles. The TGA measurement yields a filler content of 85 wt.%. Hysteresis measurements are performed for different magnetic compound fractions ϱ_m. With the dual extruder of our printer, cubes with a size of 7 mm are printed and afterwards post-processed to obtain cubes with an edge length a of 5±0.02 mm. The hysteresis is measured by pulsed field magnetometry (PFM) (Hirst PFM11) <cit.>. All measurements are carried out with the same parameters: a temperature of 297 K and a magnetic field of up to 4 T peak field. The internal field is H_int=H_ext-NJ/μ_0, where H_ext is the external field, N is the average demagnetization factor of a cube (N=1/3) <cit.>, and J is the material polarization. The morphology of the NdFeB particles is identified by scanning electron microscope (FEI Quanta 200 FEG) images. The samples are Au coated with a Quorum sputter coater (Q150T S). The particles are of spherical shape with a diameter of approximately 45±20 µm (Supplementary Fig. 1).
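The demagnetizing-field correction is a one-line computation; a sketch with illustrative field and polarization samples follows.

```python
# Demagnetizing-field correction applied to the PFM hysteresis data,
# H_int = H_ext - N*J/mu_0 with N = 1/3 for a cube; arrays are illustrative.
import numpy as np

mu_0 = 4e-7 * np.pi        # Vs/Am
N = 1.0 / 3.0              # average demagnetization factor of a cube

H_ext = np.array([0.0, 1e5, 2e5, 4e5])   # A/m, applied field samples
J = np.array([0.0, 0.10, 0.18, 0.25])    # T, measured polarization

H_int = H_ext - N * J / mu_0             # A/m, internal field
```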
§.§ Magnetization
The objects with a variable magnetic compound density are magnetized inside an electromagnet. It is a self-built, water-cooled electromagnet, powered by a low voltage power supply (Siemens NTN 35000-200). The maximum output current is 150 A at an operating voltage of 200 V. This setup reaches a maximum magnetic flux density of 1.9 T inside the electromagnet in permanent operation mode. The gap between the pole shoes is 50 mm.
§.§ Stray Field Measurement
To measure the stray field of the printed permanent magnets, the 3D printer is upgraded to a full 3D magnetic flux density measurement system. As sensing device, a 3D Hall sensor TLV493D-A1B6 from Infineon is used. A Genuino 101 microcontroller is programmed to read out the components of B⃗ with a frequency of 3 kHz. The sensor has a measurement range of ±130 mT and a measured detectivity of 40 µT/√(Hz) for static magnetic fields. A Python script controls the movement of the 3D printer and saves the stray field measurement data for the current position of the sensor. This setup has a spatial resolution of 0.05 mm along the z-axis and 0.1 mm along the x and y-axis. To skip an elaborate adjustment and alignment of the sensor, a calibration method based on detailed stray field simulations is used <cit.>. With this method the angles, sensitivity, and offset of the sensor can be calibrated. In our case the sensor is simply attached to the extruder head with a self-printed suspension without any adjustment (Supplementary Fig. 5). With this setup the stray field can be scanned in 1D, 2D, and 3D around a complex magnetic structure.
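The following sketch illustrates such a scanning loop. The serial ports, the G-code dialect, and the line format streamed by the microcontroller are assumptions, not the exact protocol of our setup.

```python
# Sketch of the stray field scan: move the printer head on a grid with G1
# commands and record the Hall sensor reading at each point.
import csv
import time

import numpy as np
import serial

printer = serial.Serial("/dev/ttyUSB0", 115200, timeout=2)   # hypothetical port
sensor = serial.Serial("/dev/ttyACM0", 115200, timeout=2)    # hypothetical port

def move_to(x, y, z, feed=3000):
    printer.write(f"G1 X{x:.2f} Y{y:.2f} Z{z:.2f} F{feed}\n".encode())
    printer.write(b"M400\n")            # wait until the move is finished
    printer.readline()                  # consume the "ok" acknowledgement

with open("scan.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y", "z", "Bx", "By", "Bz"])
    for x in np.arange(0.0, 40.0, 0.1):          # mm grid over the magnet
        for y in np.arange(0.0, 12.0, 0.1):
            move_to(x, y, z=1.5)
            time.sleep(0.05)                     # let the sensor settle
            bx, by, bz = map(float, sensor.readline().decode().split())
            writer.writerow([x, y, 1.5, bx, by, bz])
```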
§ REFERENCES

[1] N. Guo and M. C. Leu, Frontiers of Mechanical Engineering 8, 215 (2013).
[2] T. T. Wohlers, T. Caffrey, and R. I. Campbell, Wohlers Report 2016: 3D Printing and Additive Manufacturing State of the Industry Annual Worldwide Progress Report (Wohlers Associates, 2016).
[3] B. Ma, J. Herchenroeder, B. Smith, M. Suda, D. Brown, and Z. Chen, Journal of Magnetism and Magnetic Materials 239, 418 (2002).
[4] J. Ormerod and S. Constantinides, Journal of Applied Physics 81, 4816 (1997).
[5] K. Elian and H. Theuss, in Electronics System-Integration Technology Conference (ESTC), 2014 (2014), pp. 1–5.
[6] S. Eimeke, A. Gardocki, G. Ehrenstein, and D. Drummer, Journal of Plastics Technology 5 (2008).
[7] J. Gonzalez-Gutierrez, I. Duretek, C. Kukla, A. Poljšak, M. Bek, I. Emri, and C. Holzer, Metals 6, 129 (2016).
[8] C. Huber, C. Abert, F. Bruckner, M. Groenefeld, O. Muthsam, S. Schuschnigg, K. Sirak, R. Thanhoffer, I. Teliban, C. Vogler, R. Windl, and D. Suess, Applied Physics Letters 109 (2016).
[9] L. Li, A. Tirado, I. C. Nlebedim, O. Rios, B. Post, V. Kunc, R. R. Lowden, E. Lara-Curzio, R. Fredette, J. Ormerod, T. A. Lograsso, and M. P. Paranthaman, Scientific Reports 6, 36212 (2016).
[10] C. Burkhardt, in Pforzheimer Werkstofftag (2015).
[11] C. Abert, L. Exl, G. Selke, A. Drews, and T. Schrefl, Journal of Magnetism and Magnetic Materials 326, 176 (2013).
[12] D. R. Fredkin and T. R. Koehler, IEEE Transactions on Magnetics 26, 415 (1990).
[13] F. Bruckner, C. Abert, G. Wautischer, C. Huber, C. Vogler, M. Hinze, and D. Suess, ArXiv e-prints (2016), arXiv:1609.00060 [physics.comp-ph].
[14] A. Logg, Automated Solution of Differential Equations by the Finite Element Method, Lecture Notes in Computational Science and Engineering (Springer, 2013).
[15] S. W. Funke and P. E. Farrell, ArXiv e-prints (2013), arXiv:1302.3894.
[16] A. Tikhonov, A. Goncharsky, V. Stepanov, and A. G. Yagola, Numerical Methods for the Solution of Ill-Posed Problems, Mathematics and Its Applications (Springer, 1995).
[17] D. P. O'Leary, Society for Industrial and Applied Mathematics 14, 1287 (1993).
[18] A. Wächter and L. T. Biegler, Mathematical Programming 106, 25 (2006).
[19] M. Ortner, in 2015 9th International Conference on Sensing Technology (ICST) (2015), pp. 359–364.
[20] J. Schöberl, Computing and Visualization in Science 1, 41 (1997).
[21] J. Jackson, Classical Electrodynamics (Wiley, New York, 1999).
[22] R. Groesinger, Journal of Electrical Engineering 59, 15 (2008).
[23] F. Fiorillo, C. Beatrice, O. Bottauscio, and E. Patroi, IEEE Transactions on Magnetics 43, 3159 (2007).
[24] A. Aharoni, Journal of Applied Physics 83, 3432 (1998).
§ ACKNOWLEDGEMENT
The support from the CD-Laboratory AMSEN (financed by the Austrian Federal Ministry of Economy, Family and Youth, and the National Foundation for Research, Technology and Development) is acknowledged. The authors would like to thank Magnetfabrik Bonn GmbH for providing the compound material, and Montanuniversitaet Leoben for the extrusion of the filaments. The SEM images and sample preparations were carried out using facilities at the University Service Centre for Transmission Electron Microscopy, TU Vienna, Austria. The computational results presented have been achieved using the Vienna Scientific Cluster (VSC).
|
http://arxiv.org/abs/1701.08140v3 | 20170127182638 | Network classification with applications to brain connectomics | [
"Jesús D. Arroyo-Relión",
"Daniel Kessler",
"Elizaveta Levina",
"Stephan F. Taylor"
] | stat.ME | [
"stat.ME",
"stat.ML"
] |
Network classification
J. D. Arroyo Relión et al.

J. D. Arroyo Relión
Center for Imaging Science, Johns Hopkins University
Baltimore, Maryland 21218-2682, USA

D. Kessler and E. Levina
Department of Statistics, University of Michigan
Ann Arbor, Michigan 48109-1107, USA

S. F. Taylor
Department of Psychiatry, University of Michigan
Ann Arbor, Michigan 48109-1107, USA
While statistical analysis of a single network has received a lot of attention in recent years, with a focus on social networks, analysis of a sample of networks presents its own challenges which require a different set of analytic tools. Here we study the problem of classification of networks with labeled nodes, motivated by applications in neuroimaging. Brain networks are constructed from imaging data to represent functional connectivity between regions of the brain, and previous work has shown the potential of such networks to distinguish between various brain disorders, giving rise to a network classification problem. Existing approaches tend to either treat all edge weights as a long vector, ignoring the network structure, or focus on graph topology as represented by summary measures while ignoring the edge weights. Our goal is to design a classification method that uses both the individual edge information and the network structure of the data in a computationally efficient way, and that can produce a parsimonious and interpretable representation of differences in brain connectivity patterns between classes. We propose a graph classification method that uses edge weights as predictors but incorporates the network nature of the data via penalties that promote sparsity in the number of nodes, in addition to the usual sparsity penalties that encourage selection of edges. We implement the method via efficient convex optimization and provide a detailed analysis of data from two fMRI studies of schizophrenia.
graph classification, high-dimensional data, variable selection, fMRI data
§ INTRODUCTION
Network data analysis has received a lot of attention in recent literature, especially unsupervised analysis of a single network which is thought of as generated from an exchangeable random graph model, for example <cit.> and many others. This setting is applicable to a number of real life scenarios, such as social networks, but there are situations where the network nodes are labeled and therefore not exchangeable, and/or more than one network is available for analysis, which have received relatively less attention. Here we focus on the setting motivated by brain connectomics studies, where a sample of networks is available from multiple populations of interest (for example, mentally ill patients and healthy controls). In this setting, each unit in the population (e.g., a patient) is represented by their own network, and the nodes (brain regions of interest) are labeled and shared across all networks through a registration process that maps all individual brains onto a common atlas. There are many classical statistical inference questions one can ask in this setting, for example, how to compare different populations <cit.>. The question we focus on in this paper is a classification problem: given a training sample of networks with labeled nodes drawn from multiple classes, the goal is to learn the rules for predicting the class of a given network, and just as importantly, interpret these rules.
Network methods are a popular tool in the neuroscience literature <cit.>. A brain network represents connectivity between different locations of an individual's brain. How connectivity is defined varies with the type of imaging technology used and the conditions under which data were collected. In this paper, we focus on functional connectivity, which is a measure of statistical association between each pair of locations in the brain, constructed from functional magnetic resonance imaging (fMRI) data, although the methods we develop are applicable to any sample of weighted networks with labeled nodes. In fMRI studies, BOLD (blood oxygen-level dependent) signal, a known correlate of underlying neural activity, is measured at a sequence of time points at many spatial locations in the brain, known as voxels, resulting in a 4-dimensional data array, with three spatial dimensions and a time index. Brain networks constructed from fMRI data have been successfully used for various tasks, such as differentiating between certain illnesses, or between types of external stimuli <cit.>, and contain enough information to identify individual subjects <cit.>. Extensive statistical literature has focused on the analysis of raw fMRI data <cit.>, usually aiming to characterize brain activation patterns obtained from task-based fMRI experiments. In this paper, we focus on resting-state fMRI data, where no particular task is performed and subjects are free to think about anything they want. Thus registering the time dimension across different subjects is not possible. The connectivity network approach, which averages over the time dimension in computing a measure of dependence between different voxels, is thus a natural choice, and has been widely used with multiple types of neuroimaging data.
Two different datasets are analyzed in this paper, both of resting state fMRI studies on schizophrenic patients and healthy controls. One dataset, COBRE (54 schizophrenics and 70 controls), is publicly available <cit.>; another, which we will refer to as UMich data (39 schizophrenics and 40 controls), was collected internally in the last author's lab. Having two datasets on the same disease allows us to cross-check models trained on one of them for classification on the other to check the robustness of our approach. The raw data arrays undergo pre-processing and registration steps, discussed in detail in the Appendix <ref>, along with additional details on data collection. To construct a brain network from fMRI measurements, a set of nodes is chosen, typically corresponding to regions of interests (ROIs) from some predefined parcellation. In our analysis we use the parcellation of <cit.> (see Figure <ref>), which consists of 264 ROIs divided into 14 functional brain systems (in the COBRE data, node 75 is missing). A connectivity measure is then computed for every pair of nodes, resulting in an adjacency matrix of size 264× 264. Many choices of connectivity measures are available <cit.>; perhaps the most commonly used one is the Pearson correlation coefficient between locations, computed by averaging over the time dimension. It has been argued that partial correlations are a better measure of connectivity <cit.>, but the choice depends on the final goal of analysis. In this paper we follow the vast majority of the connectomics literature and measure connectivity on each individual by using marginal correlations between the corresponding time series (see Figure <ref>). The correlations are then further rank-transformed and standardized; see Appendix <ref> for details. These transformations are intended to deal with subject-to-subject variability and the global signal regression issue <cit.>, and although they lose some information, we observed that on our datasets classification based on standardized ranks of marginal correlations outperformed classification based on other connectivity measures, such as marginal correlations. The methods we develop here are applicable to networks that encode any type of connectivity measure.
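As an illustration, a rank-standardized connectivity matrix for one subject can be computed along the following lines. The exact standardization used in our analysis is described in the appendix; this sketch is a plausible stand-in rather than the precise pipeline.

```python
# Sketch of the connectivity construction for one subject: Pearson
# correlations between ROI time series, then a rank transform of the unique
# off-diagonal entries, standardized to [-1, 1].
import numpy as np
from scipy.stats import rankdata

def connectivity_matrix(ts):
    """ts: (T, N) array with T time points for N ROIs."""
    N = ts.shape[1]
    C = np.corrcoef(ts.T)                       # N x N marginal correlations
    iu = np.triu_indices(N, k=1)
    ranks = rankdata(C[iu])                     # rank-transform unique edges
    standardized = 2.0 * (ranks - 1) / (len(ranks) - 1) - 1.0
    A = np.zeros((N, N))
    A[iu] = standardized
    return A + A.T                              # symmetric, zero diagonal

A = connectivity_matrix(np.random.randn(150, 264))   # illustrative input
```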
The problem of graph classification has been studied previously in other contexts, with a substantial literature motivated by the problem of classification of chemical compounds <cit.>, where graphs represent the compound's molecular structure. This setting is very different, with small networks of about 20 nodes on average, binary or categorical edges recorded with no noise, and different nodes corresponding to different networks <cit.>.
Classification methods for chemical compounds are usually based on finding certain discriminative patterns in the graphs, like subgraphs or paths <cit.>, and using them as features for training a standard classification method <cit.>. Computationally, finding these patterns is only possible on small binary networks.
Another type of methods are based on graph kernels <cit.>, which define a similarity measure between two networks. These kernels combined with support vector machines (SVMs) have been successfully used on small networks <cit.>, but the curse of dimensionality makes local kernel methods unsuitable for large scale networks <cit.>. On our datasets, graph kernel methods did not perform better than random guessing.
In the context of classifying large-scale brain networks, two main approaches have been followed. One approach is to reduce the network to its global summary measures such as the average degree, clustering coefficient, or average path length <cit.>, and use those measures as features for training a classification method. Previous studies have reported significant differences on some of these network measures for groups of patients with certain brain diseases compared with healthy controls
<cit.>, suggesting their usefulness as diagnostic biomarkers. However, global summary statistics collapse all local network information, which can harm the accuracy of classification and do not allow to identify local differences. In our data analysis, a method based on the network measures suggested in <cit.> performed poorly for classification (see Section <ref>).
An alternative approach to classification of large networks is to treat edge weights as a “bag of features”, vectorizing the unique elements of the adjacency matrix and ignoring the network nature of the data. This approach can leverage many existing classification methods for vectors, and provides an interpretation at the edge level if variable selection is applied <cit.>. Spatial correlation between edges connecting neighboring nodes can be incorporated <cit.>, although the effectiveness of this regularization will depend on the parcellation used to define nodes. Alternatively, an individual test can be used for each edge to find significant differences between two populations, with a multiple testing correction and without constructing a classifier at all <cit.>. While these methods can deliver good predictions, their interpretability is limited to individual edge selection, which is less scientifically interesting than identifying differentiating nodes or regions, and they cannot account for network structure.
Taking the network structure into account can have benefits for both testing and classification settings. Some methods perform inference over groups of edges based on the community assignments of the nodes to which they are incident. For example, <cit.> introduced Network Contingency Analysis which begins with massive univariate testing at each edge, and then counts the number of superthreshold connections in each cell, a group of edges that connect nodes in two functional systems. Nonparametric methods are then used to conduct inference on the count statistic for each cell, with multiple comparison correction for inference at the cell level. Power can be improved by applying a network-based multiple testing dependence correction <cit.>. For classification, better interpretability and potentially accuracy can be obtained if we focus on understanding which brain regions or interactions between them are responsible for the differences. In somewhat related work, <cit.> proposed to look for a minimal set of nodes which best explains the difference, though that requires solving a combinatorial problem. Hypothesis testing on a type of graph average has also been proposed <cit.>. Bayesian nonparametrics approaches for modeling populations of networks allow to test for local edge differences between the groups <cit.>, but are computationally feasible only for small networks.
Our goal in this paper is to develop a high-dimensional network classifier that uses all the individual edge weights but also respects the network structure of the data and produces more interpretable results. To achieve this goal, we use structured sparsity penalties to incorporate the network information by penalizing both the number of edges and the number of nodes selected. Although our main application here is classification of brain connectivity networks, our methods are applicable to any weighted graphs with labeled nodes, and to general prediction problems, not just classification.
The rest of this paper is organized as follows. In Section <ref>, we introduce our classifier and the structured penalties. In Section <ref> we show how to efficiently solve the resulting convex optimization problem by a proximal algorithm, each step of which is a further optimization problem which we solve by the alternating direction method of multipliers (ADMM).
The performance of our method is evaluated and compared with other methods using simulations in
Section <ref>.
In Section <ref>, we analyze two brain connectivity datasets, each containing schizophrenic patients and healthy controls, and show that our regularization framework leads to state-of-the-art accuracy while providing interpretable results, some of which are consistent with previous findings and some are new. We conclude with a brief discussion in Section <ref>.
§ A FRAMEWORK FOR NODE SELECTION IN GRAPH CLASSIFICATION
§.§ A penalized graph classification approach
We start by setting up notation. All graphs we consider are defined on the same set of N labeled nodes.
A graph can be represented by its adjacency matrix A∈ℝ^N× N. We focus on graphs that are undirected (A_ij = A_ji) and contain no self-loops (A_ii = 0). These assumptions are not required for the derivations below, but they match the neuroimaging setting and simplify notation. Our goal is predicting a class label Y from the graph adjacency matrix A; in this paper we focus on the binary classification problem, where Y takes values in {-1,1}, although extensions from binary to multi-class classification or real-valued responses are straightforward. Throughout this paper, we use ‖·‖_p to denote the entry-wise ℓ_p norm, i.e., for a matrix A∈ℝ^m× n, ‖A‖_p=(∑_i=1^m∑_j=1^n|A_ij|^p)^1/p.
A standard general approach is to construct a linear classifier, which predicts the response Y from a linear combination of the elements of A, ⟨ A,B⟩ = B^TA, where we arrange the coefficients in a matrix B∈^N× N to emphasize the network nature of the predictors. We focus on linear classifiers here because variable selection is at least as important as prediction itself in the neuroimaging application, and setting coefficients to 0 is a natural way to achieve this. The coefficients are typically estimated from training data by minimizing an objective consisting of a loss function plus a penalty. The penalty can be used to regularize the problem to make the estimator well-defined in high-dimensional problems, to select important predictors, and to impose structure, and many such penalties have been proposed, starting from the lasso <cit.>. Our focus is on designing a classifier in this framework that respects and utilizes the network nature of the predictors. In brain networks in particular, neuroscientists believe that edges are organized in subnetworks, also called brain systems <cit.>, that carry out specific functions, and
certain subnetworks are important for prediction <cit.>, although different studies tend to implicate different regions <cit.>. Thus we aim to find nodes or subnetworks with good discriminative power, and hence select both the most informative nodes and edges.
Although the methods we develop here can be used on small networks, our main focus is on the more challenging case of medium to large brain networks. In brain connectivity studies dealing with multiple subjects, while raw images may contain hundreds of thousands of voxels, they are commonly down-sampled according to a parcellation scheme with a coarser resolution, usually resulting in networks with hundreds or thousands of nodes representing ROIs (see for example <cit.>). This coarser resolution is essential for registration, as aligning different brains at a high resolution is much harder, but it still results in hundreds of thousands or millions of edges which serve as predictor variables. Given the typical data sizes in this area of application, we focus on methods based on convex formulations, which allow for efficient and scalable implementations with convergence guarantees.
Let {(A^(1),Y_1),…,(A^(n),Y_n)} be the training sample of undirected adjacency matrices with their class labels,
and let Y=(Y_1,…,Y_n).
A generic linear classifier described above is computed by finding the coefficients B defined by
B̂ = argmin_B∈ℬ {ℓ(B) + Ω(B)},

where ℬ = {B∈ℝ^N× N : B=B^T, diag(B) = 0}, Ω is a penalty, and
ℓ(B) = 1/n∑_k=1^n ℓ̃(Y_k, A^(k); B )
is a loss function evaluated on the training data. Our methodology can accommodate different choices of loss functions that can extend beyond classification problems (e.g., least squares or generalized linear models). The optimization algorithm presented in Section <ref> can work with any convex and continuously differentiable loss function, and further assumptions are required for consistency (see Section <ref>). In this paper, for the purpose of classification we use the logistic loss function in the simulations and data analysis, which is defined as
ℓ̃(Y_k, A^(k); B, b) = log{1+exp(-Y_k(⟨ A^(k), B⟩+b))} .
The threshold b is an additional parameter to be estimated.
To capture structural assumptions on important predictive edges, we focus on convex structured sparsity penalties <cit.> that encourage a small number of active nodes, by which we mean nodes attached to at least one edge with a non-zero coefficient. One approach to finding a set of such nodes was proposed by <cit.>, who called it a signal-subgraph, and proposed finding the minimal set of nodes (called signal vertices) which together are incident to all selected edges (but not every node connected to a selected edge is a signal vertex). Finding this set is a combinatorial optimization problem, and the set is not always uniquely defined. Instead, we focus on convex formulations that allow for efficient computation and encourage small active node sets indirectly.
Other convex penalties have been used for fMRI data as a way to enforce spatial smoothness in the solution <cit.>. These methods assume that voxels are equally spaced in the brain, and neighboring voxels are highly correlated. In particular, <cit.> proposed penalties for brain network classification using these spatial assumptions. Here, instead of enforcing a spatial regularization directly, we aim for a regularization that can be applied to any type of network data, and in particular to brain networks with coarse and/or uneven parcellations where enforcing spatial smoothness may not work as well. In any case, the flexibility of convex optimization algorithms allows one to easily incorporate additional spatially-informed penalties if needed.
§.§ Selecting nodes and edges through group lasso
To reflect the network structure of the predictors, we use a penalty that promotes a sparse classifier not only in the number of edges used, but also in the number of nodes. The group lasso penalty <cit.> is designed to eliminate a group of variables simultaneously. Here we penalize the number of active nodes by treating all edges connected to one node as a group. Then eliminating this group (a row of coefficients in the matrix B) is equivalent to de-activating a node. The group penalty is defined as
Ω_λ,ρ(B) = λ(∑_i=1^N ‖B_(i)‖_2 + ρ‖B‖_1),
where B_(i) denotes the vector of edge weights incident to the i-th node (or equivalently, the i-th row or column of B), and λ, ρ≥ 0 are tuning parameters. Note that the constraint B=B^T makes the groups overlap, since a coefficient B_ij belongs to groups associated with the nodes i and j, and therefore, the edge between nodes i and j would be selected only if both nodes are activated. The second term in the penalty ρB_1 acts as the usual lasso penalty to promote sparsity inside the group <cit.>, allowing to select a subset of edges for an active node. Due to the overlap in the groups, this lasso penalty is usually necessary in order to produce sparse solution (see Proposition <ref>). The constraint diag(B) = 0 in (<ref>) is automatically enforced with this formulation.
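For concreteness, the penalty can be evaluated in a few lines; the sketch below assumes a symmetric B with zero diagonal, as required by (<ref>).

```python
# Sketch of the penalty Omega_{lambda, rho}(B): row-wise group norms plus an
# elementwise lasso term, for a symmetric coefficient matrix.
import numpy as np

def omega_penalty(B, lam, rho):
    group_term = np.sum(np.sqrt(np.sum(B**2, axis=1)))  # sum_i ||B_(i)||_2
    lasso_term = np.sum(np.abs(B))                      # ||B||_1
    return lam * (group_term + rho * lasso_term)
```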
An alternative to the constraint in the problem (<ref>) is to optimize over the set
ℬ̃ = {B∈ℝ^N× N : diag(B) = 0}.
Without the symmetry constraint and assuming undirected graphs, the penalty (<ref>) is equivalent to the overlapping group lasso formulation of <cit.>. This formulation has some advantages. Since it gives group lasso without overlaps, the lasso penalty ρB_1 is not required to obtain sparse solutions, and more efficient optimization algorithms exist for this case. This approach would loosely correspond to the idea of selecting signal nodes as in <cit.>, in the sense that an edge can be selected if at least one of its nodes is selected, and the second node could be inactive. The downside is that each edge now corresponds to two different coefficients B_ij and B_ji, the problem encountered by all variable selection methods that ignore symmetry, such as <cit.>. The standard solution for this problem, as suggested by <cit.>, is to take the average of the coefficients.
Intuitively, one would expect that the formulation using would be better when the significant edges are incident to a small set of nodes, since both nodes have to be active for an edge to be selected, while using may be better when for some nodes most of their edges are significant, creating “significant hubs”. Since in our application we are primarily looking for discriminative brain subnetworks, we focus on the symmetrically constrained formulation for the rest of the paper. We also found that in practice this second formulation results in less accurate classifiers for the neuroimaging data discussed in Section <ref>.
The analogue to (<ref>) for directed graphs would assign coefficients B_ij and B_ji to the same group, resulting in the penalty
Ψ_λ,ρ(B) = λ(∑_i=1^N √(∑_j(B_ij^2+B_ji^2)) + ρ‖B‖_1),
where B∈ℬ̃. Alternatively, we can also use the formulation of Remark <ref>, by replicating the variables and estimating two matrices of coefficients, say B^(1) and B^(2), with the penalty
Ψ̃_λ,ρ(B^(1),B^(2)) = λ[∑_i=1^N √(∑_j{(B^(1)_ij)^2+(B^(2)_ji)^2}) + ρ(‖B^(1)‖_1 + ‖B^(2)‖_1)],
with B^(1),B^(2)∈ℬ̃, and set the coefficients matrix to B=(B^(1) + B^(2))/2. This formulation will again not directly select subnetworks as discussed in Remark <ref>.
Finally, for numerical stability we add an extra ridge penalty term γ/2‖B‖_F^2 = γ/2 Tr(B^TB), with γ a small, fixed constant. There are several benefits to combining ridge and lasso penalties (see for example <cit.>). The parameter γ could potentially be treated as an additional tuning parameter, but here we only use a small fixed constant γ in order to avoid numerically degenerate solutions. In practice, the results are not sensitive to the exact value of γ.
Putting everything together, to fit our graph classifier, we solve the problem
(B̂,b̂) = argmin_{B∈ℬ, b∈ℝ} {1/n∑_k=1^n log(1+exp(-Y_k(⟨ B,A^(k)⟩+b))) + γ/2‖B‖_F^2 + λ(∑_i=1^N ‖B_(i)‖_2 + ρ‖B‖_1)}
for given values of λ and ρ, which will be chosen by cross-validation.
§ THE OPTIMIZATION ALGORITHM
Our optimization algorithm to solve the problem (<ref>) combines two common approaches to convex optimization: proximal algorithms and alternating direction method of multipliers (ADMM). We use an accelerated version of the proximal algorithm <cit.> to solve the main problem (<ref>). In each step, we need to calculate a proximal operator, which is a further convex optimization problem solved with the ADMM algorithm.
The main optimization difficulty comes from the overlapping groups. Some algorithms have been proposed for this case, including a subgradient descent method <cit.>, which has a slow rate of convergence, or proximal algorithms based on smoothing the original problem <cit.>. Although smoothing yields fast algorithms, it is not clear that the sparsity pattern is preserved with those approximations. We follow an approach similar to <cit.> and <cit.>, but solve the proximal operator for the penalty (<ref>) directly using the ADMM method. This can potentially give a more accurate sparsity pattern, and the flexibility of the algorithm allows for additional penalties if desired, such as spatial smoothing similar to <cit.> (see Remark <ref>).
The main problem (<ref>) is solved with a proximal algorithm (see <cit.>). Recall that the proximal operator of a function f is defined as prox_f(v) = argmin_x {f(x)+1/2‖x-v‖_2^2}. Starting with an initial value B^(0)∈ℝ^N× N, a proximal algorithm solves the optimization problem (<ref>)
by iteratively calculating the proximal operator of Ω=Ω_λ,ρ for a descent direction of the differentiable loss function ℓ. We use an accelerated version of the algorithm <cit.>, which for each k=2,…, until convergence, performs the updates
W^(k) = B^(k-1) + (k-1)/(k+2) (B^(k-1) - B^(k-2))
B^(k) = prox_{t^(k)Ω}{W^(k) - t^(k)∇ℓ(W^(k))}
      = argmin_{B∈ℬ} {1/2‖B-(W^(k)-t^(k)∇ℓ(W^(k)))‖_2^2 + t^(k)Ω(B)},
where ∇ℓ(W)∈ℝ^N× N is the gradient of the loss function ℓ at W and {t^(k)} is a sequence of positive values. If ∇ℓ is Lipschitz continuous, with L its Lipschitz constant, the sequence of values ℓ(B^(k))+Ω(B^(k)) converges to the optimal value at rate O(1/k^2) if t^(k)∈(0,1/L]. The value of t^(k) can be chosen using a backtracking search <cit.>, which decreases this value until the condition
ℓ(B^(k)) ≤ℓ(W^(k)) + ⟨∇ℓ(W^(k)), B^(k)-W^(k)⟩ + 1/2t^(k)B^(k)-W^(k)_2^2
is satisfied. This procedure ensures that step sizes {t^(k)} become smaller as the algorithm progresses, until t^(k)<1/L. In practice, L might be large, which can make the algorithm slow to converge. It has been observed in other sparse high-dimensional problems that search strategies for t^(k) which allow for t^(k) > 1/L when appropriate can actually speed up convergence <cit.>.
We use a strategy of this type, allowing t^(k) to increase by a factor of α≥ 1 if the relative improvement in the loss function on iteration k becomes small. We observed that this strategy significantly reduces the number of iterations until convergence. The entire procedure is summarized in Algorithm <ref> in Appendix <ref>.
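A compact sketch of this scheme is given below. The proximal operator is passed in as a black box (in our case it is itself computed by the ADMM described next), and the stalling test used to trigger the step size increase is illustrative.

```python
# Sketch of the accelerated proximal gradient loop with backtracking and an
# adaptive step-size increase; loss/grad are the (logistic) loss and its
# gradient, prox(Z, t) computes the proximal operator of t*Omega at Z.
import numpy as np

def accelerated_prox(loss, grad, prox, B0, t0=1.0, beta=0.5, alpha=1.1,
                     max_iter=500, tol=1e-6):
    B_prev, B = B0.copy(), B0.copy()
    t = t0
    for k in range(2, max_iter + 2):
        W = B + (k - 1.0) / (k + 2.0) * (B - B_prev)    # momentum step
        g = grad(W)
        while True:                                     # backtracking on t
            B_new = prox(W - t * g, t)
            diff = B_new - W
            if loss(B_new) <= loss(W) + np.sum(g * diff) + \
                    np.sum(diff**2) / (2 * t):
                break
            t *= beta
        if abs(loss(B_new) - loss(B)) < tol * max(abs(loss(B)), 1.0):
            t *= alpha        # allow larger steps when progress stalls
        B_prev, B = B, B_new
    return B
```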
The logistic loss function of (<ref>) has an extra parameter b. Rather than including it as an unpenalized coefficient for a constant covariate, we use block coordinate descent and solve for b separately. This is convenient because the threshold b and the matrix of coefficients B may not be on the same scale. Thus, b can be updated by solving b^(k+1) = argmin_{b∈ℝ} ℓ(B^(k),b), which is easy to compute via Newton's method.
The proximal algorithm requires solving the proximal operator (<ref>), which has no closed form solution for the penalty (<ref>) under the symmetry constraint. Strategies based on smoothing this penalty have been proposed <cit.>. However, to allow for variable selection which results from non-differentiability of the penalty, we aim to solve the proximal operator directly using ADMM (see <cit.> for a review). Note that if the symmetric constraint is relaxed as in Remark <ref>, the proximal operator has a closed form solution (see Remark <ref>).
The ADMM works by introducing additional constraints and performing coordinate descent on the corresponding augmented Lagrangian. Setting Z=W^(k)-t^(k)∇ℓ(W^(k)) and t=t^(k), and introducing the variables Q,R∈ℝ^N× N, we can formulate (<ref>) as the convex optimization problem

min_{B̃,Q,R} 1/2‖B̃-Z‖_2^2 + tλ(∑_i=1^N ‖Q_(i)‖_2 + ρ‖R‖_1)
subject to B̃ = Q, B̃ = R, B̃ = B̃^T, diag(B̃) = 0.
The ADMM algorithm introduces the multipliers U,V∈ℝ^N× N and a penalty parameter μ>0 to perform gradient descent on the Lagrangian of (<ref>), given by ℒ_μ=ℒ_μ(B̃,Q,R,U,V) as

ℒ_μ = 1/2‖B̃-Z‖_2^2 + tλ(∑_i=1^N ‖Q_(i)‖_2 + ρ‖R‖_1) + ⟨ U, B̃-Q⟩ + ⟨ V, B̃-R⟩ + μ/2(‖B̃-Q‖_2^2 + ‖B̃-R‖_2^2 + ‖B̃-B̃^T‖_2^2).
The value μ controls the gap between dual and primal feasibility. In practice, we observed that setting μ = 0.1 gives a good balance between primal and dual feasibility, although other self-tuning methods are available <cit.>. This function is optimized by coordinate descent, with each variable updated to minimize the value of ℒ_μ while all the other variables are held fixed. These updates have closed forms; the detailed steps of the ADMM are shown in Algorithm <ref> in Appendix <ref>. The steps are performed until the algorithm converges within a tolerance ϵ > 0.
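For example, the updates of Q and R are the standard group and elementwise soft-thresholding operators; a sketch follows, with variable names matching the Lagrangian above and a small constant guarding against division by zero.

```python
# Sketch of two of the closed-form ADMM updates inside the proximal step:
# Q is a row-wise group soft-threshold, R an elementwise soft-threshold.
import numpy as np

def update_Q(B_tilde, U, mu, t, lam):
    X = B_tilde + U / mu                       # target of the group prox
    norms = np.sqrt(np.sum(X**2, axis=1, keepdims=True))
    shrink = np.maximum(1.0 - (t * lam / mu) / np.maximum(norms, 1e-12), 0.0)
    return shrink * X                          # row-wise group shrinkage

def update_R(B_tilde, V, mu, t, lam, rho):
    X = B_tilde + V / mu
    thr = t * lam * rho / mu
    return np.sign(X) * np.maximum(np.abs(X) - thr, 0.0)  # soft-threshold
```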
Note that the ADMM is performed in each iteration of the proximal algorithm to solve (<ref>), and thus the tolerance ϵ can be decreased as the algorithm progresses. On the other hand, performing only one iteration of Algorithm <ref> gives an algorithm similar to the one of <cit.>.
The ADMM makes it very easy to incorporate additional penalties. If Ψ is a new penalty, we can rewrite (<ref>) by introducing an additional parameter Q̃ so it becomes

min_{B̃,Q,Q̃,R} 1/2‖B̃-Z^(k)‖_2^2 + tλ(∑_i=1^N ‖Q_(i)‖_2 + ρ‖R‖_1) + tΨ(Q̃)
subject to B̃ = Q, B̃ = Q̃, B̃ = R, B̃ = B̃^T, diag(B̃) = 0.
We can obtain the Lagrangian formulation (<ref>) in a similar manner, and include new parameters in the ADMM updates, which can be performed efficiently as long as the proximal operator of Ψ has a closed form solution. This is in fact the case for some other penalties of interest, such as the GraphNet penalty <cit.>, which can incorporate spatial location information.
The alternative formulation of the graph penalty given in Remark <ref> corresponds to the standard sparse group lasso <cit.>. In particular, we can still employ the proximal algorithms (<ref>) and (<ref>), but instead optimize over the set ℬ̃. Without the symmetry constraint on B, the overlap in the group lasso penalty disappears, and this vastly simplifies the problem. Using Theorem 1 of <cit.>, the update for B^(k) has a closed form solution given by

Y^(k) = B^(k-1) + (k-2)/k (B^(k-1) - B^(k-2))
Z^(k)_ij = (1 - λρ/|Y^(k)_ij - t_k∇_ijℓ(Y^(k))|)_+ (Y^(k)_ij - t_k∇_ijℓ(Y^(k)))
B^(k)_(i) = (1 - λ/‖Z^(k)_(i)‖_2)_+ Z^(k)_(i), i∈[N].
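These updates amount to an elementwise soft-threshold followed by a row-wise group shrinkage; a sketch is given below, with a small constant guarding against zero row norms.

```python
# Sketch of the closed-form sparse group lasso update (formulation without
# the symmetry constraint), applied to a gradient step from the momentum
# iterate Y.
import numpy as np

def sparse_group_step(Y, grad_Y, t, lam, rho):
    G = Y - t * grad_Y                                     # gradient step
    Z = np.sign(G) * np.maximum(np.abs(G) - lam * rho, 0)  # lasso shrinkage
    norms = np.sqrt(np.sum(Z**2, axis=1, keepdims=True))
    B = np.maximum(1 - lam / np.maximum(norms, 1e-12), 0) * Z
    np.fill_diagonal(B, 0.0)                               # diag(B) = 0
    return B
```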
§ THEORY
In this section, we show that the solution of the penalized problem (<ref>) can recover the correct subgraph corresponding to the set of non-zero coefficients, and give its rates of convergence. The theory is a consequence of the results of <cit.> for establishing model selection consistency of regularized M-estimators under geometric decomposability (see Appendix for details). We present explicit conditions for our penalty to work well, which depend on the data as well as the tuning parameters.
Let B^⋆⊂^N× N be the unknown parameter we seek to estimate, and we assume there is a set of active nodes 𝒢⊂[N] with |𝒢|=G, so that B^⋆_ij=0 if i∈𝒢^C or j∈𝒢^C. We allow some edge weights inside the subgraph defined by 𝒢 to be zero, but we focus on whether the set 𝒢 is correctly estimated by the set 𝒢̂ of active nodes in B̂. Denote by ℳ⊆^N× N the set of matrices where the only non-zero coefficients appear in the active subgraph, that is,
ℳ={B∈^N× N : B_ij=0 for all i∈𝒢^C or j∈𝒢^C, B=B^T}.
There are two main assumptions on the loss function ℓ required for consistent selection in high-dimensional models <cit.>. The first assumption is on the convexity of the loss function around B^⋆, while the second assumption bounds the size of the entries in the loss Hessian between the variables in the active subgraph and the rest. Let the loss Hessian ∇^2ℓ(B^⋆)∈^N× N⊗^N× N be defined by
∇^2_(i,j),(k,l)ℓ(B) = ∂^2ℓ(B)/∂ B_ij∂ B_kl,
and define the matrix H_(i,j),𝒢∈^G× G with (i,j)∈(𝒢×𝒢)^C such that
(H_(i,j),𝒢)_k,l = Tr{(∇^2_(i,j),(𝒢,𝒢)ℓ(B^⋆)) Λ_(k,l),(·,·)}, 1≤ k,l≤ G,
where Λ∈^G× G⊗^G× G is a tensor such that Mat(Λ) is a pseudoinverse of Mat(∇^2_(𝒢,𝒢),(𝒢,𝒢)ℓ(B^⋆)), and Mat is the operation that unfolds the entries of a tensor Λ into a G^2× G^2 matrix. The matrix H_(i,j),𝒢 measures how well the variable corresponding to the edge (i,j) can be represented by the variables in the active subgraph.
Assumption 1 (Restricted Strong Convexity). There exists a set C⊂^N× N with B^⋆∈ C, and constants m>0, L̃<∞ such that
∑_i,jΔ_i,jTr{(∇^2_(i,j),(·,·)ℓ(B))Δ}≥ m Δ_2^2, ∀ B∈ C∩ℳ, Δ∈ C∩ℳ,
∇^2ℓ(B) - ∇^2ℓ(B^⋆)_2 ≤L̃B-B^⋆_2, ∀ B∈ C.
Assumption 2 (Irrepresentability). There exists a constant 0<τ<1 such that
max_i∈𝒢^C(∑_k=1^G(H_(i,j),𝒢)_k·_2)_j=1^N_2 = 1-τ < 1.
This version of the irrepresentability condition corresponds to the one usually employed in group lasso penalties <cit.>, but as we will see later, due to overlaps in the groups it further requires a lower bound on ρ to work for model selection.
The first two assumptions are stated directly as a function of the loss for a fixed design case, but they can be substituted with bounds in probability for the case of random designs. In order to obtain rates of convergence, we do require a distributional assumption on the first derivative of the loss. This assumption can be substituted with a bound on max_i∇ℓ(B^⋆)_(i)_2 (see Lemma <ref> in the Appendix).
Assumption 3 (Sub-Gaussian score function). Each pair in the sample (A,Y) is independent and comes from a distribution such that the entries of the matrix ∇ℓ̃(Y,A;B^⋆) are sub-Gaussian. That is, there is a constant σ^2>0 such that for all t>0
max_i,j P(|∇_ijℓ̃(Y, A; B^⋆)| > t) ≤ 2exp(-t^2/σ^2) .
With these assumptions, we establish consistency and correct model selection. The proof is given in the Appendix.
Proposition 1. Suppose Assumptions <ref> and <ref> hold.
(a) Setting the penalty parameters as λ = c_1σ√(log N/n)min{√(N)/1+ρ, 1/ρ} and ρ≥ 0 for some constant c_1>0, with probability at least 1-2/N the optimal solution of (<ref>) is unique and satisfies
B̂-B^⋆_2 = O_P(σ N√(log N/n)).
(b) Suppose Assumption <ref> also holds. If n> c_2G^2σ^2log N for a constant c_2>0, setting the penalty parameters as λ = c_3σ√(log N/n)min{√(N)/1+ρ, 1/ρ} for some constant c_3>0, and
ρ > 1/τ - 1/√(G) ,
then
B̂-B^⋆_2 = O_P(σ G√(log N/n)),
P(𝒢̂⊆𝒢) ≥ 1-2/N.
The part of the penalty associated with ρ causes the solution to be sparse. Due to the overlap in the groups, a small value of ρ will usually not result in zeros in the solution of the problem (<ref>). The lower bound on ρ in (<ref>) ensures that the irrepresentability condition of <cit.> holds (see Lemma <ref> in the Appendix).
Proposition <ref> ensures that, with high probability, all edges estimated to have non-zero weights are contained in the active subgraph, and quantifies the error in estimating the entries of B^∗. To ensure that all active nodes are recovered, at least one edge corresponding to each active node needs to have a non-zero weight. A similar result can be obtained to guarantee recovery of all active nodes under a stronger assumption that the magnitude of the non-zero entries of B^⋆ is bounded below by |B_ij^⋆|>c_4 G^2λ for a constant c_4. The condition in part (b) requires a sample size that grows faster than the size of the active subgraph, which in practice can be much smaller than the size of the network, making the method suitable for our applications in which the sample size is limited and the number of nodes is large.
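For concreteness, the tuning choices suggested by the proposition can be written as a small helper; this is a hypothetical utility in which c stands for the unspecified constants (c_1 or c_3), and τ, σ would have to be estimated or treated as tuning inputs in practice.

```python
import numpy as np

def theory_tuning(N, n, G, tau, sigma, c=1.0):
    """Penalty parameters suggested by the consistency theory (up to constants)."""
    rho = 1.0 / tau - 1.0 / np.sqrt(G)      # lower bound needed for node selection
    lam = c * sigma * np.sqrt(np.log(N) / n) * min(np.sqrt(N) / (1.0 + rho), 1.0 / rho)
    return lam, rho
```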
§ NUMERICAL RESULTS ON SIMULATED NETWORKS
In this section, we evaluate the performance of our method on synthetic networks, assessing its ability to correctly identify predictive edges and nodes, its classification accuracy, and its performance relative to benchmark methods. Edge selection performance is measured by the area under the ROC curve (AUC).
Brain connectomic networks are characterized by organization of nodes into communities <cit.>, in which nodes within the same community tend to have stronger connections than nodes belonging to different communities. In order to generate synthetic networks that mimic this property, we introduce community structure using the stochastic block model (SBM) <cit.>. Before generating edges, we assign each node a community label, C_i, where C_i∈[K] for each i∈[N]. The node assignments are the same for all networks in the population. Given the community labels, network edges are generated independently from a distribution that only depends on the community labels of the nodes associated with each edge. Since fMRI networks are real-valued networks, we generate edge weights from a Gaussian distribution, rather than the standard Bernoulli distribution normally used with the SBM. Specifically, we draw each A_ij independently from N(μ_C_iC_j, σ^2), with μ∈R^K× K defined by
μ_kl = 0.3 if k=l, and μ_kl = 0.1 if k≠l,
and σ^2=0.18. These values were chosen to approximately match the distribution of edge weights in our datasets (see Section <ref>). We set the number of nodes N = 300, with K=12 communities of size 25 each. We work with undirected networks, so the adjacency matrices are symmetric, with 44,850 distinct edges. Although our method is able to scale to larger networks, this moderately sized setting is already highly computationally demanding for many of the comparison benchmarks.
To set up two different class distributions, we select a set of active nodes 𝒢 first, defined by the nodes corresponding to some communities selected at random.
Then, we alter a set of differentiating edges ℰ selected at random from 𝒢×𝒢 with probability p. For each edge (i,j)∈ℰ, the distribution in class Y=-1 is N(μ_C_iC_j, σ^2), while the distribution in class Y=1 is changed to N(0.2, σ^2). Figure <ref> shows example expected adjacency matrices for each class. We then generate 50 networks from each class, resulting in a sample size of n=100. We vary G=|𝒢| by changing the number of communities selected, and the value of p, to study the effect of the number of active nodes and the density of differentiating edges inside a subgraph. The number of communities selected is set to 1 (|𝒢|=25), 2 (|𝒢|=50) and all communities (|𝒢|=300); note that in the last scenario all nodes are active and hence the node structure is not informative at all.
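The generative process just described can be summarized in a short script; this is a schematic re-implementation under the stated parameters, with function and variable names chosen for illustration.

```python
import numpy as np

def simulate_networks(N=300, K=12, sigma2=0.18, n_per_class=50,
                      n_active_comms=1, p=0.1, seed=0):
    """Two-class Gaussian SBM: within-community mean 0.3, between 0.1;
    differentiating edges inside the active subgraph get mean 0.2 in class 1."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(K), N // K)                 # community labels
    M = np.where(labels[:, None] == labels[None, :], 0.3, 0.1)
    active = np.isin(labels, np.arange(n_active_comms))      # active nodes G
    cand = np.triu(active[:, None] & active[None, :], k=1)
    diff = cand & (rng.random((N, N)) < p)                   # differentiating edges
    M1 = M.copy(); M1[diff | diff.T] = 0.2                   # class Y=1 means
    def draw(mean):
        A = np.triu(rng.normal(mean, np.sqrt(sigma2)), k=1)
        return A + A.T                                       # symmetric, zero diagonal
    X = [draw(M) for _ in range(n_per_class)] + [draw(M1) for _ in range(n_per_class)]
    y = np.array([-1] * n_per_class + [1] * n_per_class)
    return np.asarray(X), y, diff | diff.T
```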
Since we are interested in identifying predictive edges and nodes, we use the AUC of the receiver operating characteristic (ROC) curve for both edge and node selection. For each method, we calculate the ROC curve by varying its corresponding sparsity parameter to change the number of edges selected. For a selection method ℳ and a sparsity parameter η, let ℰ̂(ℳ, η) be the set of edges selected by ℳ, and 𝒢̂(ℳ, η) the set of active nodes corresponding to ℰ̂(ℳ, η). We calculate the edge false positive rate (EFPR) and edge true positive rate (ETPR) as
EFPR(ℳ, η) = |ℰ̂(ℳ, η)∩ℰ^C |/|ℰ^C| , ETPR(ℳ, η) = |ℰ̂(ℳ, η)∩ℰ|/|ℰ| .
The node FPR and TPR are calculated similarly.
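Given boolean masks of selected and true differentiating edges, these rates take a few lines (an illustrative helper restricted to the upper triangle, since the networks are undirected):

```python
import numpy as np

def edge_rates(selected, true_edges):
    """Edge false/true positive rates from boolean N x N masks."""
    iu = np.triu_indices(selected.shape[0], k=1)
    s, e = selected[iu], true_edges[iu]
    efpr = np.sum(s & ~e) / max(np.sum(~e), 1)
    etpr = np.sum(s & e) / max(np.sum(e), 1)
    return efpr, etpr
```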
We also evaluate the prediction accuracy of the methods. For each method, we use 5-fold cross-validation to select the best tuning parameter using the training data, and then compute the test error on a different dataset simulated under the same settings. The AUC and test errors reported are averaged over 30 replications.
Methods for benchmark comparisons on simulated networks were selected based on their good performance on real data (see Section <ref>). For our method (GC), we vary the parameter ρ and compare results for two different values of λ, .05 (GC1) and 10^-4 (GC2). For unstructured regularized logistic regression, we use the elastic net <cit.>, with a fixed α=0.02 (ENet). The performance of elastic net is not very sensitive to different values of α, but the number of variables that the method is able to select with large values is limited (including the case of α=1 that corresponds to the lasso). A support vector machine with ℓ_1 penalty <cit.> is also included (SVML1) for comparison, and additionally we evaluate the classification error of the original support vector machines (SVM) <cit.>. For both SVMs, we use linear kernels, which performed better than nonlinear ones. We also consider an independent screening method for variable selection based on the two sample t-statistic (T-stat). Finally, we also compare with the signal-subgraph method (SS) <cit.> which is the only other method that takes into account the network structure of the predictor variables. Note that the signal subgraph is designed for binary networks, so in order to apply it we thresholded each edge at the population mean. For each method, we fit 10 different tuning parameters to change the sparsity of the solution.
Figure <ref> shows the values of the average AUC for selecting edges (top) and nodes (bottom). For G=25 and 50,
as the proportion of differentiating edges in the active subgraph increases, methods that take into account network structure (GC1, GC2 and SS) slightly improve their edge AUC, since enforcing node selection also results in better edge selection, while the edge AUC remains constant for unstructured methods (ENet, T-stat and SVML1). On node selection, all methods improve the node AUC as the fraction of significant edges increases, but GC and SS have the largest gains. A similar trend is observed in classification error shown in Figure <ref>. All methods improve as the proportion of differentiating edges increases, but our method has the best performance overall. Our method performed the best with the larger value of λ (GC1) on variable selection, particularly when the set of active nodes is smaller, but both values of λ give very good classification performance. In the last scenario (G=300), all nodes are active so the node AUC is undefined, and the node structure is not informative at all. Although the performance of our method is no longer the best, it performs comparably to state of the art methods that do not use network structure.
In terms of computing time, since there are many contributing factors including the software choice for implementation and the tuning parameters, a fair comparison is difficult. We can roughly say that elastic net is the fastest, taking about a minute to run a cross-validation instance, while our method takes about 10 minutes on average, and the signal-subgraph takes more than an hour.
§ APPLICATION TO SCHIZOPHRENIA DATA
We analyze the performance of the classifier on the two different brain fMRI datasets previously described in Section <ref>.
The code of our classifier and the processed connectomics datasets are available at <https://github.com/jesusdaniel/graphclass>.
§.§ Classification results
First, we evaluate our method's classification accuracy. We use a nested 10-fold cross-validation to choose tuning parameters and estimate the test accuracy. The classifier is trained for a range of values of λ and ρ, with λ∈{10^-7, 10^-6.5, …, 10^-2} and ρ∈{10^-3, 10^-2.5, …, 10^2}. The value of γ in (<ref>) is set to 10^-5; we observed that setting γ to a small value speeds up convergence without affecting the accuracy or sparsity of the solution. Figure <ref> shows the average cross-validated accuracy, sparsity (fraction of zero coefficients) and node sparsity (fraction of inactive nodes), as a heat map over the grid of tuning parameter values. We observe that λ has little influence on sparsity, which is primarily controlled by ρ. Moreover, as Proposition <ref> suggests, values of ρ < 1 do not result in node selection. As expected, accuracy generally decreases as the solution becomes sparser, which is not uncommon in high-dimensional settings <cit.>. However, we can still achieve excellent accuracy with a substantially reduced set of features. In the COBRE dataset, the best accuracy is obtained with only 1886 edges (5.4%) but almost all nodes are active (260). On the UMich data, 29733 edges (85.6%) achieve the best performance, and all nodes are active. Choosing parameters by cross-validation often tends to include too many noise variables <cit.>, as we also observed in simulations. A commonly used technique to report solutions that still achieve good accuracy with a substantially reduced set of features is the so-called “one-standard-error rule" <cit.>, in which one selects the most parsimonious classifier with cross-validation accuracy at most one standard error away from the best cross-validation accuracy. Figure <ref> shows the solutions for each dataset obtained by this rule. Nodes are ordered by brain systems (see Figure <ref>). The fitted solution for COBRE has 549 non-zero coefficients (1.56%) and 217 active nodes (82.5%), while the UMich solution has 11748 non-zero entries (33.8%), and all nodes are active. Note that when many variables are selected, the magnitude of the coefficients becomes small due to the grouping effect of the penalty <cit.>.
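Schematically, the one-standard-error rule amounts to picking, among all points of the tuning grid whose cross-validated accuracy is within one standard error of the best, the one that selects the fewest variables (a hypothetical helper operating on precomputed CV summaries):

```python
import numpy as np

def one_se_rule(cv_acc, cv_se, n_selected):
    """Index of the sparsest model within one SE of the best CV accuracy."""
    cv_acc, n_selected = np.asarray(cv_acc), np.asarray(n_selected)
    best = np.argmax(cv_acc)
    eligible = np.where(cv_acc >= cv_acc[best] - cv_se[best])[0]
    return eligible[np.argmin(n_selected[eligible])]
```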
We also compared our method to benchmarks (Table <ref>), using the same methods as in the previous section and training and evaluating all methods with the same nested 10-fold cross-validation. For SVM, we tested different kernels, including graph-aware kernels <cit.>, but in most cases local kernel methods were no better than random guessing. We additionally included random forests and a method based on global and local network summaries previously proposed as features for classifying brain data <cit.>. For the latter, because our dataset is much larger, we only considered global and node features proposed in <cit.>, which resulted in about 30,000 features per individual, and omitted edge features. <cit.> evaluated their classifiers on a different parcellation of the COBRE data, and we do not include their methods since they are based on the assumption of equally spaced nodes and cannot be directly applied to our data. Their reported accuracy of 71.9% and 73.5% for the COBRE data is substantially lower than our method, although the results are not directly comparable.
Results in Table <ref> show that most methods performed better on the COBRE dataset than on the UMich dataset, which can be partially explained by the different sample sizes and possibly noise levels. Besides differences in sample size and demographic characteristics (Table <ref>), the COBRE dataset is more homogeneous as it was collected using identical acquisition parameters, whereas the UMich dataset was pooled across five different experiments spanning seven years.
Our method performs very well on both datasets, particularly among methods that do variable selection. SVMs, which use the hinge loss, perform well too, and generally outperform methods using the logistic loss. Our penalty can be combined with any loss, so we could also combine it with the hinge loss, which might potentially improve classification accuracy, but we do not pursue this direction for two reasons: first, our method already performs comparably to SVM + L1 (better on COBRE, slightly worse on UMich, with the difference within noise levels); and second, solutions based on the logistic loss are generally considered more stable and preferable for variable selection <cit.>. In Figure <ref>, we plot cross-validated classification accuracy of these methods as a function of the number of variables selected. For the COBRE data, as we have observed before, good accuracy can be achieved with a fairly small number of edges, and the noisier UMich data requires more edges. In all cases, our method uses fewer nodes than the others, as it is designed to do so.
Ultimately, assessing significance of the selected variables is necessary, which is in general a difficult task in high-dimensional settings and an active area of research (see for example <cit.>). In brain connectomics, it is particularly challenging to identify significant variables because of small sample sizes <cit.>. Here we employ stability selection <cit.> which can be shown to control a type of false discovery rate by employing many rounds of random data splitting and calculating the probability of each variable being selected. Some versions of this method have been theoretically studied, and upper bounds on the expected number of variables with a low selection probability that are included in the final solution (i.e., errors) have been derived under mild conditions <cit.>. We implemented the version of stability selection proposed by <cit.>, with values of λ and ρ obtained by cross-validation on the COBRE data, and by the “one standard error rule" on the UMich dataset, since stability selection is most relevant to sparse solutions. However, one of the advantages of stability selection is that it is not sensitive to the initial choice of tuning parameters, and changing tuning parameters only slightly alters the ordering of variables with the largest selection probabilities.
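In outline, the procedure repeatedly refits the classifier on random half-samples and averages the selection indicators; the sketch below is a generic version of this scheme, with fit_select standing in for one run of our classifier at fixed (λ, ρ).

```python
import numpy as np

def stability_selection(X, y, fit_select, n_splits=100, seed=0):
    """Estimated selection probability of each variable over random half-samples.

    fit_select: callable (X, y) -> boolean mask of selected variables.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    counts = None
    for _ in range(n_splits):
        idx = rng.choice(n, size=n // 2, replace=False)
        mask = fit_select(X[idx], y[idx]).astype(float)
        counts = mask if counts is None else counts + mask
    return counts / n_splits
```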
The edges with the 15 largest selection probabilities are reported in Table <ref>.
Using the results of <cit.> (equation 8), we estimated that the expected number of falsely selected variables (variables with a probability of selection smaller than the estimated) is bounded by 6.1 for the COBRE dataset and 9.7 for the UMich data, which also suggests that results on the UMich data might be less reliable. While the two datasets yield somewhat different patterns of edge selection, it is notable that the default mode network (5) was often selected in both. This network has been consistently implicated in schizophrenia <cit.>, as well as other psychiatric disorders, possibly as a general marker of psychopathology <cit.>. In the COBRE dataset, edges were also selected from the fronto-parietal task control region (8), previously
linked to schizophrenia <cit.>. These results coincide with the findings of <cit.> on a different parcellation of the same data, which is an encouraging indication of robustness to the exact choice of node locations. Some of the variables with the highest estimated selection probabilities appear in the uncertain system (-1), in particular in the cell connecting it with salience system (9), which suggests that alternative parcellations that better characterize these regions may offer a better account of the schizophrenia-related changes.
Additionally, sensory/somatomotor hand region (1) and salience system (9) also stand out in the UMich data, and these are networks that have also been implicated in schizophrenia <cit.>.
While results in Table <ref> do not fully coincide on the two datasets, there are clear commonalities. Table <ref> compares classification accuracy when the classifier is trained on one dataset and tested on the other (with the exception of the intercept, since the datasets are not centered in the same way, which is fitted on a part of the test data, and the test error is then computed via 10-fold cross-validation). While the accuracy is lower than when the same dataset is used for training and testing, as one would expect, it is still reasonably good and in fact better than some of the benchmark methods even when they train and test on the same data. We again observe that the COBRE dataset is easier to classify.
Figure <ref> shows the active nodes in the COBRE dataset (marked in green), corresponding to the endpoints of the edges listed in Table <ref>. We also identified a set of 25 nodes that are not selected in any of the sparse solutions with cross-validation accuracy within one standard error from the best solution (marked in purple). These consistently inactive nodes are mostly clustered in two anatomically coherent regions.
§ DISCUSSION
We have presented a method for classifying graphs with labeled nodes, motivated by brain connectomics but generally applicable to any setting with such graphs. The distinct feature of our method is that it is graph-aware, aiming to select a sparse set of both edges and nodes, but it is general in the sense that it does not rely on the spatial structure of the brain. The method is computationally efficient since the regularization we use is convex, and the solution is implemented with efficient optimization algorithms. These properties guarantee fast convergence to the solution, making the methods scalable to networks with thousands of nodes, which is enough to deal with many of the brain atlases usually employed in neuroimaging (see for example <cit.>). Statistically, the rate of convergence depends on the number of active nodes only, not the total number of nodes, which allows for accurate results with even moderate sample sizes if the active node set is small.
The results we obtained on the schizophrenia data are generally in agreement with previous studies. In particular, the default mode network has been consistently implicated in schizophrenia and many other psychiatric disorders <cit.>. While different subnetworks were implicated by the two different datasets, we are still able to predict the disease status fairly accurately by training on one dataset and testing on the other. The differences between the two datasets may reflect real differences in samples collected at different sites and in different experiments, as significant pathophysiological heterogeneity occurs for all psychiatric diagnoses, or they may simply reflect type 2 errors.
Our methods work very generally with a sample of networks with labeled nodes and associated responses. The many pre-processing steps inevitable when dealing with fMRI data always add some uncertainty, and pre-processing decisions can potentially affect downstream conclusions. We aimed to somewhat mitigate this by using ranks, which are more robust and showed a slightly better performance on our datasets. Another option, when practical, is to compare multiple pre-processing pipelines, and/or multiple measures of connectivity, to further validate results. Our method's independence of these particular choices and its computational efficiency make it an attractive option for such comparisons.
§ ACKNOWLEDGMENTS
This research was supported in part by NSF grant DMS-1521551, ONR grant N000141612910, and a Dana Foundation grant to E. Levina, NSF training grant DMS-1646108 support for D. Kessler, as well as by computational resources and services provided by Advanced Research Computing at the University of Michigan, Ann Arbor. S. F. Taylor's research is supported by the National Institute of Mental Health (R01MH064148, R21MH086701, R21MH101676), the Boledovich Schizophrenia Research Fund and University of Michigan Clinical Translational Science Award (UL1RR024986).
§ OPTIMIZATION ALGORITHM DETAILS
The optimization procedure for solving the penalized prediction problem introduced above consists of a proximal algorithm, whose steps are detailed in Algorithm <ref>. Each step requires solving a further convex optimization problem via ADMM; the exact updates of this method are shown in Algorithm <ref>.
§ PROOFS
Here we prove the bounds on Frobenius norm error and probability of support selection in Proposition <ref>, following the framework of <cit.> based on geometrical decomposability. A penalty Ω is geometrically decomposable if it can be written as
Ω(B)=h_A(B)+h_I(B) + h_E^⊥(B)
for all B, with A,I closed convex sets, E a subspace, and h_C the support function on C defined as h_C(B)=sup{⟨ Y,B⟩ : Y∈ C}.
The proof proceeds in the following steps.
Lemma <ref> shows that an equivalent form of our penalty (<ref>) is geometrically decomposable, allowing us to use the framework of <cit.>.
Lemma <ref> shows the Assumption <ref> together with a lower bound on ρ imply that the irrepresentability assumption of <cit.> holds. Assumption <ref> is directly on the entries of the loss Hessian, which simplifies the very general form of the assumption in <cit.>. Lemma <ref> gives a bound on the entries of the loss gradient under the sub-gaussianity assumption <ref>. Lemma <ref> gives explicit bounds for the compatibility constants that appear on Theorem 1 of <cit.>. Finally, we combine these results to prove Proposition <ref>.
Without loss of generality, to simplify notation we assume that 𝒢={1,…,G}, that is, the active subgraph is in the first G rows of the matrix.
The penalty (<ref>) can be written as geometrically decomposable.
We use an equivalent formulation of the penalty in which every coefficient is penalized only once. Let B',B”∈^N× N be matrices such that the upper triangular part of B” and the diagonals of B' and B” are zero. Define
Ω̃(B',B”) = ∑_i=1^NB'_(i)_2 + ρB”_1,
and E = {(B',B”)∈^N× 2N : B' = B'^T, B”_ij=B'_ij, for i<j and B”_ij=0 for i≥ j}.
Denote by R the transformation from ^N× N to ^N× 2N that replicates entries appropriately,
(RB)_ij = B_ij if 1≤ j≤ N, and (RB)_ij = B_i(j-N) if N< j≤ 2N.
We then show that Ω̃ is geometrically decomposable. Moreover, for any (B',B”)∈ E we can define R^-1, so the penalties Ω and Ω̃ on E are equivalent. Define the sets A,I⊂^N× 2N such that
A = {(B',B”) : max_i∈𝒢B_(i)'_2 ≤ 1, max_i∈𝒢^CB_(i)'_2=0, max |B”_ij| ≤ρ, B_ij” = 0 for (i,j)∈(𝒢×𝒢)^C },
I = {(B',B”) : max_i∈𝒢^CB_(i)'_2 ≤ 1, max_i∈𝒢B_(i)'_2=0, max |B”_ij| ≤ρ, B_ij” = 0 for (i,j)∈𝒢×𝒢} .
Letting ⟨ Y,(B',B”)⟩=Tr(Y'B'^T) + Tr(Y”B”^T), combining the arguments of <cit.> for lasso and group lasso penalties,
h_A(B',B”) = ∑_i∈𝒢B'_(i)_2 + ρ∑_(i,j)∈𝒢×𝒢|B”_ij|,
h_I(B',B”) = ∑_i∈𝒢^CB'_(i)_2 + ρ∑_(i,j)∈(𝒢×𝒢)^C|B”_ij|,
h_E^⊥(B',B”) = 0 if (B',B”)∈ E, and ∞ otherwise.
Hence, Ω can be written as a geometrically decomposable penalty
Ω(B) = Ω̃(B',B”) = λ{h_A(B',B”)+h_I(B',B”)+h_E^⊥(B',B”)}.
We introduce some notation in order to state the irrepresentability condition of <cit.>. For a set F⊂^N× 2N and Y∈^N× 2N, denote by γ_F(Y) = inf{λ>0 : Y∈λ F} the gauge function of F. Thus,
γ_I(B',B”) = max{max_i∈𝒢^CB_(i)'_2, 1/ρmax_(i,j)∈(𝒢×𝒢)^C|B_ij”| } + 1_I(B',B”),
where 1_I(B)= 0 if B ∈ I and ∞ otherwise. Define
V(Z)=inf{γ_I(Y) : Z-Y∈ E^⊥,Y∈^N× 2N}
for Z∈^N× 2N. Let ℳ̃=E∩span(I)^⊥ be the set of matrices with correct support in the extended space ^N× 2N, similarly to ℳ in (<ref>).
Denote by P_M and P_M^⊥ the projections onto ℳ̃ and ℳ̃^⊥. Define the function ℋ(Z):^N× N→^N× N as
ℋ(Z)_ij = Tr{H_(i,j),𝒢 (P_M Z)_𝒢,𝒢} if j∈𝒢, and ℋ(Z)_ij = 0 otherwise,
where H_(i,j),𝒢 is the matrix defined in (<ref>). The Irrepresentability Assumption 3.2 of <cit.> requires the existence of 0<τ̃<1 such that
sup_Z∈ A V{P_M^⊥(Rℋ(Z) - Z)} < 1 - τ̃.
For a support function h, denote by ∂ h(M)=⋃_Y∈ M∂ h(Y) the set of subdifferentials of h in M. Note that ∂ h_A(M) = A, since 0∈ M and ∂ h_A(0) = A.
If Assumption <ref> holds and ρ>1/τ-1/√(G), then there exists 0<τ̃<1 such that (<ref>) holds.
Since V is sublinear (Lemma 3.3 of <cit.>),
sup_Z∈ A V{P_M^⊥(Rℋ(Z) - Z)}≤sup_Z∈ A V{P_M^⊥(Rℋ(Z))} + sup_Z∈ A V{P_M^⊥Z}.
To bound the first term, note that E^⊥={(Z',Z”)|Z_ij'+Z_ji'+Z_ij”=0, j < i}. Hence,
V(Y',Y”) = inf{γ(U',U”): U_ij'-Y_ij' + U_ji'-Y_ji' + U_ij”-Y_ij”=0, j < i }
≤ inf{γ(U',U”): U_(i)' = Y_(i)', i∈𝒢^C; U_𝒢^C,𝒢^C” = Y_𝒢^C,𝒢^C”;.
. (U',U”)- (Y', Y”)∈ E^⊥}
≤ max{max_i∈𝒢^CY_(i)'_2, 1/ρ Y_𝒢^C,𝒢^C”_∞}.
Therefore,
V(P_M^⊥(Rℋ(Z))) ≤ max{max_i∈𝒢^C(P_M^⊥(Rℋ(Z)))'_(i)_2, 1/ρ(P_M^⊥(Rℋ(Z)))”_𝒢^C,𝒢^C_∞}
= max_i∈𝒢^Cℋ(Z)_(i)_2,
which implies that
sup_Z∈ A V(P_M^⊥(Rℋ(Z))) ≤ sup_Z∈ A{max_i∈𝒢^Cℋ(Z)_(i)_2}
≤ sup_B∈^G× G,B_(i)_2≤ 1{max_i∈𝒢^C(H_(i,j),𝒢B)_j=1^N_2}
≤ max_i∈𝒢^C(∑_k=1^G(H_(i,j),𝒢)_k·_2)_j=1^N_2 = 1-τ.
Let Z=(Z',Z”)∈ A. Without loss of generality, assume that Z'_𝒢,𝒢=Z”_𝒢,𝒢=0 (note that these entries do not change V(P_M^⊥Z)). Therefore, P_M^⊥Z=Z. Hence,
V(Z) = inf{γ(U',U”): (U',U”)∈ I, (U',U”)-(Z',Z”)∈ E }
= inf{γ(U',U”): U'_ij + U”_ij=Z'_ji, 1≤ j ≤ G, G< i ≤ N }
= inf{max{max_i∈𝒢^CU'_(i)_2, 1/ρmax_j∈𝒢, i∈𝒢^C|U”_ij|} : U'_ij + U”_ij=Z'_ji, j∈𝒢, i∈𝒢^C }
≤ inf{max{max_i∈𝒢^CU'_(i)_2, 1/ρmax_j∈𝒢, i∈𝒢^C|U”_ij|} : U'_ij + U”_ij=1, j∈𝒢, i∈𝒢^C}
The last bound follows from |Z'_ji|≤ 1 and no longer depends on Z.
It is easy to see that the minimum is attained when, for each i>G,
U'_(i)_2 = 1/ρ|U”_ij|, 1≤ j≤ G,
and therefore
V(Z) ≤√(G)/1+ρ√(G).
Moreover, if Z^∗∈ A is defined such that (Z^∗)'_G+1,i=1 for i=1,…,G and 0 elsewhere, then V(Z^∗) achieves this bound, which shows that ρ > 1- 1/√(G) is a necessary condition for the irrepresentability to hold, even in the case where the entries of the Hessian relating active and inactive edges are zero. Therefore, plugging the bounds (<ref>) and (<ref>) into equation (<ref>), we obtain that (<ref>) holds as long as 1-τ + √(G)/(1+ρ√(G))<1, which implies that ρ>1/τ-1/√(G).
The next lemma establishes a bound on the dual norm of Ω of the loss gradient under the sub-Gaussian assumption. Let Ω^∗ denote the dual norm of Ω, so Ω^∗(B) = sup{⟨ Y,B⟩ : Ω(Y)≤ 1}.
Under Assumption <ref>,
P(Ω^∗(∇ℓ(B^⋆))>t) ≤ 2N^2 min{exp( - n(1+ρ)^2t^2/Nσ^2), exp( - nρ^2t^2/σ^2)}.
By Hoeffding's inequality for sub-Gaussian variables, for all j,k and t>0,
P(|∇_jkℓ(B^⋆)|>t) = P(|1/n∑_i=1^n∇_jkℓ_i(B^⋆)|>t) ≤ 2exp(-nt^2/σ^2).
Note that (1+ρ)∑_i=1^NB_(i)_2 ≤Ω(B). Let Φ(B)=1/1+ρmax_i=1,… NB_(i)_2. Thus,
Ω^∗(B)≤sup_Y∈^N× N{Tr(YB):Ω_ρ=0(Y)≤1/1+ρ}=Φ(B).
In a similar manner, ρB_1≤Ω(B). Setting Ξ(B) = 1/ρB_∞, we have
Ω^∗(B)≤Ξ(B).
Using (<ref>) and setting Λ=∇ℓ(B^⋆),
P{Ω^∗(Λ)>t} ≤ P{Φ(Λ) > t}
= P{max_1≤ i ≤ NΛ_(i)_2>(1+ρ)t}
≤ P{max_1≤ i ≤ Nmax_j≠ i|Λ_ij|>(1+ρ)t/√(N)}
≤ 2N(N-1)exp{-n(1+ρ)^2t^2/2σ^2(N-1)},
the last inequality obtained by arguments similar to Lemma 4.3 of <cit.>. In the same way, we can also bound the previous quantity using (<ref>) by
P(Ω^∗(Λ)>t) ≤ P(Ξ(Λ) > t)
= P(max_1≤ i ≤ NΛ_(i)_∞>ρ t)
≤ N(N-1)exp(-nρ^2t^2/2σ^2).
Combining (<ref>) and (<ref>), we can obtain equation (<ref>).
For a semi-norm Ψ:^N× N→, let κ_Ψ be the compatibility constant between Ψ and the Frobenius norm, defined as
κ_Ψ = sup{Ψ(B):B_2≤ 1, B∈ M},
and let κ_IC be the compatibility constant between the irrepresentable term and the dual norm Ω^∗ given by
κ_IC = sup{V(P_M^⊥(RℋZ-Z)) : Ω^∗(Z)≤ 1 } .
The following bounds on the compatibility constants hold:
κ_Ω = √(G) + ρ√(G(G-1)),
κ_Ω^∗ ≤ 1/1+ρ,
κ_IC ≤ 3 - τ.
Note that Ω(Y) is maximized on {Y:Y_2≤ 1} when all entries of Y have magnitude equal to 1/√(G(G-1)). Therefore
κ_Ω = G√(G-1/G(G-1)) + ρG(G-1)/√(G(G-1)) = √(G) + ρ√(G(G-1)).
Similarly, (<ref>) implies
κ_Ω^∗≤sup{1/1+ρmax_i∈𝒢B_(i)_2:B_2≤ 1 }≤1/1+ρ.
Finally,
κ_IC = sup{V(P_M^⊥(RℋZ-Z)) : Ω^∗(Z)≤ 1 }
≤ sup{V(P_M^⊥(RℋZ)) : Ω^∗(Z)≤ 1 } + sup{V(P_M^⊥(Z)) : Ω^∗(Z)≤ 1 }
≤ (1 - τ ) + 2 = 3-τ.
Part (a). Since B̂ minimizes the problem (<ref>),
ℓ(B̂) + λΩ(B̂) ≤ℓ(B^⋆) + λΩ(B^⋆).
Rearranging the terms, using Assumption <ref>, by the triangle inequality and the generalized Cauchy-Schwarz inequality,
0 ≥ ℓ(B̂) - ℓ(B^⋆) + λ{Ω(B̂) - Ω(B^⋆)}
≥ ⟨∇ℓ(B^⋆), B̂-B^⋆⟩ + m/2B̂-B^⋆_2^2 - λΩ(B̂ - B^⋆)
≥ -Ω(B̂ - B^⋆)Ω^∗(∇ℓ(B^⋆)) - λΩ(B̂ - B^⋆) + m/2B̂-B^⋆_2^2.
By the argument for computing κ_Ω in (<ref>), Ω(Y)≤{√(N) + ρ√(N(N-1))}Y_2. Rearranging the terms in (<ref>),
B̂-B^⋆_2≤2/m{√(N) + ρ√(N(N-1))}{λ + Ω^∗(∇ℓ(B^⋆))}.
For any ρ, setting λ = 2 √(σ^2log N/n)min{√(N)/1+ρ, 1/ρ}, by Lemma <ref>, with probability at least 1-2/N,
B̂-B^⋆_2 ≤ 4/m{√(N) + ρ√(N(N-1))}λ
≤ 4/m√(σ^2log N/n){√(N) + ρ N}min{√(N)/1+ρ, 1/ρ}
≤ 4/m√(σ^2log N/n) Nmin{1+ρ√(N), 1 + 1/ρ√(N)}
≤ 4/m√(N^2σ^2log N/n).
Part (b). Lemma <ref> gives a geometric decomposition of the penalty. Therefore, we can directly use Theorem 3.1 of <cit.>, since Lemma <ref> also ensures that their irrepresentability condition holds. Thus,
B̂-B^⋆_2≤2/mκ_Ω(1 + τ/4κ_IC)λ,
and 𝒢̂⊆𝒢 as long as
4κ_IC/τΩ^∗(∇ℓ(B^⋆)) < λ < m^2τ/2Lκ_Ω^2κ_Ω^∗κ_IC(1+τ/4κ_IC)^-2.
Setting
λ = 8κ_IC/τ√(σ^2log N/n)min{√(N)/1+ρ, 1/ρ},
using an argument similar to (<ref>), the left-hand side of (<ref>) holds with probability at least 1-2/N. The right-hand side of (<ref>) holds as long as the sample size satisfies
n > C(L, m, τ, κ_Ω, κ_Ω^∗, κ_IC) (√(G) + G)^2σ^2log N,
with C(L, m, τ, κ_Ω, κ_Ω^∗, κ_IC)>0 a positive constant. Therefore, claims (<ref>) and (<ref>) follow.
§ DATA AQUISITION AND PRE-PROCESSING
§.§ Subjects and imaging
§.§.§ The COBRE data
Raw anatomic and functional scans from 146 subjects (72 psychosis patients and 74 healthy control subjects) were downloaded from a public database (<http://fcon_1000.projects.nitrc.org/indi/retro/cobre.html>). Four subjects coded as ambidextrous (2 patients, 2 controls) were excluded to yield 70 psychosis patients and 72 controls for analysis. To enter the COBRE dataset, subjects had a diagnosis of either schizophrenia or schizoaffective disorder and were without histories of neurological disorder, mental retardation, severe head trauma with more than 5 minutes loss of consciousness and substance abuse/dependence within the last 12 months.
In the primary sample, two schizophrenic (SCZ) subjects and one healthy control (HC) subject had insufficient voxels in the cerebrospinal fluid (CSF) segmentation, and they were dropped from additional analyses. Two additional SCZ subjects were excluded for scrub ratios (see discussion of scrubbing routine in fMRI Data Analysis) greater than 0.6, leaving 38 SCZ subjects and 42 HC subjects for the analysis. In the replication sample, 15 psychosis patients and two control subjects were excluded for scrub ratios greater than 0.6; one patient was excluded for incomplete data, leaving 54 SCZ and 69 HC subjects for analysis (see Table <ref>).
A full description of the imaging parameters for the COBRE dataset is available online at the link provided above and in <cit.>.
§.§.§ The UMich data
Subjects were selected from experiments conducted by Professor Stephan F. Taylor at the University of Michigan between 2004 and 2011 for task-based fMRI studies, which included resting state scans. Thirty-nine stable outpatients were selected with DSM-IV schizophrenia or schizoaffective disorder (SCZ) <cit.>. Forty healthy comparison (HC) subjects, without a lifetime history of Axis I psychiatric disorders <cit.>, were selected to approximate the age range, gender distribution and family education level of the patients. Prior to initial data collection, all subjects gave written, informed consent to participate in the protocol approved by the University of Michigan institutional review board (IRBMED).
MRI scanning occurred on a GE 3T Signa scanner (LX [8.3] release, General Electric Healthcare, Buckinghamshire, United Kingdom). Functional images were acquired with a T2*-weighted, reverse spiral acquisition sequence (gradient recalled echo, TE=30 msec, FA=90 degrees, field of view=22 cm, 40 slices, 3.0mm thick/0mm skip, equivalent to 64 x 64 voxel grid – yielding isotropic voxels 3 mm on edge). Because the data were acquired across different experiments, acquisition parameters differed slightly in the aggregate sample: 240 volumes at TR=1500 msec (11 SCZ, 10 HC), 180 volumes at TR=2000 msec (17 SCZ, 16 HC) and 240 volumes at 2000 msec (14 SCZ, 17 HC). Acquisitions were acquired in the resting state with eyes open and fixated on a large ‘plus’ sign projected on a monitor.
§.§ Pre-processing
We first performed standard pre-processing steps. All scans were slice-time corrected and realigned to the 10th image acquired during a scanning session <cit.>. Subsequent processing was performed with the Statistical Parametric Mapping SPM8 package (Wellcome Institute of Cognitive Neurology, London). Anatomic normalization was done with the VBM8 toolbox in SPM8, using the high resolution structural scans obtained for both datasets. Normalizing warps were applied to the co-registered, functional volumes, which were re-sliced and smoothed with an 8 mm isotropic Gaussian smoothing kernel. To assess and manage movement, we calculated the frame-wise displacement (FD) <cit.> for all 6 parameters of rotation and translation. We used a scrubbing routine to censor any frame with FD > 0.5 mm from the regression analysis described below, yielding a scrub ratio for each subject. Three-compartment segmentation of the high-resolution structural image from the VBM8 normalization was applied to the functional time series to extract cerebrospinal fluid (CSF) and white matter (WM) compartments, which were then subjected to a principal component analysis to identify the top 5 components in each <cit.>, which should correspond to heart rate and respiratory effects on global signal <cit.>. Multiple regressions were applied to the time series to remove the following nuisance effects: linear trend, 6 motion parameters, their temporal derivatives, the quadratics of these 12 parameters, 5 components from the PCA of CSF, 5 components of PCA of WM, followed by band pass filtering from 0.01 – 0.1 Hz, and then motion scrubbing. For each 4D data set, time courses were then extracted from 10 mm diameter spheres based on the 264 sets of coordinates identified by <cit.>. From these time series, a cross-correlation matrix of Pearson r-values was obtained and Fisher's R-to-Z transformation was applied for each of the 264 nodes with every other node (for the COBRE dataset, node 75 is missing). Finally, for each individual, edge weights were assigned to be ranks of these scores, with edge scores ranked separately for each subject, and then these values were centered and standardized across the individuals. Ranks have been used previously in brain connectomic studies to reduce the effect of potential outliers <cit.>; although some information is lost with the rank transformation, we observed that while ranks do not increase the classification accuracy significantly, they tend to produce sparser solutions with a similar accuracy to Pearson correlations.
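The last stage of this pipeline (correlation, Fisher transform, within-subject ranking, across-subject standardization) can be summarized as follows; this is a schematic reconstruction, not the original processing code, and it assumes all subjects share the same node set.

```python
import numpy as np
from scipy.stats import rankdata

def connectivity_features(time_series):
    """time_series: list of (T_i, N) arrays of node time courses per subject."""
    feats = []
    for ts in time_series:
        r = np.corrcoef(ts.T)                          # N x N Pearson correlations
        iu = np.triu_indices(r.shape[0], k=1)
        z = np.arctanh(np.clip(r[iu], -0.999, 0.999))  # Fisher R-to-Z transform
        feats.append(rankdata(z))                      # ranks within each subject
    F = np.vstack(feats).astype(float)
    return (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-12)  # standardize each edge
```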
http://arxiv.org/abs/1701.08036v1 | 20170127124357 | Friction of rubber with surfaces patterned with rigid spherical asperities | D. T. Nguyen, S. Ramakrishna, C. Fretigny, N. D. Spencer, Y. Le Chenadec, A. Chateauminois | cond-mat.soft | cond-mat.soft
Corresponding author: antoine.chateauminois@espci.fr
1. Soft Matter Science and Engineering Laboratory (SIMM), UMR CNRS
7615,Ecole Supérieure de Physique et Chimie Industrielles (ESPCI), Université Pierre et Marie Curie, Paris (UPMC), France
2. Laboratory for Surface Science and Technology, Department of Materials, ETH Zurich, Wolfgang-Pauli-Strasse 10, 8093 Zurich, Switzerland
3. Manufacture Francaise des Pneumatiques Michelin, Centre de Technologies, Clermont-Ferrand, France
This paper reports on the frictional properties of smooth rubber substrates sliding against rigid surfaces covered with various densities of colloidal nano-particles (average diameter 77 nm). Friction experiments were carried out using a transparent poly(dimethyl siloxane) (PDMS) rubber contacting a silica lens with silica nano-particles sintered onto its surface. Using a previously described methodology (Nguyen et al., J. of Adhesion 87 (2011) 235-250), surface shear stress and contact-pressure distribution within the contact were determined from a measurement of the displacement field at the surface of the PDMS elastomer. Addition of silica nano-particles results in a strong, pressure-independent enhancement of the frictional shear stress as compared to the smooth lens. The contribution of viscoelastic losses to these increased frictional properties is analyzed in the light of a numerical model that solves the contact problem between the rubber and the rough surface. An order-of-magnitude agreement is obtained between experimental and theoretical results, the latter showing that the calculation of viscoelastic dissipation within the contact is very sensitive to the details of the topography of the rigid asperities.
46.50+d Tribology and Mechanical contacts;
62.20 Qp Friction, Tribology and Hardness
Friction of rubber with surfaces patterned with rigid spherical asperities
A. Chateauminois^1
January 27, 2017
==========================================================================
§ INTRODUCTION
Rubber friction is a topic of huge practical importance in many applications, such as tires, rubber seals, conveyor belts and syringes, to mention only a few. As a consequence, frictional properties of soft elastomers have motivated several investigations for over half a century (for an historical perspective, the reader is referred to the paper by Sills et al. <cit.>). The velocity and temperature dependence of the frictional properties of commercial rubbers was first explored in early studies by mechanical engineers (see e.g. <cit.>). In a seminal work by Schallamach <cit.>, this dependence was accounted for by thermally and stress-activated pinning/depinning mechanisms between rubber molecules and the contacting surface. While Schallamach focused on molecular processes at the frictional interface, other studies <cit.> pointed out that a fraction of the energy dissipated during sliding motion is also due to viscoelastic losses resulting from the deformation of the soft rubber in the contact zone. These processes were first evidenced by Greenwood and Tabor <cit.> in a series of experiments, in which hard spheres and cones were sliding or rolling on well-lubricated rubber surfaces. The selected lubrication conditions ensured that the thickness of the thin lubrication film was larger than the amplitude of surface roughness. Under such conditions, most of the friction force is assumed to arise from deformational losses within the rubber. These experimental results were analyzed using a simple model based on an empirical estimate of the fraction of the input elastic energy that is lost by hysteresis. This model was recently refined by Persson <cit.>, using an approach in which the hysteretic losses are explicitly taken into account from the relaxation spectrum of the viscoelastic substrate.
Early experimental studies with single asperity contact were subsequently extended to the more complex situation of rubber sliding on microscopically rough surfaces. In a seminal work <cit.>, Grosch examined the velocity and temperature dependence of the friction of filled rubbers against hard surfaces. In the case of rough tracks, a maximum in friction was found to occur at a sliding velocity related to the frequency with which the asperities of the rough surface deform the rubber surface. This maximum was absent on a smooth track, thus reflecting the deformation losses induced by the passage of the asperities over the rubber surface. From a theoretical point of view, Fourier methods of analysis can be employed to develop linear viscoelastic stress and displacement solutions for use in rough contact problems. Using such approaches, exact solutions for the deformation component of friction have been derived, as an example, for periodic arrays of identical asperities sliding against a power law viscoelastic materials <cit.> or in the limiting case of a perfectly conforming contact between a rubber substrate and a stochastic surface <cit.>. A more general contact-mechanics model for randomly rough surfaces was recently developed by Persson <cit.>. Using a spectral description of the topography of the rough surfaces, this theory predicts how the component of friction force associated with hysteretic losses varies with velocity and contact pressure from an estimate of the actual contact area. Some experimental results tend to support this theory <cit.> but a detailed examination of the effects of surface topography on rubber friction remains very challenging in the case of randomly rough surfaces, whose characteristic length scales usually range over several order of magnitudes.
In this study, we take advantage of a technique developed by Huwiler et al. <cit.> and Kunzler et al. <cit.>, which involves the sintering of colloidal silica nanoparticles onto silica surfaces. Using this technique, surfaces covered with various densities of spherical asperities with well-defined sizes and height distribution can be prepared. To some extent, such surfaces are reminiscent of the model surfaces considered in the rough contact theory by Greenwood and Williamson <cit.>, in which spherical asperities with identical radius of curvature are assumed to be statistically distributed along the vertical direction. Experimentally, such patterned surfaces are of particular interest for rubber-friction studies because they offer the possibility to introduce roughness at a given length scale. Accordingly, the frequency distribution associated with the deformation of the rubber surface by the asperities is well controlled, as well as the volume of the viscoelastic substrate that is affected by hysteretic losses. This possibility is exploited here for a quantitative investigation of the hysteretic contributions to friction arising from localized viscoelastic dissipation at the nano-asperity scale. In a first section, the friction of such patterned silica surfaces against silicone rubber is investigated as a function of particle density, contact pressure and velocity. Using a previously developed contact-imaging methodology <cit.>, the shear and pressure distributions at the frictional interface are determined from a measurement of the displacement field at the surface of the PDMS rubber. From these results, the pressure dependence of the frictional shear stress is discussed. In a second part, the experimental results are analyzed in the light of a theoretical contact model, which allows the role of viscoelastic losses associated with substrate deformation by the nano-asperities to be evaluated.
§ EXPERIMENTAL AND NUMERICAL DETAILS
§.§ Materials
A commercially available, transparent poly(dimethyl siloxane) (PDMS) silicone (Sylgard 184, Dow Corning, Midland, MI) was used as an elastomeric substrate. In order to monitor contact-induced surface displacements, a square network of small cylindrical holes (diameter 20 μm, depth 5 μm and center-to-center spacing 400 μm) was produced on the PDMS surface by means of conventional micro-lithography techniques (see reference <cit.> for details). Under transmitted-light observation conditions, this pattern appears as a network of dark spots that are easily detected by means of image processing. In order to prepare these marked PDMS surfaces, a resin template with a network of cylindrical pillars is first fabricated on a silicon wafer by means of soft micro lithography. The reactive silicone mixture in stoichiometric proportions (10:1 by weight) is then directly molded onto this template and cured in an oven at 70^∘C for 48 hours. The specimen size is 6cm×3cm×1.5cm. Before use, PDMS specimens were thoroughly washed with isopropanol and subsequently dried under vacuum.
Millimeter-sized contacts were achieved between the PDMS substrate and plano-convex silica lenses (Melles Griot, France) with a radius of curvature of 9.4 mm. The r.m.s. roughness of the as-received lens is about 0.3 nm, as measured from 1 × 1 μm^2 AFM pictures. The silica lenses were decorated with sintered silica nano-particles using a previously developed procedure fully described in references <cit.>. The method relies upon the simple electrostatic attraction of negatively charged silica nanoparticles onto the silica lens surface, previously rendered positively charged by coating with poly(ethylene imine). For that purpose, the coated lens was immersed in an aqueous silica nanoparticle suspension (diameter ≈ 73 nm, purchased from Microspheres - Nanospheres, Cold Spring, NY). In order to achieve different particle densities, the immersion time of the lens was varied between 10 and 30 minutes. After particle adsorption, the lenses were dried with nitrogen and sintered at 1080^∘C for two hours to remove any polymer on the surface and to partially sinter the nanoparticles to the surface. A smooth reference lens was also prepared using the same procedure, but without any nano-particles. This procedure ensured that the surface of the smooth lens was in the same physical and chemical state as that of the patterned surfaces. The patterned lenses were characterized by atomic force microscopy (AFM) and scanning electron microscopy (SEM). AFM images shown in Figure <ref> indicate a uniform distribution of nanoparticles on the lens surface with only a few aggregates. The average density of the nanoparticles increases from 7.3 particles/μm^2 to 29.5 particles/μm^2 when the adsorption time is changed from 10 to 30 minutes. In AFM measurements, the tip end radius can be estimated to be of the order of magnitude of the nanoparticle size. As a result, it is not possible to extract any accurate information regarding the shape of the nanoparticles, except their heights relative to the lens surface. The measured average particle height above the substrate is 55 ± 10 nm. The average particle diameter (77 nm) was obtained independently from SEM observations.
§.§ Friction and displacement-field measurements
Friction experiments were carried out using a home-built device, which is described in reference <cit.>. Experiments were performed under imposed normal load (between 1.4 and 5.3 N) and velocity (between 0.01 and 1 mm s^-1). The PDMS substrate was displaced with respect to the fixed glass lens by means of a linear translation stage. Lateral displacement and force were continuously monitored with a non-contact laser transducer (Keyence, France) and a strain gage transducer (Entran, France), respectively. Images of the contact zone were continuously recorded through the transparent PDMS substrate by means of a zoom lens and a CMOS camera. This system was configured to a frame size of 1024x1024 pixels with frame rates ranging from 0.3 Hz to 30 Hz. The measured friction forces were observed to vary slightly from one PDMS specimen to another and also as a function of the age of the specimen. As a consequence, all the experimental data to be compared in this paper were obtained using a single PDMS specimen and in a limited time.
The measurement of the surface lateral displacement field was based on the detection of the markers at the surface of the PDMS substrate by means of image processing. Image accumulation under steady-state sliding allows a spatial resolution of about 20 μm to be achieved. Vertical displacements within the contact zone were also deduced from a measurement of the indentation depth of the lens and a knowledge of its radius of curvature. The shear and contact-pressure distribution within the contact were determined from the measured displacement field with a previously developed finite-element (FE) inversion procedure, taking into account the material and geometrical non-linearities of the problem. For full details regarding displacement field measurements and the inversion procedure, the reader is referred to reference <cit.>.
§ EXPERIMENTAL RESULTS
§.§ Friction vs asperity density
Sintering nano-particles onto the silica lens systematically leads to an increase in the observed friction force. At the same time, the shape and area of the contact under steady-state sliding are also found to vary with the nano-particle density. In order to allow for a comparison between the various patterned surfaces, these changes in both the friction force and in the contact geometry were accounted for by considering the average frictional shear stress instead of the friction force. Here, the average shear stress is defined as τ=F_T/A, where A is the measured macroscopic contact area under steady-state sliding and F_T is the friction force. Figure <ref> shows the change in the measured average frictional shear stress, τ, as a function of the nano-particle density under imposed normal load and sliding velocity. Patterning the silica surface results in a clear enhancement of the shear stress: increasing the particle density up to 30 particles/μm^2 results in a twofold increase in the frictional shear stress as compared to the smooth contact. Moreover, this increase with particle density is compatible with a linear relationship, suggesting that nano-particles contribute to friction independently of each other.
The distribution of the shear stress within the contact was further considered from the inversion of the measured surface-displacement field. Figure <ref> shows a typical example of the shear and contact-pressure distribution achieved under steady-state sliding. A maximum in the contact pressure is clearly visible at the center of the contact, reminiscent of a Hertzian distribution. As indicated by the profile in Figure <ref>b, the frictional shear stress exhibits a gradient along the sliding direction which is uncorrelated with the pressure distribution. This gradient has already been reported for similar contacts <cit.> and it will be the topic of a separate study. Within the framework of the present study, it is without significance as it does not alter the main conclusion that the shear-stress distribution within both smooth and rough contacts is independent of the contact pressure. This feature is preserved for both nano-particle densities.
As shown in Figure <ref>, this result is further confirmed by a series of experiments where the normal load is varied from 1.4 N to 5.3 N for a given particle density (ϕ=29.5 particles/μm^2) without any change in the local shear stress within experimental accuracy. When two rough bodies are pressed together, contact is usually assumed to occur at discrete, localized, contact spots. Within such multi-contact interfaces, the level of local shear stress should therefore vary as a function of the contact pressure by virtue of the associated changes in the actual contact area. Here, the fact that the local shear stress does not vary with the contact pressure suggests that the contact between the smooth PDMS substrate and the patterned silica surface is nearly saturated, i.e. that the actual contact area does not significantly vary within the considered pressure range.
§.§ Friction vs velocity
Figure <ref> shows the changes in the average contact shear stress as a function of the imposed sliding velocity for a smooth and a patterned (ϕ=6.7 particles/μm^2) silica lens. In the case of the smooth lens, a weak, nearly logarithmic, velocity dependence is observed, as previously reported for similar smooth glass/PDMS contacts <cit.>. It turns out that the velocity dependence of the shear stress is slightly enhanced in the case of the patterned lens. This is further confirmed when the difference between the average shear stress of the rough and smooth contacts is considered, as shown by black squares in Figure <ref>. A potential explanation for this effect would be that some additional viscoelastic dissipation is induced on the scale of the sliding nano-asperities, as a result of localized surface deformation of the PDMS rubber. Accordingly, the PDMS surface would be strained by the nano-particles at a characteristic frequency of the order of v/d, where v is the sliding velocity and d the particle diameter. Depending on the sliding velocity, strain frequencies in the range 10^2-10^4 Hz can thus be achieved locally on the PDMS surface in the sub-micrometer range. Although the selected PDMS is not a highly viscoelastic rubber, a significant increase in the shear loss modulus (G") is measured by DMTA over such a frequency range (see Appendix A). Nano-asperity-scale viscoelastic contributions to friction could therefore, at least partially, account for the observed increase in the shear stress when nano-particles are present on the silica lens. This hypothesis is further considered in the following section.
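The quoted frequency range follows directly from f ≈ v/d with d = 77 nm, as the following check illustrates:

```python
d = 77e-9                     # particle diameter (m)
for v in (1e-5, 1e-3):        # sliding velocities 0.01 and 1 mm/s
    print(f"v = {v:.0e} m/s -> f ~ v/d = {v / d:.1e} Hz")
# ~1.3e2 Hz and ~1.3e4 Hz, i.e. the 10^2-10^4 Hz range quoted above
```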
§ ROUGH-CONTACT MODEL
The above results indicate that frictional shear stress is significantly enhanced by the presence of nano-particles on the surface of the silica lens. Moreover, this increase is compatible with a linear dependence on the particle density, suggesting that the contributions of nano-particles to friction are additive. In the following section, we will discuss these results within the framework of the classical Bowden and Tabor 'two term' model <cit.>. Accordingly, the frictional force is assumed to arise from two independent contributions, denoted as the adhesive and the ploughing terms. The so-called adhesive term encompasses all the dissipative mechanisms occurring at the points of intimate contact between the solids, i.e. on length scales lower than the asperity size. The ploughing term corresponds to the force required to displace the rubber material from the front of the rigid nano-asperities. Here, it represents the contribution of the viscoelastic losses involved in the deformation of the rubber substrate by the nano-asperities. Rewritten in terms of shear stress, this model can be expressed as follows
τ=τ_a+τ_v
where τ_a is the adhesive term and τ_v is the viscoelastic (ploughing) term corresponding to deformation at the nano-asperity scale. As a first approximation, the viscoelastic component τ_v can be evaluated by a simple scaling argument, in which the viscoelastic dissipation is assumed to occur within a volume of the order of a^3, where a is the radius of the contact formed between the rubber substrate and a nano-asperity. The energy U dissipated during the deformation of a single asperity contact can thus be written as
U≈ E^"ϵ^2 a^3
where ϵ is the average contact strain and E^" is the loss component of the complex Young's viscoelastic modulus at a characteristic frequency of the order of v/a, where v is the sliding velocity. Taking ϵ≈ a/R, where R is the radius of the nano-asperity, it follows that
U ≈ E^"a^5/R^2
The energy U is dissipated when the asperity travels over a distance of the order of the contact size. The corresponding force can thus be written as
f_v≈dU/da=E^"a^4/R^2
Under the assumption of non-interacting asperities, the total viscoelastic shear stress can thus be expressed as follows
τ_v ≈ϕ E^"a^4/R^2
where ϕ is the number of asperities per unit surface area. Accordingly, the viscoelastic component of the frictional shear stress should be proportional to the particle density and to the loss modulus of the rubber substrate at the characteristic strain frequency imposed by the spherical asperities. The pressure dependence of the shear stress is embedded in the term a^4/R^2, which describes the local contact conditions on the asperity scale.
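As an illustration of this scaling, the following R sketch evaluates τ_v for one set of input values. All numbers are assumptions chosen for order-of-magnitude purposes only and are not the measured parameters of the experiments.

# Scaling estimate of the viscoelastic shear stress, tau_v ~ phi * Eloss * a^4 / R^2.
# All numerical values below are illustrative assumptions, not measured inputs.
phi   <- 30e12      # asperity density: 30 particles/um^2 expressed in m^-2
Eloss <- 1e6        # assumed loss modulus E'' at the relevant frequency (Pa)
R     <- 38.5e-9    # asperity radius (m)
a     <- 20e-9      # assumed asperity-contact radius (m)
tau_v <- phi * Eloss * a^4 / R^2
cat("tau_v ~", signif(tau_v, 2), "Pa\n")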
More refined simulations of the viscoelastic component of the frictional shear stress, based on the same ideas, were carried out using a numerical model that solves the normal contact problem between a rigid rough surface and a smooth, linear viscoelastic substrate. In this approach, the friction force is assumed to arise only from the viscoelastic dissipation resulting from the deformation of the substrate under the action of the contact pressure. The corresponding frictional shear stress can then be derived explicitly as a function of the displacements in Fourier space. In addition, the displacements can be related to the applied contact pressure through the expression of the Green's tensor <cit.> in Fourier space <cit.>, which simplifies to the following expression in the case of an incompressible substrate
ũ_z = G̃_z p̃
Here ũ_z and p̃ denote the Fourier transforms of the vertical surface displacement and contact pressure, respectively, and G̃_z corresponds to the Fourier transform of the Green's tensor component along the vertical direction. The main issue remains the estimation of the displacement field, which is in general unknown unless intimate contact is achieved between the surfaces. In order to determine the vertical displacement field, we used a numerical method to solve the viscoelastic normal-contact problem. The algorithm was initially proposed by Polonsky and Keer <cit.> and is further detailed in reference <cit.>. A conjugate gradient method is used, and the displacement is calculated in Fourier space. The calculation of the friction force is performed as described in reference <cit.>. The problem is written for steady-state conditions <cit.>, and periodic boundary conditions are introduced. The advantage of this method is that it provides an exact spectral description of the deformed surface, from which the viscoelastic dissipation can be estimated. In addition, this contact model is able to handle potential effects arising from elastic coupling between neighboring asperities. In order to account for viscoelasticity, the elastic Young's modulus of the substrate is replaced by the complex viscoelastic modulus in the expression of the Green's tensor component. In a spirit similar to that of Persson's model <cit.>, this contact model is thus based on a spectral description of the surfaces, which can theoretically incorporate the entire frequency spectrum involved in the deformation of the viscoelastic substrate.
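As an illustration of the spectral relation above, the following R sketch evaluates the surface displacement induced by a sinusoidal pressure on an elastic, incompressible half-space in one dimension, using the standard half-space kernel G̃_z(q) = 2/(E^*|q|). This elastic, one-dimensional setting and the material constants are simplifying assumptions, not the two-dimensional viscoelastic model used in the simulations.

# Spectral evaluation of surface displacement from a prescribed pressure:
# u_z(q) = G_z(q) * p(q), with the elastic half-space kernel G_z(q) = 2/(E* |q|).
# A 1-D periodic sketch; the paper's model is 2-D and viscoelastic.
N  <- 1024                      # grid points
L  <- 1e-6                      # period (m)
x  <- (0:(N-1)) * L / N
E  <- 1.5e6; nu <- 0.5          # assumed PDMS modulus (Pa) and Poisson ratio
Estar <- E / (1 - nu^2)
p  <- 1e5 * cos(2*pi*x/L)       # zero-mean pressure profile (Pa)
q  <- 2*pi/L * c(0:(N/2), -(N/2-1):-1)     # angular wavenumbers, fft ordering
Gz <- ifelse(q == 0, 0, 2/(Estar*abs(q)))  # kernel; drop the q = 0 (rigid) mode
uz <- Re(fft(Gz * fft(p), inverse = TRUE)) / N
cat("peak displacement ~", signif(max(uz), 2), "m\n")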
Two-dimensional calculations are carried out using a flat surface covered with a random distribution of identical asperities with hemispherical (R= 38.5 nm) caps and a height of 55 nm as determined experimentally. The viscoelastic properties of the PDMS rubber are described using a generalized Maxwell model whose parameters are fitted to the experimental data (see Appendix A). It can be noted in passing that, in the experiments, the selected specimen size ensures that semi-infinite contact conditions are achieved during sliding experiments (i.e. the ratio of the substrate thickness to the contact radius is greater than ten <cit.>) in accordance with the theoretical contact model.
As a validation of the model, we first present some results corresponding to a suspended-state contact situation, in which contact only occurs at the top of the spherical-capped asperities. From the results shown in Figure <ref>a, the relationship between the viscoelastic shear stress and the contact pressure is seen to obey a power-law dependence. The same conclusion holds for the dependence of the shear stress on the particle density (Figure <ref>b). Power-law fits of these numerical data provide τ_v ∝ p^1.43ϕ^-0.4. These results can be compared to the theoretical prediction of equation (<ref>). Indeed, if a Hertzian contact is assumed to occur between the deformable substrate and the rigid asperities, equation (<ref>) can be rewritten as
τ_v ≈ E^"R^-2/3( p/E')^4/3ϕ^-1/3
where E' is the storage component of the viscoelastic modulus at the characteristic loading frequency. Accordingly, τ_v ∝ p^1.33ϕ^-0.33, which is very close to the prediction of the numerical simulations. In the following section, we consider a more complex situation, in which contact occurs between the rubber substrate and both the asperities and the base plane of the rough surface. This situation is likely to correspond to the experimental conditions and will be compared to the theoretical predictions.
§ COMPARISON TO EXPERIMENTAL DATA AND DISCUSSION
In order to extract the viscoelastic component, τ_v, from the measured shear stress, we identify the adhesive component τ_a with the frictional shear stress measured with the smooth lens. This assumption implies that the enhancement of τ_a arising from the increased area of intimate contact in the presence of nano-asperities is neglected. This hypothesis is justified by a simple calculation, which shows that for the highest asperity density (30 particles/μm^2), the maximum increase in the actual (intimate) contact area would be only 15%, while the shear stress is increased by a factor of about two compared to that on the smooth lens. Figure <ref> shows the calculated and experimental viscoelastic shear stress as a function of the particle density. For all these calculations, partial (unsaturated) contact conditions were found to occur between the rubber substrate and the rough surface, with the rubber touching both the top of the asperities and some parts of the flat base plane. The linear relationship between τ_v and the particle density is retrieved by the numerical simulations, but with a slope that is about three times higher than the experimental one. However, this semi-quantitative agreement between experimental and simulated data is reasonable, if one considers all the uncertainties associated with the model parameters (such as the particle shape and the determination of the viscoelastic behaviour law).
In particular, it is interesting to consider the fluctuations in the calculated viscoelastic shear stress that are induced by changes in the height of the particles above the flat surface. Results reported in Figure <ref> show that the level of viscoelastic dissipation is very sensitive to this parameter: a 15 nm increase in the asperity height above the surface can result in a twofold increase in the viscoelastic shear stress.
It is also interesting to compare the numerical prediction of Figure <ref> to a simple calculation using equation (<ref>) in the limiting case of an intimate contact between the rubber surface and hemispherical asperities. When a ≈ R, this expression reduces to
τ_v ≈ϕ G^" R^2
Accordingly, the slope of the τ(ϕ) relationship can be calculated from the radius of curvature of the nano-asperities and from the measured value of G^" at the characteristic frequency defined by v/R. For v=1 mm s^-1 and R=38.5 nm, the value obtained (dτ/dϕ≈ 4 × 10^-9 N) is about one order of magnitude lower than the value from the numerical simulations (dτ/dϕ≈ 5 × 10^-8 N). This discrepancy between the two calculations calls into question the relevance of a single average frequency, v/a, for describing the viscoelastic response of the substrate at the asperity scale. Depending on the shape of the asperity and on the contact condition, the strain frequency can in fact be distributed over a wide spectrum. This point becomes evident if the limiting case of hemispherical caps in intimate contact with the viscoelastic substrate is considered. In such a situation, infinite strain frequencies will be achieved at the periphery of the contact, while the frequency will vanish at the center of the contact. Such effects are also evidenced in more realistic numerical simulations, in which the ratio of the asperity height, h, to the radius of curvature, R, is varied at constant asperity diameter (Figure <ref>). A strong increase in the calculated viscoelastic shear stress is observed when h/R → 1, i.e. when the tangent to the surface of the asperity becomes close to vertical at the periphery of the asperity contact.
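The mismatch can be made explicit with a short R calculation based on the numbers just quoted; inverting τ_v = ϕ G^" R^2 for an implied loss modulus is only an order-of-magnitude bookkeeping exercise.

# Order-of-magnitude comparison around tau_v ~ phi * G'' * R^2 (intimate contact limit).
R <- 38.5e-9                          # asperity radius (m)
v <- 1e-3                             # sliding velocity (m/s)
cat("characteristic frequency v/R ~", signif(v/R, 2), "Hz\n")
slope_scaling <- 4e-9                 # d(tau)/d(phi) from the scaling estimate (N)
slope_numeric <- 5e-8                 # d(tau)/d(phi) from the simulations (N)
# Loss modulus each slope would imply if tau_v = phi * G'' * R^2 held exactly:
cat("implied G'':", signif(slope_scaling/R^2, 2), "vs",
    signif(slope_numeric/R^2, 2), "Pa\n")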
§ CONCLUSION
In this paper, we have investigated the frictional properties of a smooth rubber substrate sliding on a rigid surface covered with mono-disperse colloidal asperities. Such 'model' rough surfaces with well-controlled asperity shape and size offer the possibility to revisit, in simplified contact situations, the current theoretical description of rubber friction on rough surfaces. Here, the emphasis was placed on the so-called hysteretic component of friction that arises from the localized viscoelastic deformation of the rubber surface by the rigid asperities. We have found that the observed increase in the shear stress in the presence of colloidal asperities can be accounted for semi-quantitatively by a viscoelastic contact model that is based on a spectral description of the rough surfaces. However, it turns out that the calculated shear stress is highly sensitive to the geometrical details of the rigid asperities. In particular, the high-frequency strain components corresponding to elevated asperity slopes seem to make a dominant contribution to hysteretic friction. As a consequence of the uncertainties regarding the actual asperity shape and height distribution, as well as the viscoelastic properties of the rubber, it seems unrealistic to expect better than an order-of-magnitude estimate of the shear stress, even for such simplified model surfaces. More generally, these results highlight the problem of the accuracy of current theoretical predictions of hysteretic friction in the much more complex case of statistically rough surfaces. It is likely that the associated spectral description of the surfaces does not allow for the level of accuracy required to yield more than order-of-magnitude estimates of the hysteretic friction force. In addition, our contact model, like others, is based on a linear viscoelastic description of the rubber behavior. The contribution of the finite strains that are likely to be achieved within the contact remains to be evaluated.
Part of this work was supported by the National Research Agency (ANR) within the framework of the DYNALO project (project NT09 499845). SR and NDS wish to thank the ETH Research Commission for their financial assistance. The authors are also grateful to B. Bresson (SIMM) for his kind help with the AFM measurements.
§ VISCOELASTIC PROPERTIES OF PDMS
The linear viscoelastic properties of the PDMS rubber were determined using Dynamic Mechanical Thermal Analysis (DMTA). PDMS disks (2 mm in thickness and 8 mm in diameter) are sheared at low strain (between 0.02% and 0.05% depending on the temperature) between the parallel plates of a rheometer (Anton Paar, MCR 501). Isothermal steps with 3^∘C increments were carried out between -77^∘C and 23^∘C. At each isothermal step, the shear modulus is measured during a frequency sweep between 0.01 Hz and 50 Hz, after thermal equilibration of the specimen for 10 minutes. Figure <ref> shows the resulting master curve at a reference temperature of 21^∘C. The solid lines correspond to the generalized Maxwell model fitted to the experimental data.
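For reference, the storage and loss moduli of a generalized Maxwell model are given by the standard Prony-series expressions, evaluated in the short R sketch below; the branch moduli and relaxation times shown are placeholders, not the fitted parameters of this study.

# Storage and loss shear moduli of a generalized Maxwell (Prony series) model:
# G'(w)  = G0 + sum_i G_i (w t_i)^2 / (1 + (w t_i)^2)
# G''(w) =      sum_i G_i (w t_i)   / (1 + (w t_i)^2)
# The branch moduli G_i and times t_i below are placeholders, not fitted values.
G0  <- 5e5                              # equilibrium modulus (Pa)
Gi  <- c(2e5, 1e5, 5e4)                 # branch moduli (Pa)
ti  <- c(1e-3, 1e-4, 1e-5)              # relaxation times (s)
w   <- 2*pi*10^seq(0, 5, by = 0.1)      # angular frequencies (rad/s)
Gp  <- G0 + sapply(w, function(wk) sum(Gi*(wk*ti)^2/(1+(wk*ti)^2)))
Gpp <-      sapply(w, function(wk) sum(Gi*(wk*ti)  /(1+(wk*ti)^2)))
matplot(w/(2*pi), cbind(Gp, Gpp), log = "xy", type = "l",
        xlab = "frequency (Hz)", ylab = "G', G'' (Pa)")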
§ REFERENCES
sills2007
Sills, S., Vorvolakos, K., Chaudhury, M., Overney, R.: Molecular origins of
elastomeric friction.
In E. Gnecco, E. Meyer (eds.) Nanotribology: friction and wear on the
atomic scale. Springer Verlag (2007)
Ariano1930
Ariano, R.: The coefficients of friction between rubber and various materials.
part ii - gripping friction of rubber belting.
India Rubber J 79, 56–58 (1930)
roth1942
Roth, F., Driscoll, R., Holt, W.: Frictional properties of rubber.
J Nat Bureau Standards 28, 439–462 (1942)
Thirion1946
Thirion, P.: Les coefficients d'adherence du caoutchouc.
Revue Generale du Caoutchouc 23, 101–106 (1946)
schallamach1963
Schallamach, A.: A theory of dynamic rubber friction.
Wear 6, 375–382 (1963)
bueche1959
Bueche, A., Flom, D.: Surface friction and dynamic mechanical properties of
rubber.
Wear 2, 168–182 (1959)
greenwood1958
Greenwood, J., Tabor, D.: The friction of hard sliders on lubricated rubber:
the importance of deformation losses.
Proc Phys Soc 71, 989–1001 (1958)
persson2010
Persson, B.: Rolling friction for hard cylinder and sphere on viscoelastic
solid.
Eur Phys J E 33, 327–333 (2010)
Grosch1963a
Grosch, K.: The relation between the friction and viscoelastic properties of
rubber.
Proc Roy Soc A 274, 21–39 (1963)
grosch1963b
Grosch, K.: Relation between the friction and viscoelastic properties of
rubber.
Nature 197, 858–859 (1963)
Schapery1978
Schapery, R.: Analytical models for the deformation and adhesion components of
rubber friction.
Tire Sci Tech 6, 3–47 (1978)
golden1980
Golden, J.: Hysteresis and lubricated rubber friction.
Wear 65, 75–87 (1980)
Perrson2006
Persson, B.N.J.: Contact mechanics for randomly rough surfaces.
Surface Sci Rep 61, 201–227 (2006)
persson2001
Persson, B.N.J.: Theory of rubber friction and contact mechanics.
J Chem Phys 115, 3840 (2001)
lorenz2011
Lorenz, B., Persson, B., Dieluweit, S., Tada, T.: Rubber friction: comparison of
theory with experiments.
Eur Phys J E 34, 129 (2011)
Huwiler2007
Huwiler, C., Kunzler, T., Textor, M., Voos, J., Spencer, N.: Functionalizable
nanomorphology gradients via colloidal self assembly.
Langmuir 23, 5929–5935 (2007)
kunzler2006
Kunzler, T., Drobek, T., Sprecher, C., Schuler, M., Spencer, N.: Fabrication of
material independent morphology gradients for high-throughput applications.
Appl Surf Sci 253, 2148–2153 (2006)
greenwood1966
Greenwood, J., Williamson, J.: Contact of nominally flat surfaces.
Proc Roy Soc A 295, 300–319 (1966)
nguyen2011
Nguyen, D., Paolino, P., Audry, M.C., Chateauminois, A., Fretigny, C.,
Chenadec, Y.L., Portigliatti, M., Barthel, E.: Surface pressure and shear
stress field within a frictional contact on rubber.
J Adhesion 87, 235–250 (2011)
Chateauminois2009
Chateauminois, A., Fretigny, C.: Local friction at a sliding interface between
an elastomer and a rigid spherical probe.
Eur Phys J E 27, 221–227 (2008)
chateauminois2010
Chateauminois, A., Fretigny, C., Olanier, L.: Friction and shear fracture of an
adhesive contact under torsion.
Phys Rev E 81, 026106 (2010)
Bowden1958
Bowden, F., Tabor, D.: The Friction and Lubrication of Solids.
Clarendon Press, Oxford (1958)
briscoe1981
Briscoe, B.: Wear of polymers: an essay on fundamental aspects.
Trib Int 14, 231–243 (1981)
landau1986
Landau, L., Lifshitz, E.: Theory of Elasticity. Third Edition.
Butterworth Heinemann (1986)
carbone2009
Carbone, G., Lorenz, B., Persson, B., Wohlers, A.: Contact mechanics and rubber
friction for randomly rough surfaces with anisotropic statistical properties.
Eur Phys J E 29, 275–284 (2009)
Polonski1999
Polonsky, I., Keer, L.: A numerical method for solving rough contact problems
based on the multi-level multi-summation and conjugate gradient techniques.
Wear 231, 206–219 (1999)
Allwood2005
Allwood, J.: Survey and performance assessment of solution methods for elastic
rough contact problems.
J Trib 127, 10–23 (2005)
johnson1985
Johnson, K., Greenwood, J., Higginson, J.: The contact of elastic regular wavy
surfaces.
Int J Mech Sci 27, 383–396 (1985)
shuangbiao2007
Shuangbiao, L., Diann, H., Chen, W.W., Wang, Q.: Tribological modeling:
Application of fast fourier transform.
Trib Int 40, 1284–1293 (2007)
gacoin2006
Gacoin, E., Fretigny, C., Chateauminois, A.: Measurement of the mechanical
properties of thin films mechanically confined within contacts.
Trib Lett 21, 245–252 (2006)
|
http://arxiv.org/abs/1701.07910v2 | 20170127002242 | Combining Envelope Methodology and Aster Models for Variance Reduction in Life History Analyses | [
"Daniel J. Eck",
"Charles J. Geyer",
"R. Dennis Cook"
] | stat.AP | [
"stat.AP",
"stat.ME"
] |
Combining Envelope Methodology and Aster Models for Variance Reduction in Life History Analyses
Daniel J. Eck
Department of Biostatistics, Yale School of Public Health.
daniel.eck@yale.edu
Charles J. Geyer and R. Dennis Cook
Department of Statistics, University of Minnesota
Precise estimation of expected Darwinian fitness, the expected lifetime number of offspring of an organism, is a central component of life history analysis. The aster model serves as a defensible statistical model for distributions of Darwinian fitness. It is equipped to incorporate the major life stages an organism passes through, each of which may separately affect Darwinian fitness.
Envelope methodology reduces asymptotic variability by establishing
a link between unknown parameters of interest and the asymptotic covariance
matrices of their estimators. It is known both theoretically and in
applications that incorporation of envelope methodology reduces asymptotic
variability. We develop an envelope framework, including a new envelope
estimator, that is appropriate for aster analyses.
The level of precision provided from our methods allows researchers to draw
stronger conclusions about the driving forces of Darwinian fitness from their
life history analyses than they could with the aster model alone.
Our methods are illustrated on a simulated dataset and a life history analysis
of Mimulus guttatus flowers is provided. Useful variance
reduction is obtained in both analyses.
Darwinian fitness; fitness landscape; envelope model; parametric bootstrap
§ INTRODUCTION
The estimation of expected Darwinian fitness
is a very important procedure in both biology and genetics.
The importance of this is not just limited to scientific disciplines, it is
important for public policy. With genetic theory and simulation studies,
<cit.> shows that, under certain conditions, a changing environment
leads to extinction of species. In a field study, <cit.> argued that
the predicted evolutionary response to predicted rates of climate change is
far too slow. In these papers, and all life history analyses of their kind,
expected Darwinian fitness is the response variable. The interesting
scientific conclusions are drawn from it.
In many life history analyses, values of expected Darwinian fitness are
plotted using a fitness landscape <cit.>. A fitness landscape is
the conditional expectation of Darwinian fitness given phenotypic trait values
considered as a function of those values. When fitness is the response
variable in a regression model and phenotypic traits are the covariates, the
fitness landscape is the regression function. Estimation of the fitness
landscape began with <cit.>.
They use ordinary least squares regression of fitness on phenotypes to
estimate the best linear approximation of the fitness landscape and quadratic
regression to estimate the best quadratic approximation. Here “best” means
minimum variance unbiased, as in the Gauss-Markov theorem. Their use of t
and F tests and confidence intervals requires the assumption that fitness
is conditionally homoscedastically normally distributed given phenotypic
trait values. This assumption is almost always grossly incorrect when one
uses a good surrogate for Darwinian fitness <cit.>.
Aster models <cit.> were designed to fix all of the problems of the
<cit.> approach and of all other approaches to life history analysis
<cit.>. The aster model is the state-of-the-art model for all
life history analyses in which the estimation of expected Darwinian fitness
is the primary goal. <cit.> show
various kinds of life history data for which aster models are necessary.
Assumptions for aster models are given in Section 2 below.
In this article we combine envelope methodology <cit.> with
aster models <cit.> to estimate the fitness landscape
<cit.> in life history analysis. The primary emphasis is that
this combination of methods estimates the fitness landscape with less
variability than is possible with aster models alone.
We first show how existing envelope estimators constructed from the 1D
algorithm <cit.> can reduce variability in estimation
of the fitness landscape. We then develop a new envelope estimator that
avoids the potential numerical pitfalls of the 1D algorithm.
Variance reduction is assessed using parametric bootstrap techniques
in <cit.>. These bootstrap algorithms account for
variability in model selection. Our methodology provides the most precise
estimation of expected Darwinian fitness to date.
Researchers using our methods can therefore draw stronger conclusions about
the driving forces of Darwinian fitness from their life history analyses.
In a life history analysis of M. guttatus flowers and a simulated
example, we show that our methodology leads to variance reduction in
estimation of expected Darwinian fitness when compared with analyses that
use aster models alone. We show that this variance reduction leads to sharper
scientific inferences about the potential causes of Darwinian fitness in the
M. guttatus life history analysis.
Our examples are fully reproducible, and the calculations necessary for their
reproduction are included in an accompanying technical report
<cit.>.
§ THE ASTER MODEL
Aster models are regular full exponential families. Parameters are
estimated by maximum likelihood. If Y is the response vector,
μ = E(Y) is the saturated model mean value parameter, and M
is the model matrix for an unconditional canonical affine submodel,
then the maximum likelihood estimate of μ satisfies
M^T μ̂ = M^T y <cit.>.
τ = M^T μ is the submodel mean value parameter
<cit.>. Likelihood ratio tests
for model comparison and confidence intervals for all parameters
are based on the usual asymptotics; Fisher information is provided by the R package <cit.>. In particular, for this article we need to know that τ̂ = M^T y is a minimum
variance unbiased estimator of the parameter it estimates and its
exact variance matrix is the Fisher information matrix for the
submodel canonical parameter β (the vector of regression
coefficients). That is, in the usual asymptotics of maximum likelihood
for this parameter the mean and variance are exact, not approximate; only
the normal distribution is approximate.
The aster model is a directed acyclic graphical model
<cit.> in which the joint density is
a product of conditional densities that are specified by the arrows depicted
in the graph. Lines that appear in the graph specify nodes which are
dependent. For example, an organism may have multiple possible paths in its life history and can only go down one of them <cit.>.
The aster model satisfies the following assumptions:
A1 The graph of arrows is acyclic.
A2 In the graph of lines every connected component is a complete graph,
which is called a dependence group.
A3 Every node in a dependence group with more than one node has the same
predecessor (there is an arrow from the predecessor to each
node in the group). Every dependence group consisting of exactly
one node has at most one predecessor.
A4 The joint distribution is the product of conditional distributions,
one conditional distribution for each dependence group.
A5 Predecessor is sample size, meaning each conditional distribution
is the distribution of the sum of N independent and identically
distributed random vectors, where N is the value of the
predecessor, the sum of zero terms being zero.
A6 The conditional distributions are exponential families having the
components of the response vector for the dependence group as
their canonical statistics.
Assumptions A5 and A6 mean for an arrow y_k → y_j
that y_j is the sum of independent and identically distributed random
variables from the exponential family for the arrow and there are y_k
terms in the sum (the sum of zero terms is zero). These assumptions
imply that the joint distribution of the aster model is an exponential family
<cit.>. Several of these assumptions have a clear biological meaning as well. Assumptions A1 through A3 restrict an individual from revisiting life stages that have come to pass. Assumption A5 implies that dead individuals remain dead and have no offspring through the course of the study.
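To make the predecessor structure of A1 through A3 concrete, the following base-R sketch encodes a hypothetical three-node life-history graph by its predecessor vector; the graph and families shown are illustrative only and are not the graphs analyzed in this paper.

# Encoding a simple aster graph by its predecessor vector: node j has predecessor
# pred[j], with 0 denoting the root. The three-node chain below (survival ->
# flowering -> seed count) is a hypothetical example, not a graph from the paper.
nodes <- c("survived", "flowered", "seeds")
pred  <- c(0, 1, 2)            # arrows: root -> 1, 1 -> 2, 2 -> 3
fam   <- c("bernoulli", "bernoulli", "poisson")  # one exponential family per arrow
# Assumption A1 (acyclicity) holds if every predecessor index is smaller than
# the node's own index once nodes are listed in a topological order:
stopifnot(all(pred < seq_along(pred)))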
These aster models are saturated, having one parameter per component
of the response vector, and are not useful. Hence, as with linear and
generalized linear models, canonical affine submodels are used with change
of parameter φ = a + M β, where φ is the saturated model
parameter vector (linear predictor in the terminology of generalized linear
models), β is the submodel parameter (“coefficients” in R
terminology), a is the offset vector, and M is the model matrix.
The aster submodel has log likelihood
l(β) = ⟨ M^TY, β⟩ - c(a + Mβ)
where Y is the response vector and c(·) is the cumulant function of
the exponential family.
There are three parameterizations of interest in the aster analyses we consider (see <cit.> and <cit.> for more detail on aster model parameterizations). These parameterizations are
1) the aster submodel canonical parameter vector β∈ℝ^p,
2) the aster submodel mean-value parameter vector τ∈ℝ^p,
3) the saturated aster model mean-value parameter vector μ∈ℝ^m, where
m is the number of individuals sampled multiplied by the number of nodes in
the aster graph.
These three parameterizations are all linked via invertible 1-1
transformations when the model matrix M is of full rank.
The usual asymptotics of maximum likelihood estimation give
√n(τ̂ - τ) d⟶
N(0, Σ),
where Σ = Var(M^T Y) is the Fisher information matrix associated with
the canonical parameter vector β. The maximum likelihood estimator of
β is asymptotically normal with variance given by Σ^-1.
From (<ref>) and the delta method we can obtain the
asymptotic distribution for any differentiable function of τ̂.
The asymptotic distribution for a differentiable function g of τ̂
is
√n{g(τ̂) - g(τ)}d⟶
N{0, ∇ g(τ)Σ∇ g(τ)^T}.
In particular, the asymptotic distribution of estimated expected Darwinian
fitness is of interest. Let h(μ) be expected Darwinian fitness.
Writing β = f(τ) for the map carrying τ to β, the relation μ = ∇ c(a + M β) implies that
g(τ) = h[∇ c{a + M f(τ) }]
is expected Darwinian fitness as a function of τ and is differentiable
if h is differentiable.
The estimator g(τ̂) has asymptotic distribution given by
(<ref>). We have the potential to do better through the
incorporation of envelope methodology.
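As an illustration of (<ref>), the following R sketch computes a delta-method standard error for a smooth functional g of τ̂ by numerical differentiation; the function g, the estimate τ̂, and its covariance matrix below are toy stand-ins for the quantities an aster model fit would supply.

# Delta-method standard error for a smooth function g of tauhat, with Sigma the
# estimated covariance matrix of tauhat: Var{g(tauhat)} ~ grad' Sigma grad.
g      <- function(tau) exp(tau[1]) + tau[2]^2
tauhat <- c(0.3, -1.1)
Sigma  <- matrix(c(0.05, 0.01, 0.01, 0.08), 2, 2)
eps  <- 1e-6
grad <- sapply(seq_along(tauhat), function(i) {
  e <- replace(numeric(length(tauhat)), i, eps)
  (g(tauhat + e) - g(tauhat - e)) / (2 * eps)   # central-difference gradient
})
se <- sqrt(drop(t(grad) %*% Sigma %*% grad))
cat("g(tauhat) =", g(tauhat), " SE ~", signif(se, 3), "\n")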
§ INCORPORATION OF ENVELOPE METHODOLOGY
Aster model estimates of expected Darwinian fitness may be too variable to
be useful, and in consequence we may not be able to statistically distinguish
estimates of expected Darwinian fitness over the
fitness landscape.
We address this problem through the incorporation of envelope
methodology into the aster modelling framework.
Envelope models were developed originally as a variance reduction tool for
the multivariate linear regression model. In this article we focus on envelope
methodology for general vector-valued parameter estimation <cit.>.
Envelope methodology has the potential to reduce the variability of any √n-consistent, asymptotically normal estimator.
We now define the envelope subspace.
Let υ be a parameter of interest and suppose that υ̂ is
a √n-consistent estimator of υ with asymptotic covariance
matrix Σ_υ,υ. Let
𝒯 = span(υ) = {aυ : a ∈ℝ}.
The envelope subspace ℰ_Σ_υ,υ(𝒯) is defined as the intersection of all reducing subspaces of Σ_υ,υ that contain 𝒯 (a reducing subspace is a sum of eigenspaces if all eigenvalues of Σ_υ,υ have multiplicity one).
The envelope space satisfies both
𝒯⊂ℰ_Σ_υ,υ(𝒯),
Σ_υ,υ
= P_ℰΣ_υ,υP_ℰ
+ Q_ℰΣ_υ,υQ_ℰ;
where P_ℰ is the projection into the envelope subspace and
Q_ℰ is the projection into the orthogonal complement. In
coordinate form, these two envelope conditions are
𝒯⊂span(Γ),
Σ_υ,υ
= ΓΩΓ^T + Γ_oΩ_oΓ_o^T;
where (Γ,Γ_o) is a partitioned orthogonal matrix, the columns of
Γ are a basis for ℰ_Σ_υ,υ(𝒯), and
the dimensions of the positive definite matrices Ω and Ω_o are
such that the matrix multiplications are defined. The quantities
P_ℰΣ_υ,υP_ℰ and
ΓΩΓ^T are often referred to as 'material information'
in the envelope literature <cit.> since they represent the
portion of variability that is necessary for the task of estimating
υ. Similarly,
Q_ℰΣ_υ,υQ_ℰ
and Γ_oΩ_oΓ_o^T are referred to as 'immaterial information'
since they represent extraneous variability.
Intuitively, the envelope estimator reduces variability in estimation at no
cost to consistency. An illuminating depiction and explanation of how an
envelope analysis increases efficiency in multivariate linear regression
problems was given by <cit.>. The same intuition applies
to the envelope methodology of <cit.>.
In applications, there is a cost to estimate
u = dim{ℰ_Σ_υ,υ(𝒯)}
and Γ. With the basis matrix Γ estimated, we can then assess the
variance reduction of the envelope estimator through the parametric bootstrap.
The 1D algorithm <cit.> estimates a basis matrix
Γ for ℰ_Σ_υ,υ(𝒯) at a
user-supplied envelope dimension u. The estimate of Γ is obtained by
providing Σ_υ,υ and
υ̂υ̂^T as inputs into the 1D algorithm. The
resulting estimator of Γ obtained from the 1D algorithm,
Γ_u, gives a √n consistent estimator
P_ℰ̂ of the projection onto the envelope subspace
P_ℰ <cit.>.
The 1D algorithm can be used to estimate u consistently
<cit.>.
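For concreteness, the following R sketch implements the per-direction objective that the 1D algorithm optimizes (the function denoted Q_n in the proof in Section 4 below, up to the factor n); the matrices M and U below are toy placeholders for Σ_υ,υ and υ̂υ̂^T.

# The per-direction objective maximized by the 1D algorithm (Q_n of Section 4,
# up to the factor n). M plays the role of Sigma_{upsilon,upsilon} and U of
# upsilonhat upsilonhat'. The inputs below are toy placeholders.
Qn <- function(g, M, U) {
  -0.5*log(drop(t(g) %*% M %*% g)) -
   0.5*log(drop(t(g) %*% solve(M + U) %*% g)) +
   log(drop(crossprod(g)))
}
set.seed(1)
M <- crossprod(matrix(rnorm(25), 5, 5))          # a positive-definite stand-in
u <- rnorm(5); U <- tcrossprod(u)                # rank-one "signal" matrix
opt <- optim(rnorm(5), function(g) -Qn(g, M, U)) # maximize over directions g
g1  <- opt$par / sqrt(sum(opt$par^2))            # first envelope direction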
Define υ̂_1D = P_ℰ̂υ̂ to be the envelope estimator of υ, where υ are the parameters that link the estimation of expected Darwinian fitness to covariates of interest. We write τ = (γ^T, υ^T)^T where γ are aster model parameters not linking covariates to the estimation of Darwinian fitness. The envelope estimator of τ is given as
τ̂_1D = ([ γ̂; υ̂_1D ]) = ([ I 0; 0 P_ℰ̂ ]) M^T Y = M_1D^T Y, M_1D = M([ I 0; 0 P_ℰ̂ ]).
The model matrix M_1D corresponds to the aster model that incorporates the envelope structure.
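The construction of M_1D can be illustrated with a short R sketch; the dimensions (4 nuisance parameters, 5 parameters of interest, envelope dimension u = 3) and all matrices below are toy placeholders.

# Building the envelope model matrix: M_1D = M %*% blockdiag(I, P), where P
# projects onto the estimated envelope subspace. M, Gamma, and the split into
# 4 nuisance + 5 fitness-relevant coordinates are toy placeholders.
M     <- matrix(rnorm(90), 10, 9)            # stand-in saturated-by-submodel matrix
Gamma <- qr.Q(qr(matrix(rnorm(15), 5, 3)))   # orthonormal basis, u = 3
P     <- Gamma %*% t(Gamma)                  # projection onto the envelope
B     <- rbind(cbind(diag(4), matrix(0, 4, 5)),
               cbind(matrix(0, 5, 4), P))
M_1D  <- M %*% B                             # model matrix with envelope structure
qr(M_1D)$rank                                # rank drops to 4 + u = 7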
The envelope estimator τ̂_1D is a maximum likelihood estimator of τ for the aster model with model matrix M_1D.
We have
l_1D(β) = ⟨ Y, M_1Dβ⟩ - c(M_1Dβ)
= ⟨ M_1D^T Y, β⟩ - c(M_1Dβ)
and
∇_β l_1D(β) = M_1D^T Y - ∇_β c(M_1Dβ).
Setting ∇_β l_1D(β) = 0 and solving for β yields
∇_β c(M_1Dβ)|_β = β̂ = M_1D^T Y = τ̂_1D.
This proposition justifies the use of these transformations to switch between maximum likelihood estimators of the different aster model parameterizations.
We compare envelope dimensions u by transforming envelope estimators of τ to envelope estimators of β and then evaluating the log likelihood at the envelope estimator of β.
The randomness inherent in the estimated dimension û is non-problematic in most applications. This is because the 1D algorithm provides a √n-consistent envelope estimator of υ even when u is estimated. Inferences about aster model parameters implicitly assume that n is large enough for the asymptotic normality to be a good approximation for the distribution of the maximum likelihood estimators.
We then compute an envelope estimator of expected Darwinian fitness g(τ) using the aster model with model matrix M_1D. Note that the model matrix M_1D is not of full column rank. Therefore the transformations used to switch between aster model parameterizations are not 1-1. In particular, many distinct estimates of β map to τ̂_1D. Each of these distinct estimated values of β maps to the same estimate of μ, which in turn maps to a common estimate of expected Darwinian fitness. The loss of 1-1 transformations is not an issue in this case.
Our estimator of estimated expected Darwinian fitness
is given by <cit.>, with g(·) replacing
t(·). Steps for this algorithm are given in Algorithm 1.
When the top-level of our bootstrap procedure (Steps 1 through 4 in
Algorithm 1) has run for B iterations, we obtain the envelope estimator
ĝ_1D = 1/B∑_b=1^B g{τ̂_1D^(b)}.
The envelope estimator ĝ_1D implicitly behaves as a weighted average, with the weights reflecting the likelihood of observing a particular estimated value of u and of the resulting estimators P_ℰ̂ and τ̂_1D. The value of u is estimated with either the Bayesian information criterion or the Akaike information criterion at every iteration of the parametric bootstrap. The intuition is that the averaging in ĝ_1D will smooth out variability due to estimating u with our chosen model selection criterion.
Variability of ĝ_1D is estimated using the double
bootstrap technique in <cit.>, with steps shown below in
Algorithm 1. This bootstrap technique
accounts for all estimation error, including model selection error.
The reason it accounts for all estimation error is that all estimation, including model selection, is done in each iteration of the bootstrap
(nothing estimated is ever treated as known in bootstrap iterations).
Algorithm 1. Parametric bootstrap for assessing the variability of
the envelope estimator ĝ_1D:
1. Fit the aster model to the data and obtain υ̂
and Σ_υ,υ from the aster model fit.
2. Choose a model selection criterion. Compute the envelope
estimator of υ in the original sample, given as
υ̂_1D = P_ℰ̂υ̂ where
P_ℰ̂ is obtained from the 1D algorithm and the chosen
model selection criterion.
3. Perform a parametric bootstrap by generating samples from the
distribution of the aster submodel evaluated at
τ̂_1D = (γ̂^T, υ̂_1D^T)^T. For b=1, …, B
of the procedure:
(3a) Compute τ̂^(b) and
Σ_υ,υ^(b) from the aster model fit to
the resampled data.
(3b) Compute P_ℰ̂^(b) as done in Step 2.
(3c) Compute
τ̂_1D^(b) = {γ̂^(b)T, υ̂_1D^(b)T}^T
and g{τ̂_1D^(b)}.
4. The bootstrap estimator of expected Darwinian
fitness is the average of the envelope estimators computed in Step 3c.
This completes the first part of the bootstrap procedure.
5. For k = 1, …, K, and for each b=1, …, B, we:
(5a) Generate data from the distribution of the aster submodel
evaluated at τ̂_1D^(b).
(5b) Perform Steps 3a through 3c with respect to the dataset
obtained in Step 5a to calculate both
τ̂_1D^(b)(k) and g{τ̂_1D^(b)(k)}.
6. Compute both ĝ_1D and the standard deviation
in <cit.>.
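The control flow of Steps 1 through 4 can be illustrated by the following runnable R miniature, in which a Poisson generalized linear model stands in for the aster fit and no envelope projection is applied; it is meant only to show the structure of the first level of the bootstrap, not the actual aster computations.

# A runnable miniature of Algorithm 1's first level, with a Poisson GLM standing
# in for the aster fit and the identity in place of the envelope projection.
set.seed(1)
x <- runif(50); y <- rpois(50, exp(0.5 + 1.2*x))
fit0 <- glm(y ~ x, family = poisson)          # Step 1 stand-in
beta_env <- coef(fit0)                        # Step 2 stand-in (no projection here)
g <- function(beta) exp(beta[1] + beta[2])    # a smooth functional, like fitness
B <- 200; g_boot <- numeric(B)
for (b in seq_len(B)) {                       # Step 3
  y_b   <- rpois(50, exp(beta_env[1] + beta_env[2]*x))  # simulate at the estimate
  fit_b <- glm(y_b ~ x, family = poisson)
  g_boot[b] <- g(coef(fit_b))
}
c(estimate = mean(g_boot), se = sd(g_boot))   # Step 4 analogue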
§ A DIRECT ENVELOPE ESTIMATOR USING REDUCING SUBSPACES
We propose a new way of constructing envelope estimators provided that the
eigenvalues of Σ_υ,υ have multiplicity one. In this
section, envelope estimators are constructed directly from the reducing
subspaces of Σ_υ,υ.
Uniqueness of the eigenvalues of Σ_υ, υ
implies that its reducing subspaces are sums of its eigenspaces.
Let 𝒮 be a reducing subspace of Σ_υ,υ. Define Γ_𝒮 and P_𝒮 as a basis matrix for 𝒮 and the projection onto 𝒮, respectively. Let 𝒮̂ be the estimator of the reducing subspace 𝒮, obtained by searching over sums of the one-dimensional eigenspaces of Σ_υ,υ.
In applications, the eigenvalues of Σ_υ,υ are almost always unique. Define Γ_𝒮̂ and P_𝒮̂ = Γ_𝒮̂Γ_𝒮̂^T as estimators of Γ_𝒮 and P_𝒮, respectively. The basis matrix Γ_𝒮̂ is constructed from the eigenvectors of Σ_υ,υ.
We now define the envelope estimator of υ constructed from
reducing subspaces.
The envelope estimator of υ constructed from the reducing
subspaces is defined to be
υ̂_env = P_𝒮̂υ̂.
The reducing subspaces of the estimated covariance matrix are √n-consistent estimators of the corresponding reducing subspaces of Σ_υ,υ. Therefore Γ_𝒮̂, P_𝒮̂, and the corresponding estimator υ̂_env = P_𝒮̂υ̂ are √n-consistent estimators of Γ_𝒮, P_𝒮, and υ, respectively.
The envelope estimator of τ constructed from reducing subspaces is given by
τ̂_env = ([ γ̂; υ̂_env ]) = ([ I 0; 0 P_𝒮̂ ]) M^T Y = M_env^T Y, M_env = M([ I 0; 0 P_𝒮̂ ]).
The model matrix M_env corresponds to the aster model that incorporates the envelope structure obtained from the reducing subspaces of Σ_υ,υ. We have a result similar to Proposition <ref> for the aster model with model matrix M_env.
The envelope estimator τ̂_env is a maximum likelihood estimator of τ for the aster model with model matrix M_env.
The proof of Proposition <ref> follows the same steps as the proof
for Proposition <ref>.
There is a close connection between envelope estimation using reducing
subspaces and envelope estimation using the 1D algorithm. In the population,
𝒮 = ℰ_Σ_υ,υ(𝒯).
The connection between both estimation methods exists in finite samples as
seen in Theorem <ref>. In preparation, define orthogonal matrices
O_u = (Γ_u,Γ_uo),
O_
= (Γ_,
Γ_o),
and O = O_O_u^T.
The estimated matrices Γ_𝒮̂, Γ_𝒮̂ o, Γ_u, and Γ_uo converge in probability to their population counterparts Γ_𝒮, Γ_𝒮 o, Γ_u, and Γ_uo, respectively. Now Γ_𝒮^TΓ_u and Γ_𝒮 o^TΓ_uo are both 0-1 valued rotation matrices and Γ_𝒮^TΓ_uo = Γ_𝒮 o^TΓ_u = 0. These facts imply that O converges in probability to a 0-1 valued rotation matrix. We will assume that O p→ I without loss of generality.
The basis matrix Γ_𝒮̂ is the output of the 1D
algorithm with inputs
M = OΣ_υ,υO^T
and
U = Oυ̂υ̂^TO^T
at dimension u.
Let M_2 = Σ_υ,υ and
U_2 = υ̂υ̂^T.
Similar to the proof of <cit.>, let
Q_n(g) = -n/2log(g^TOM_2O^Tg)
- n/2log[g^T{O(M_2 + U_2)O^T}^-1g]
+ n log(g^T g)
= -n/2log(g^TOM_2O^Tg)
- n/2log{g^TO(M_2 + U_2)^-1O^Tg}
+ n log(g^TOO^T g).
Now let v̂_k, k = 1, …, u, be the kth column of Γ_𝒮̂ and define A_k ∈ℝ^p× p to be the diagonal matrix of 0's with 1's occupying the first k diagonal entries. Then
Q_n(v̂_k)
= -n/2log(v̂_k^TOM_2O^Tv̂_k)
- n/2log{v̂_k^TO(M_2
+ U_2)^-1O^Tv̂_k}
+ n log(v̂_k^TO^TOv̂_k)
= - n/2log(A_k O_uM_2O_u^TA_k)
- n/2log{A_kO_u(M_2
+ U_2)^-1O_u^TA_k}
+ n log(A_kO_u^TO_u A_k)
= - n/2log(ĝ_uk^TM_2ĝ_uk)
- n/2log{ĝ_uk^T(M_2
+ U_2)^-1ĝ_uk}
+ n log(ĝ_uk^Tĝ_uk)
where ĝ_uk is the kth column of Γ_u,
the output of the 1D algorithm with M_2
and U_2 as inputs. Therefore v̂_k is a maximizer
of Q_n(g), and this completes the proof.
Theorem <ref> in combination with <cit.> allows us to estimate 𝒮 consistently. Thus the variability associated with the estimation of 𝒮̂ and υ̂_env decreases as n→∞. However, in practical applications correct model selection cannot be guaranteed. Therefore the envelope estimator of expected Darwinian fitness g(τ̂_env) has an extra source of variability due to model selection uncertainty.
We develop a double bootstrap procedure with steps similar to those in
Algorithm 1 to account for variability in model selection.
The first level of the bootstrap procedure provides the
estimator of expected Darwinian fitness,
ĝ_env = 1/B∑_b=1^B g{τ̂_env^(b)}.
The same model selection criterion is used to select the reducing subspace used to construct τ̂_env^(b) at every iteration b = 1, …, B.
The second level of this bootstrap procedure estimates the variability
of (<ref>). The steps for this algorithm are provided in <cit.>.
The utility of our double bootstrap procedure is shown in Section 5.1 where, in
that example, there is considerable disagreement between model selection
criteria.
When k is small, the reducing subspace estimator υ̂_env is preferable to the 1D estimator υ̂_1D.
At any iteration of the 1D algorithm, minimizers of the objective function
stated in <cit.> are pulled towards reducing subspaces of
Σ_υ,υ. This objective function is non-convex
and contains potentially many local minima. The optimizations conducted within
the 1D algorithm are sensitive to starting values and can get stuck at these
local minima. This undermines the 1D algorithm, since its justification requires that users find global minima. Unlike the 1D algorithm, the
reducing subspace approach does not involve any optimization routines.
However, when k is moderately large, the computation of all candidate envelope estimators at each reducing subspace is too computationally intensive: there are 2^k - 1 possible reducing subspaces in nontrivial problems. The 1D algorithm may still be fast when k is moderately large, since it requires only k-1 optimizations in nontrivial problems.
In most aster applications k is small since data is obtained through
expensive collection methods and fitness landscapes are
low-dimensional <cit.>.
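The enumeration over reducing subspaces can be illustrated with a short R sketch; Σ_υ,υ and υ̂ below are random toy inputs, and each nonempty subset of eigenvectors yields one candidate estimator.

# Enumerating candidate reducing subspaces of a covariance matrix with distinct
# eigenvalues: each nonempty subset of eigenvectors spans one of the 2^k - 1
# candidates, and each yields a projected estimator P_S upsilonhat. Toy inputs.
set.seed(2)
k <- 4
Sigma   <- crossprod(matrix(rnorm(k*k), k, k))   # stand-in for Sigma_{upsilon,upsilon}
ups_hat <- rnorm(k)                              # stand-in for upsilonhat
V <- eigen(Sigma)$vectors                        # eigenvectors (eigenvalues distinct a.s.)
subsets <- unlist(lapply(seq_len(k), function(m)
  combn(k, m, simplify = FALSE)), recursive = FALSE)
length(subsets)                                  # 2^k - 1 = 15 candidates
cands <- lapply(subsets, function(s) {
  P <- V[, s, drop = FALSE] %*% t(V[, s, drop = FALSE])
  P %*% ups_hat                                  # candidate envelope estimator
})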
§ EXAMPLES
§.§ Simulated Data
A population of 3000 organisms was simulated to form the dataset used in this
aster analysis. These data were generated according to the graphical
structure appearing in panel A of Figure <ref>. There are two
covariates (z_1,z_2) associated with Darwinian fitness and the aster model
selected by the likelihood ratio test is a full quadratic model with respect
to these covariates.
We partition τ into (γ^T, υ^T)^T, where γ∈ℝ^4 are nuisance parameters and υ∈ℝ^5 are relevant to the estimation of expected Darwinian fitness. Here, υ∈ℝ^5 because our model is
full quadratic in z_1 and z_2. In this example, the true reducing subspace
is the space spanned by the first and fourth eigenvectors of the covariance
matrix of the parameters of interest estimated from the original data.
We begin by considering envelope estimators constructed using the 1D algorithm.
The Akaike information criterion and the Bayesian information criterion
both select u = 5. Since u = 5 is the full dimension of υ, the envelope estimator coincides with the maximum likelihood estimator, and envelope methods are not interesting in this case.
We now consider envelope estimators constructed from reducing subspaces.
In the original sample,
the Bayesian information criterion
selects the reducing subspace that is the sum of the first, fourth,
and fifth eigenspaces of Σ_υ,υ numbered in
order of decreasing eigenvalues.
This suggests that the dimension of the envelope space is u = 3. The 1D algorithm and the reducing subspace approach are in disagreement; consistency of model selection is not helpful in this application. We turn to the double parametric bootstrap to estimate expected Darwinian fitness and the asymptotic variability of ĝ_env. The Bayesian information criterion is used to select the reducing subspace at every iteration of the first level of the bootstrap.
The results are seen in Table <ref>.
Table <ref> shows seven individuals that had high values
of estimated expected Darwinian fitness. Each individual has a unique set of
traits. The first two columns display the envelope estimator of
expected Darwinian fitness and its bootstrapped standard error.
The maximum likelihood estimator of expected Darwinian fitness and its
bootstrapped standard error are displayed in the third and fourth columns
respectively. The ratios of bootstrapped standard errors for
ĝ_MLE to ĝ_env are displayed in the final
column. We can see that all of the ratios are greater than 1, which indicates
that the envelope estimator of expected Darwinian fitness is less variable
than the maximum likelihood estimator.
Contour plots of the ratios of estimated standard errors are displayed in the
technical report <cit.>. These contour plots show that the envelope
estimator of expected Darwinian fitness is less variable than the maximum
likelihood estimator for the majority of the observed data. The region where
the envelope estimator is less variable includes the values of z_1 and z_2
that maximize estimated expected Darwinian fitness. Variance reduction is also
obtained when we use the reducing subspace suggested by the Akaike information
criterion. This is also shown in <cit.>.
§.§ M. guttatus aster analysis
The yellow monkeyflower M. guttatus has been and remains a well-studied flower <cit.>.
M. guttatus is a species which comprises many morphologically variable
populations growing in moist places such as stream banks, meadows and springs
over a range that extends from the Aleutian Islands to Mexico and from the
California coast to the Rocky Mountains <cit.>. The lifecycle of the
individual M. guttatus flowers, for our life history analysis, is
depicted in panel B of Figure <ref>.
<cit.> performed a life history analysis of M. guttatus using
aster models. One of their interests was to determine which levels of
genetic background, field site, inversion orientation, and ecotype of the
flower are associated with high Darwinian fitness. We show that the set of
candidate trait values thought to maximize expected Darwinian fitness is
smaller when envelope methodology is incorporated.
<cit.> collected measurements on 2313 M. guttatus.
We fit a linear fitness landscape to this data.
The parameters υ∈ℝ^6 are relevant to the estimation of expected
Darwinian fitness. In the original sample, the Bayesian information criterion
leads to a selection of a reducing subspace that is the sum of all eigenspaces
of Σ_υ,υ with the exception of the fourth
and fifth eigenspaces. The parametric double bootstrap procedure outlined in
Sections 3 and 4 is used to estimate the variability of ĝ_env, where the Bayesian information criterion is used to select the reducing subspace at every iteration of
the first level of the bootstrap.
Table <ref> shows the results for seven individuals that have
high values of estimated expected Darwinian fitness through maximum
likelihood and envelope estimation. Each individual has a unique set of
traits. We see that both methods agree on the trait values that are expected
to maximize expected Darwinian fitness.
We also see that all of the ratios are greater than 1.
More importantly, this variance reduction implies more precise inference
in this life history analysis. For example, the envelope estimator can
statistically distinguish (α = 0.05, unadjusted for multiple
comparisons) the second row of Table <ref> from the
fifth row of Table <ref>.
The combination of envelope methodology with the aster model framework allowed us to consider a smaller set of traits associated with high Darwinian fitness.
§ SOFTWARE
This paper is accompanied by an R package
<cit.>, which requires the two R packages for aster models:
<cit.> and
<cit.>, and also a technical report <cit.> that
reproduces the examples in this paper and shows how functions in the
package are used.
§ DISCUSSION
One could think to perform envelope methodology with respect to the regression coefficients β instead of τ. However, β is not well-defined: one can shift β with an arbitrarily chosen offset vector without changing the value of the mean-value parameters τ and μ. In addition, whenever we have categorical predictors, R software automatically drops one category when there is an intercept in the formula, but which category it drops is arbitrary, and changing which is dropped changes β. Envelope methodology is not invariant to this form of arbitrary shifting.
To the best of our knowledge, the exponential family regression applications
in the envelope model literature exclusively seek inference about β
because this well-definedness issue is not a problem in those applications
<cit.>.
The application of envelope methods to aster models is therefore outside of
the scope of previous applications to exponential family models. Additionally,
the methods in this paper can be extended to functions of parameters in
generalized linear regression models. Aster models are a generalization
of generalized linear regression models <cit.>.
The consequences of potential model selection errors served as the motivation
for the implementation of the bootstrap procedure in <cit.>.
In that article, inferences are only given for
a canonical parameter vector in a multivariate linear regression model.
We applied the <cit.> bootstrap procedures to alleviate
possible model selection concerns. However, this particular choice of a
bootstrap procedure is not without flaws. <cit.> mentions that Efron
does not derive the asymptotic distribution of the final estimator.
The literature has not reached a consensus
on the appropriate bootstrap procedure to be implemented when bootstrapping
depends on data-driven model selection.
As the literature currently stands, <cit.> provides a reasonable
solution to the problem of potential model selection errors in the application
of envelope methodology to aster models. The parametric bootstrap does not
rely on asymptotic normality since it simulates the exact sampling
distribution of the estimator for some parameter value, and the double
bootstrap simulates the exact sampling distribution of the estimator for a
long list of parameter values <cit.>.
Our new envelope estimator does not involve any non-convex optimization
routines that are both sensitive to starting values and have potential
problems with local minima. These computational problems can be detrimental
to the performance of the 1D algorithm. The underlying theory of the 1D
algorithm justifies the consistency properties of our new envelope estimator.
In envelope modelling problems with a small number of
parameters of interest the envelope estimator constructed directly from
reducing subspaces is preferred since it possesses the same strengths as
the 1D algorithm without its potential numerical pitfalls. However our
estimator is currently expensive to compute in problems with moderately large p.
In aster analyses p is typically small.
In many life history analyses, specific trait values which are estimated to
produce the highest expected Darwinian fitness are of interest. It is common
practice to only report such trait values <cit.>. Such reporting
ignores the variability associated with the estimation of expected Darwinian
fitness. There are likely many trait values having estimated expected
Darwinian fitness that is statistically indistinguishable from the reported
values. Our methodology addresses this concern directly. The potential set of
candidate traits associated with high values of expected Darwinian fitness is
smaller when envelope methodology is combined with the aster modelling framework, as seen in <cit.>.
Researchers using our methods will have the potential to make stronger
inferences about expected Darwinian fitness through our variance reduction
techniques.
§ ACKNOWLEDGEMENTS
Daniel J. Eck's research is supported by NIH/NIHCD grant 1DP2HD091799-01.
We would like to thank David B. Lowry for providing the dataset used in
Example 2, Xin Zhang for the code that implements the 1D algorithm, and
Forrest W. Crawford for helpful discussion that led to the strengthening of
this paper. We would also like to especially thank Amber Eule-Nashoba for
helpful comments on the technical report.
|
http://arxiv.org/abs/1701.07461v5 | 20170125194703 | Lower bounds on the quantum Fisher information based on the variance and various types of entropies | [
"Geza Toth"
] | quant-ph | [
"quant-ph"
] |
|
http://arxiv.org/abs/1701.07947v2 | 20170127051753 | Variation of canonical height and equidistribution | [
"Laura DeMarco",
"Niki Myrto Mavraki"
] | math.NT | [
"math.NT",
"math.DS"
] |
Let π : E→ B be an elliptic surface defined over a number field K, where B is a smooth projective curve, and let P: B → E be a section defined over K with canonical height ĥ_E(P)≠0. In this article, we show that the function t ↦ĥ_E_t(P_t) on B(ℚ̄) is the height induced from an adelically metrized line bundle with non-negative curvature on B. Applying theorems of Thuillier and Yuan, we obtain the equidistribution of points t ∈ B(ℚ̄) where P_t is torsion, and we give an explicit description of the limiting distribution on B(ℂ). Finally, combined with results of Masser and Zannier, we show there is a positive lower bound on the height ĥ_A_t(P_t), after excluding finitely many points t ∈ B, for any “non-special" section P of a family of abelian varieties A → B that splits as a product of elliptic curves.
Variation of canonical height and equidistribution
Laura De Marco and Niki Myrto Mavraki
December 30, 2023
==================================================
§ INTRODUCTION
Suppose E → B is an elliptic surface defined over a number field K, so B is a smooth projective curve and all but finitely many fibers E_t, t∈ B(K), are smooth elliptic curves. We let ĥ_E denote the Néron-Tate canonical height of E viewed as an elliptic curve over the function field k = K(B); we let ĥ_E_t denote the canonical height on the fibers for (all but finitely many) t∈ B(ℚ̄).
Suppose that P: B → E is a section defined over K for which ĥ_E(P) ≠ 0; in particular, it is not the case that P_t is torsion in E_t for all t. Tate showed that the function
t ↦ĥ_E_t(P_t)
is a Weil height on B(ℚ̄), up to a bounded error <cit.>. More precisely, there exists a divisor D_P ∈Pic(B) ⊗ℚ of degree equal to ĥ_E(P) so that
ĥ_E_t(P_t) = h_D_P(t) + O(1),
where h_D_P is a Weil height on B(ℚ̄) associated to D_P. In a series of three articles <cit.>, Silverman refined statement (<ref>) by analyzing the Néron decomposition of the canonical height on the fibers
ĥ_E_t(P_t) = ∑_v ∈ M_K n_v λ̂_E_t, v(P_t)
where M_K denotes the set of places of the number field K, and n_v are the integers appearing in the product formula ∏_v∈ M_K |x|_v^n_v = 1 for all x ∈ K^*.
In this article, we explain how Silverman's conclusions about the local functions λ̂_E_t, v(P_t) are precisely the input needed to show that t↦ĥ_E_t(P_t) is a “good" height function on the base curve B, from the point of view of equidistribution. Combining his work with methods from complex dynamics, as in <cit.>, and the inequalities of Zhang on successive minima <cit.>, we prove:
Let K be a number field and k= K(B) for a smooth projective curve B defined over K. Fix any elliptic surface E → B defined over K and point P ∈ E(k) satisfying ĥ_E(P) ≠0. Then
h_P(t) := ĥ_E_t(P_t),
for t with smooth fibers, is the restriction of a height function on B(ℚ̄) induced from an adelically metrized ample line bundle ℒ̄, with continuous metrics of non-negative curvature, satisfying
h_P(B) := c_1(ℒ̄)^2/(2 deg c_1(ℒ)) = 0.
Theorem <ref> implies that our height function on B satisfies the hypotheses of the equidistribution theorems of Thuillier and Yuan for points of small height on curves <cit.>, and we deduce the following:
Let K be a number field and k= K(B) for a smooth projective curve B defined over K. Fix any elliptic surface E → B defined over K and point P ∈ E(k) satisfying ĥ_E(P) ≠0. There is a collection of probability measures μ_P = {μ_P,v: v ∈ M_K} on the Berkovich analytifications B^an_v such that for any infinite, non-repeating sequence of t_n ∈ B(ℚ̄) such that
ĥ_E_t_n(P_t_n) → 0
as n→∞, the discrete measures
1/|Gal(K̄/K) · t_n|∑_t ∈Gal(K̄/K) · t_nδ_t
converge weakly on B^an_v to the measure μ_P,v at each place v of K.
The measures μ_P,v of Corollary <ref> are not difficult to describe, at least at the archimedean places. At each archimedean place v, there is a canonical positive (1,1)-current T_v on the surface E(ℂ) (with continuous potentials away from the singular fibers) which restricts to the Haar measure on each smooth fiber E_t(ℂ). The measure μ_P,v on B(ℂ) is just the pull-back of this current by the section P. Moreover, at every place, the measure μ_P,v is the Laplacian of the local height function λ̂_E_t,v(P_t), away from its singularities. We give more details about (and a dynamical perspective on) the construction of the current T_v in Section <ref>.
As a consequence of Theorem <ref>, and combined with the work of Masser and Zannier <cit.>, we obtain the so-called Bogomolov extension of their theorems. Fix integer m≥ 2, and suppose that E_i → B is an elliptic surface over a curve B, defined over , for i = 1, …, m. We consider sections P of the fiber product A = E_1 ×_B ⋯×_B E_m defined over . We say that a section P = (P_1, P_2, …, P_m) is special if
* for each i = 1, …, m, either P_i is torsion on E_i or ĥ_E_i(P_i)≠0; and
* for any pair i,j ∈{1, …, m} such that neither P_i nor P_j is torsion, there exist an isogeny ϕ: E_i→ E_j and nonzero group endomorphisms a, b of E_j so that a∘ϕ(P_i) = b(P_j).
If a family of abelian surfaces A → B is isogenous to a fiber product (after performing a base change B' → B if needed), we say that a section of A is special if it is special on the fiber product.
It is well known that a special section will always pass through infinitely many torsion points in the fibers A_t = E_1,t×⋯× E_m,t. That is, there are infinitely many t ∈ B(ℚ̄) for which
ĥ_E_1,t(P_1(t)) = ⋯ = ĥ_E_m,t(P_m(t)) = 0.
For a proof see <cit.> or, for dynamical proofs, see <cit.>.
The converse statement is also true, but it is much more difficult: Masser and Zannier proved that if ĥ_E_1,t(P_1(t)) = ⋯ = ĥ_E_m,t(P_m(t)) = 0 for infinitely many t∈ B(ℚ̄), then the section P must be special <cit.>. We extend these results of Masser-Zannier from points of height 0 to points of small height:
Let B be a quasiprojective smooth algebraic curve defined over ℚ̄. Suppose A → B is a family of abelian varieties of relative dimension m ≥ 2 defined over ℚ̄ which is isogenous to a fibered product of m≥ 2 elliptic surfaces. Let ℒ be a line bundle on A which restricts to an ample and symmetric line bundle on each fiber A_t, and let ĥ_t be the induced Néron-Tate canonical height on A_t, for each t∈ B(ℚ̄). For each non-special section P: B → A defined over ℚ̄, there is a constant c = c(ℒ, P) > 0 so that
{t ∈ B(ℚ̄): ĥ_t(P_t) < c}
is finite.
If A→ B is isotrivial, then Theorem <ref> is a special case of the Bogomolov Conjecture, proved by Ullmo and Zhang <cit.>.
A key ingredient in their proofs is the equidistribution theorem of Szpiro, Ullmo, and Zhang <cit.>. In his 1998 ICM lecture notes <cit.>, Zhang presented a conjecture about geometrically simple families of abelian varieties, which stated, in its most basic form:
Let B be a quasiprojective smooth algebraic curve defined over ℚ̄. Suppose A → B is a non-isotrivial family of abelian varieties with fiber dimension > 1, defined over ℚ̄, with simple generic fiber. Let ℒ be a line bundle on A which restricts to an ample and symmetric line bundle on each fiber A_t, and let ĥ_t be the induced Néron-Tate canonical height on A_t, for each t∈ B(ℚ̄). For each non-torsion section P: B → A defined over ℚ̄, there is a constant c = c(ℒ, P) > 0 so that
{t ∈ B(ℚ̄): ĥ_t(P_t) < c}
is finite.
When the dimension of the fibers A_t is equal to 2, the finiteness of {t ∈ B(ℚ̄): ĥ_t(P_t) = 0} for sections as in Conjecture <ref> was established recently by Masser and Zannier in <cit.>. It is well known that the conclusion of Conjecture <ref> can fail to hold if A is not simple and certainly fails if it is a family of elliptic curves, as mentioned above. However, the results of Masser and Zannier in their earlier work <cit.> suggested a formulation of Zhang's conjecture for the non-simple case when A splits as a product of elliptic curves; this is what we proved in our Theorem <ref>.
Theorem <ref>, Corollary <ref>, and Theorem <ref> were obtained in the special case of the Legendre family E_t = {y^2 = x(x-1)(x-t)} over B= ℙ^1 and the abelian variety A_t = E_t× E_t, for sections P with x-coordinates in ℚ(t), in <cit.>, using methods from complex dynamical systems, without appealing to Silverman and Tate's results on the height function. Moreover, restricting further to sections P with constant x-coordinate (in ℙ^1(ℚ)), Theorem <ref> was obtained without relying on the theorems of Masser and Zannier, and gave an alternate proof of their result. This includes the special case treated by Masser and Zannier in their article <cit.>. For sections with constant x-coordinate, the hypothesis on P (that ĥ_E(P) ≠0) is equivalent to asking that x(P) ≠ 0,1,∞ <cit.>.
Comments and acknowledgements.
This project was motivated, in part, by experiments to visualize Silverman's results on the variation of canonical height <cit.> in terms of the measures μ_P,v at archimedean places, and to examine their dependence on P. In particular, the measure detects the failure of the local height function λ̂_E_t,v(P_t) to be harmonic; compare the comments on non-analyticity preceding Theorem I.0.3 of <cit.>. The images appearing in Section <ref> were first presented at the conference in honor of Silverman's birthday, August 2015.
We thank Charles Favre, Dragos Ghioca, Robert Rumely, Joseph Silverman, and Amaury Thuillier for helpful suggestions. Our research was supported by the National Science Foundation and the Simons Foundation.
§ SILVERMAN'S WORK
§.§ Preliminaries
Let ℱ be a product formula field of characteristic 0, so there exists a family M_ℱ of non-trivial absolute values on ℱ and a collection of positive integers n_v for v ∈ M_ℱ so that
∏_v ∈ M_ℱ |x|_v^n_v = 1
for all x∈ℱ^*. Let E/ℱ be an elliptic curve with origin O, expressed in Weierstrass form as
E = { y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6}
with discriminant Δ. Denote by
ĥ_E : E(ℱ̄) → [0,∞)
the Néron-Tate canonical height function; it can be defined by
ĥ_E(P) = 1/2lim_n→∞h(x([n]P))/n^2
where h is the naive Weil height on ^1 and x: E →^1 is the degree 2 projection to the x-coordinate.
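To make this limit concrete, here is a minimal numerical sketch in Python; the curve y^2 = x^3 - 7x + 3 and the rational point P = (-2,3) on it are our own illustrative choices, not data from the text. It approximates ĥ_E(P) by iterating the x-coordinate duplication map in exact rational arithmetic.

```python
from fractions import Fraction
from math import log

def x_double(x, a, b):
    # x-coordinate duplication map for y^2 = x^3 + a*x + b
    return (x**4 - 2*a*x**2 - 8*b*x + a**2) / (4*(x**3 + a*x + b))

def naive_height(x):
    # Weil height of a rational number: log max(|numerator|, |denominator|)
    return log(max(abs(x.numerator), abs(x.denominator), 1))

def canonical_height(x0, a, b, steps=5):
    # h_hat(P) = (1/2) lim h(x(2^n P)) / 4^n, truncated after `steps` doublings
    x = Fraction(x0)
    for _ in range(steps):
        x = x_double(x, a, b)
    return 0.5 * naive_height(x) / 4**steps

# P = (-2, 3) lies on y^2 = x^3 - 7x + 3 since (-2)^3 - 7*(-2) + 3 = 9 = 3^2.
print(canonical_height(-2, Fraction(-7), Fraction(3)))
```

The digits of the iterates grow like 4^n, so only a handful of doublings are feasible, but the truncated values stabilize geometrically fast.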
For each v∈ M_ℱ, we let ℂ_v denote a minimal, algebraically closed field containing ℱ which is complete with respect to |·|_v. For each v, we fix an embedding of ℱ̄ into ℂ_v. The canonical height has a decomposition into local heights, as
ĥ_E(P) = 1/|Gal(ℱ̄/ℱ)· P| ∑_Q ∈ Gal(ℱ̄/ℱ)· P ∑_v∈ M_ℱ n_v λ̂_E,v(Q)
for P∈ E(ℱ̄)∖{O}, with the local heights λ̂_E,v characterized by three properties <cit.>:
* λ̂_E,v is continuous on E(ℂ_v)∖{O} and bounded on the complement of any v-adic neighborhood of O;
* the limit of λ̂_E,v(P) - 1/2 log|x(P)|_v exists as P → O in E(ℂ_v); and
* for all P = (x,y) ∈ E(ℂ_v) with [2]P ≠ O,
λ̂([2]P) = 4 λ̂(P) - log|2y+a_1x+a_3|_v + 1/4log|Δ|_v.
§.§ Variation of canonical height: the set up
Now let K be a number field and E → B an elliptic surface defined over K with zero section O: B → E. Let P:B→ E be a non-zero section defined over K, and assume that
ĥ_E(P) ≠0
when viewing P as a point on the elliptic curve E defined over k = K(B). For each t∈ B(K̄) such that the fiber E_t is non-singular, we obtain a point P_t∈ E_t(K̄). We will investigate the function
t ↦ĥ_E_t(P_t)
which is well defined at all but finitely many t ∈ B(K̄). Furthermore, via the embedding of K̄ into ℂ_v for each place v∈ M_K, we may view E→ B as defined over ℂ_v and consider the Néron local heights λ̂_E_t, v(P_t) on the non-singular fibers E_t as functions of t ∈ B(ℂ_v).
Let D_E(P) be the ℚ-divisor on B defined by
D_E(P) = ∑_γ∈ B(ℚ̄) λ̂_E,ord_γ(P) · (γ).
Here, λ̂_E,ord_γ(P) is the local canonical height of the point P on the elliptic curve E over k = K(B) at the place ord_γ, for each γ∈ B(ℚ̄). The degree of D_E(P) is equal to ĥ_E(P). It follows from the definitions that the support of D_E(P) is contained in the finite set
{t ∈ B(ℚ̄): E_t is singular} ∪ {t ∈ B(ℚ̄): P_t = O_t}.
By enlarging K, we may assume that the support of D_E(P) is contained in B(K).
That D_E(P) is a ℚ-divisor is standard, following from the fact that the numbers λ̂_E,ord_γ(P) can be viewed as arithmetic intersection numbers on a Néron local model. See <cit.> for a proof that ĥ_E(P) ∈ ℚ; see <cit.> and <cit.> for proofs that each local function λ̂_E,ord_γ also takes values in ℚ; see <cit.> for a dynamical proof.
§.§ Variation of canonical height: quasi triviality
Let h_D_E(P) be an analytic Weil height on B(ℚ̄) as defined in <cit.>. That is, we let g be the genus of B, and for each point γ∈ B(K), we choose an element ξ_γ of K(B) which has a pole of order 2g+1 at γ and no other poles. For each non-archimedean place v of K, set
λ_D_E(P), v(t) = 1/(2g+1) ∑_γ∈ B(K) λ̂_E,ord_γ(P) log^+|ξ_γ(t)|_v
for all t ∈ B(ℂ_v) ∖ supp D_E(P). For archimedean places v, the local height is defined by
λ_D_E(P), v(t) = 1/(2(2g+1)) ∑_γ∈ B(K) λ̂_E,ord_γ(P) log(1 + |ξ_γ(t)|_v^2).
We set
h_D_E(P)(t) = 1/|Gal(ℚ̄/K)· t| ∑_s ∈ Gal(ℚ̄/K)· t ∑_v∈ M_K λ_D_E(P), v(s)
for all t ∈ B(ℚ̄). For fixed choices of ξ_γ, we will call the associated height h_D_E(P) our “reference height" for the divisor D_E(P). Silverman proved:
<cit.>
For any choice of reference height h_D_E(P), there is a finite set S of places so that
λ̂_E_t, v(P_t) = λ_D_E(P), v(t)
for all t ∈ B(ℚ̄) ∖ supp D_E(P) and all v ∈ M_K ∖ S.
§.§ Variation of canonical height: continuity
Fix a point t_0∈ B() and a uniformizer u ∈(B) for t_0, and consider the function
V_P, t_0, v(t) := λ̂_E_t,v(P_t) + λ̂_E,ord_t_0(P) log|u(t)|_v,
which is not a priori defined at t_0. Theorem <ref> implies that
V_P,t_0,v≡ 0
for all but finitely many places v in a v-adic neighborhood of each t_0.
Silverman also proved the following:
<cit.>
Fix t_0 ∈ B() and a uniformizer u at t_0. For all v ∈ M_K, there exists a neighborhood U ⊂ B(_v) containing t_0 so that the function V_P, t_0,v of (<ref>) extends to a continuous function on U.
§ A DYNAMICAL PERSPECTIVE
Recall that the Néron-Tate height ĥ_E and its local counterparts λ̂_E,v can be defined dynamically. Letting E be an elliptic curve defined over a number field K, the multiplication-by-2 endomorphism ϕ on E descends to a rational function of degree 4 on ^1, via the standard quotient identifying a point P with its additive inverse:
    E  --ϕ-->  E
    |π         |π
    v          v
   ℙ^1 --f_ϕ--> ℙ^1
An elementary, but key, observation is that a point is torsion on E if and only if its quotient in ℙ^1 is preperiodic for f_ϕ. The height ĥ_E on E(ℚ̄) satisfies
ĥ_E (P) = lim_n→∞1/4^n h(f_ϕ^n(π P))
where h is the standard logarithmic Weil height on ^1(). Now let E→ B be an elliptic surface defined over a number field K, and let P: B→ E be a section, also defined over K. In this section, we use this perspective to give a proof of subharmonicity of the local height functions t↦λ̂_E_t,v(P_t) and the extensions V_P, t_0, v of (<ref>). We will present this fact as an immediate consequence of now-standard complex-dynamical convergence arguments, at least when the fiber E_t is smooth and the local height λ̂_E_t,v(P_t) is finite. Near singular fibers, we utilize the maximum principle and standard results on removable singularities for subharmonic functions. The same reasoning applies in both archimedean and non-archimedean settings.
In <ref> we provide the background to justify the explicit description of the limiting distribution μ_P,v at the archimedean places v of K, as mentioned in Remark <ref>.
§.§ Canonical height and escape rates
As in <ref>, we let E be an elliptic curve in Weierstrass form, defined over a product-formula field ℱ of characteristic 0. We define a rational function f = ϕ/ψ on ℙ^1 by the formula
f(x(P))=x([2]P)
for all P ∈ E. Here x(P) is the x-coordinate for a point P∈ E; this function x plays the role of π in (<ref>). In coordinates, we have ϕ(x)=x^4-b_4x^2-2b_6x-b_8 and ψ(x)=4x^3+b_2x^2+2b_4x+b_6=(2y+a_1x+a_3)^2 for P = (x,y).
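As a quick symbolic sanity check (a sympy sketch we add for illustration; it is not part of the original text), one can verify the identity ψ(x) = (2y + a_1 x + a_3)^2 modulo the Weierstrass equation:

```python
import sympy as sp

a1, a2, a3, a4, a6, x, y = sp.symbols('a1 a2 a3 a4 a6 x y')
b2 = a1**2 + 4*a2
b4 = 2*a4 + a1*a3
b6 = a3**2 + 4*a6

# curve = 0 is the Weierstrass equation of E
curve = y**2 + a1*x*y + a3*y - (x**3 + a2*x**2 + a4*x + a6)
psi = 4*x**3 + b2*x**2 + 2*b4*x + b6

# (2y + a1*x + a3)^2 - psi equals 4*curve, hence vanishes on E:
print(sp.expand((2*y + a1*x + a3)**2 - psi - 4*curve))  # prints 0
```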
By a lift of f, we mean any homogeneous polynomial map F on 𝔸^2, defined over ℱ, so that τ∘ F = f ∘τ, where τ: 𝔸^2∖{(0,0)}→ℙ^1 is the tautological projection. A lift of a point x ∈ℙ^1 is a choice of X ∈𝔸^2∖{(0,0)} so that τ(X) = x.
The standard lift of f will be the map F:𝔸^2→𝔸^2 defined by
F(z,w) = (w^4ϕ(z/w), w^4ψ(z/w)).
For each v ∈ M_ℱ, the v-adic escape rate is defined by
𝒢_F,v(z,w) = lim_n→∞ log‖F^n(z,w)‖_v / 4^n
where
‖(z,w)‖_v = max{|z|_v, |w|_v}.
Any other lift of f is of the form cF for some c ∈ℱ^*; observe that
𝒢_cF,v = 𝒢_F,v + 1/3 log|c|_v.
Note that
𝒢_F,v(α x, α y) = 𝒢_F,v(x,y) + log|α|_v
for any choice of lift F. Furthermore, 𝒢_F,v is continuous on (ℂ_v)^2∖{(0,0)}, as proved in the archimedean case by <cit.>. For non-archimedean absolute values v, 𝒢_F,v extends continuously to the product of Berkovich affine lines 𝔸^1,an_v ×𝔸^1,an_v ∖{(0,0)} <cit.>.
For the standard lift F of f, and for each place v of , the local canonical height function satisfies
λ̂_E,v(P) = 1/2 𝒢_F,v(x,y) - 1/2 log|y|_v - 1/12 log|Δ|_v
where x(P) = (x:y).
The proof is immediate from the properties of 𝒢_F,v by checking the three characterizing conditions for λ̂_E,v.
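At an archimedean place, the escape rate (and hence the local height in the proposition above) is easy to approximate. The following Python sketch, which we add for illustration, renormalizes at each step to avoid overflow and then evaluates the proposition's formula for a short Weierstrass curve y^2 = x^3 + ax + b with x(P) = (x : 1); the curve and point are the same illustrative choices as before, not data from the text.

```python
import math

def escape_rate(z, w, F, iters=40):
    # G_{F,v}(z,w) = lim log||F^n(z,w)|| / 4^n at an archimedean place,
    # accumulated with renormalization so the iterates stay bounded
    norm = max(abs(z), abs(w))
    g = math.log(norm)
    z, w = z / norm, w / norm
    for n in range(iters):
        z, w = F(z, w)
        norm = max(abs(z), abs(w))
        g += math.log(norm) / 4**(n + 1)
        z, w = z / norm, w / norm
    return g

def local_height_arch(x, a, b, iters=40):
    # lambda_hat_{E,v}(P) = (1/2) G_F(x, 1) - (1/12) log|Delta| for x(P) = (x : 1),
    # using the standard lift of the duplication map on y^2 = x^3 + a*x + b
    F = lambda z, w: (z**4 - 2*a*z**2*w**2 - 8*b*z*w**3 + a**2*w**4,
                      4*(z**3*w + a*z*w**3 + b*w**4))
    delta = -16 * (4*a**3 + 27*b**2)
    return 0.5 * escape_rate(complex(x), 1.0, F, iters) - math.log(abs(delta)) / 12

print(local_height_arch(-2, -7, 3))  # same illustrative point as before
```

The renormalization uses the homogeneity 𝒢(αv) = 𝒢(v) + log|α|, so the truncation error decays like 4^(-iters).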
§.§ Variation of canonical height: subharmonicity
Now let K be a number field and E → B an elliptic surface defined over K with zero section O: B → E. Let k = K(B); viewing E as an elliptic curve defined over k, we also fix a point P ∈ E(k). Recall the function V_P, t_0, v(t) defined in (<ref>).
For every t_0 ∈ B(ℚ̄) and uniformizer u in k at t_0, the function
V_P, t_0, v(t) := λ̂_E_t,v(P_t) + λ̂_E,ord_t_0(P) log|u(t)|_v,
extends to a continuous and subharmonic function on a neighborhood of t_0 in the Berkovich analytification B_v^an.
The continuity was already established in Theorem <ref>, though it was not explicitly stated for the Berkovich space. The argument below takes care of that. We begin with a lemma.
Fix α∈ k^* and t_0 ∈ B(ℚ̄). Let u ∈ k be a uniformizer at t_0. For each place v of K, the function
t ↦ log|α_t|_v - (ord_t_0 α) log|u(t)|_v
is harmonic in a neighborhood of t_0 in the Berkovich analytification B_v^an.
This is Silverman's <cit.> plus a removable singularities lemma for harmonic functions. See also <cit.> for the extension of a harmonic function to a disk in the Berkovich space B_v^an.
Fix P ∈ E(k). Let F and X be lifts of f and x(P) to k^2, respectively. Iterating F, we set
(A_n, B_n) := F^n(X) ∈ k^2
and observe that
𝒢_F,ord_t_0(X) = - lim_n→∞ min{ord_t_0 A_n, ord_t_0 B_n}/4^n
from the definition of the escape rate. We let F_t and X_t denote the specializations of F and X at a point t ∈ B(ℚ̄); they are well defined for all but finitely many t. Observe that if F is the standard lift for E then so is F_t for all t.
Fix P ∈ E(k), t_0 ∈ B(ℚ̄), and v∈ M_K. For any choice of lifts F of f and X of x(P), the function
G_P(t;v) := 𝒢_F_t, v(X_t) + 𝒢_F,ord_t_0(X) log|u(t)|_v
extends to a continuous and subharmonic function in a neighborhood of t_0 in B_v^an.
First observe that the conclusion does not depend on the choices of F and X. Indeed,
𝒢_c_t F_t, v(α_t X_t) + 𝒢_cF,ord_t_0(α X) log|u(t)|_v = 𝒢_F_t, v(X_t) + 𝒢_F,ord_t_0(X) log|u(t)|_v
+ 1/3 (log|c_t|_v - (ord_t_0 c) log|u(t)|_v)
+ log|α_t|_v - (ord_t_0 α) log|u(t)|_v
for any c, α∈ k^*. So by Lemma <ref> the function G_P(t; v) is continuous and subharmonic for one choice if and only if it is continuous and subharmonic for all choices.
Let F be the standard lift of f. Suppose that P=O. Since F(1,0) = (1,0), we compute that
G_O(t;v) = 𝒢_F_t, v(1,0) + 𝒢_F,ord_t_0(1,0) log|u(t)|_v ≡ 0.
Now suppose that P ≠ O. Fix t_0∈ B(K) and a local uniformizer u at t_0. Choose a lift F of f so that the coefficients of F have no poles at t_0, with F_t_0 ≠ (0,0). Choose a lift X of x(P) so that X_t is well defined for all t near t_0 and X_t_0 ≠ (0,0). As above, we write
F^n(X) = (A_n, B_n)
and put
a_n = min{ord_t_0 A_n, ord_t_0 B_n}
so that a_n≥ 0 for all n and a_0 = 0. Set
F_n(t) = F_t^n(X_t)/u(t)^a_n.
For each place v, we set
h_n,v(t) = log‖F_n(t)‖_v / 4^n.
By construction, the limit of h_n,v (for t near t_0 with t≠t_0) is exactly the function G_P for these choices. In fact, for t in a small neighborhood of t_0, but with t≠ t_0, the function f_t on ℙ^1 is a well-defined rational function of degree 4; so the specialization of the homogeneous polynomial map F_t satisfies F_t^-1{(0,0)} = {(0,0)}. Furthermore, as the coefficients of F_t depend analytically on t, the functions h_n,v converge locally uniformly to the function G_P away from t=t_0. This can be seen with a standard telescoping sum argument, used often in complex dynamics, as in <cit.>. In particular, G_P is continuous on a punctured neighborhood of t_0.
At the archimedean places v, and for each n, the function h_n,v is clearly continuous and subharmonic in a neighborhood of t_0. At non-archimedean places v, this definition extends to a Berkovich disk around t_0, setting
h_n,v(t) = 1/4^nmax{log[A_n(T)/T^a_n]_t, log[B_n(T)/T^a_n]_t}
where [·]_t is the seminorm on K[[T]] associated to the point t. Each of these functions h_n,v is continuous and subharmonic for t in a Berkovich disk around t_0. (Compare <cit.> Example 8.7, Proposition 8.26(D), and equation (10.9).)
For all v, and by shrinking the radius r if necessary, the functions h_n,v are uniformly bounded from above on the (Berkovich) disk D_r.
As observed above, the functions h_n,v converge locally uniformly away from t=t_0 to the continuous function G_P(t). Choose a small radius r, and let
M_v = sup_n max_|t|_v=r h_n,v(t)
which is finite by the convergence. Because the functions are subharmonic, the Maximum Principle implies that h_n,v(t) ≤ M_v throughout the disk of radius r, for all n. For the non-archimedean places, there is also a Maximum Principle on the Berkovich disk, where the role of the circle of radius r is played by the Type II point associated to the disk of radius r (see <cit.> Proposition 8.14).
We can now complete the proof of Proposition <ref>. As each h_n,v is subharmonic, and the functions are uniformly bounded from above on the disk by Lemma <ref>, we know that the (upper-semicontinuous regularization of the) limsup of these functions is subharmonic. See <cit.> Proposition 8.26(F) for a proof in the non-archimedean case.
[Proof of Theorem <ref>]
Subharmonicity now follows from Proposition <ref>, Lemma <ref>, and Proposition <ref>. The continuity at each archimedean place is the content of Theorem <ref>. The continuity at each non-archimedean place is a combination of the continuity on the punctured Berkovich disk (as in the proof of Proposition <ref>) and the continuity on Type I (classical) points given in Theorem <ref>.
§.§ The measures on the base
Here we provide more details about the description of the measures appearing in the statement of Corollary <ref>, as discussed in Remark <ref>.
Fix an archimedean place v and any point t_0 ∈ B(ℂ). Choosing a uniformizer u at t_0, recall the definition of V_P, t_0, v from (<ref>). We define
μ_P,v := dd^c V_P,t_0, v(t)
on a neighborhood of t_0 in B_v^an; note that this is independent of the choice of u. Note that μ_P,v can be expressed as
μ_P,v = dd^c λ̂_E_t, v(P_t)
for t outside of the finitely many points in the support of the divisor D_E(P) or where the fiber E_t is singular. Note, further, that μ_P,v assigns no mass to any individual point t_0, because the potentials are bounded by Theorem <ref>. The details on the metric and the equidistribution theorem in Section <ref> will show that these are exactly the measures that arise as the distribution of the points of small height in Corollary <ref>.
It is well known that the local height function on a smooth elliptic curve is a potential for the Haar measure. That is, for fixed t we have
dd^c λ̂_E_t, v(·) = ω_t - δ_o
where ω_t is the normalized Haar measure on E_t and δ_o is a delta-mass supported at the origin of E_t; see, e.g., <cit.>. We present an alternative proof of this fact related to dynamics as part of Proposition <ref>, as a consequence of Proposition <ref>.
Let E → B be an elliptic surface and P: B → E a section, both defined over a number field K. Let S ⊂ E be the union of the finitely many singular fibers in E. For each archimedean place v of K, there is a positive, closed (1,1)-current T_v on E∖S with locally continuous potentials so that T_v|_E_t is the Haar measure on each smooth fiber, and P^* T_v is equal to the measure μ_P,v.
As T_v has continuous potentials, the restriction T_v|_E_t and the pullback P^* T_v are well defined. That is, we have T_v|_E_t = dd^c (u|_E_t) where u is a locally defined potential of T_v, and P^*T_v = dd^c (u∘ P) locally on B. The measure μ_P,v has no atoms, so it is determined by T_v along the image of P in E∖.
[Proof of Proposition <ref>]
Let us fix any small neighborhood U in the base curve B(ℂ) so that all fibers E_t are smooth for t∈ U. Let f_t be the map on ℙ^1 defined in <ref>; by shrinking U if necessary, we can find lifts F_t of f_t that are holomorphic in t ∈ U. From <cit.> (or the proof of <cit.>), we know that the escape rate
𝒢_F_t,v(z,w) = lim_n→∞ log‖F_t^n(z,w)‖_v / 4^n
is continuous and plurisubharmonic as a function of (t, z, w) ∈ U× (ℂ^2∖{(0,0)}). Therefore
dd^c 𝒢_F_t,v(z,w)
projects to a closed and positive (1,1)-current G_v on the complex surface U×ℙ^1, with locally continuous potentials. This current G_v has the property that, restricted to each fiber ℙ^1, its total mass is 1; and the measure on the fiber is the measure of maximal entropy for the rational map f_t <cit.>.
The restriction E|_U of the elliptic surface E over U maps with degree 2 to the complex surface U×ℙ^1 by the projection π of (<ref>). The current G_v can be pulled back to E as 1/2 dd^c(g ∘π) where g is a locally-defined continuous potential for G_v. Covering the base of E∖S by sets of the form U, the local definitions glue to form the closed, positive (1,1)-current T_v on E∖S.
If P: B → E is a section defined over the number field K, then P^* T_v has potential given locally by
1/2 g∘π∘ P = 1/2 𝒢_F_t, v(X_t)
for any lift X_t of π(P_t) ∈ℙ^1. Proposition <ref> yields that P^* T_v must coincide with μ_P,v.
Finally, to conclude that T_v|_E_t is equal to the normalized Haar measure ω_t, we may use the well-known dynamical fact that for each fixed t in the base, the measure ω_t projects by π to the unique measure of maximal entropy on ℙ^1 for the map f_t; see, e.g., <cit.>.
§ THE ADELIC METRIC AND EQUIDISTRIBUTION
In this section we give the proofs of Theorem <ref> and Corollary <ref>.
We first outline the proofs. Let E → B be an elliptic surface defined over a number field K with zero section O: B → E, and let P:B→ E be a section also defined over K so that ĥ_E(P) ≠0. Recall from <ref> that we introduced a ℚ-divisor
D_E(P) = ∑_γ∈ B(ℚ̄) λ̂_E,ord_γ(P) · (γ)
on B. By enlarging K, we may assume that the support of D_E(P) lies in B(K). We will define an adelic metric on the ample line bundle associated to the divisor D_E(P), inducing a height function h_ℒ̄ such that
h_ℒ̄(t) = ĥ_E_t(P_t) for all t ∈ B(ℚ̄)
and
h_ℒ̄(t) ≥ 0 for all t ∈ B(ℚ̄).
Applying Silverman's results on the variation of canonical height, Theorems <ref> and <ref>, we will deduce that the metric is continuous and adelic. From Theorem <ref>, we will conclude that the metric is also semi-positive in the sense of Zhang <cit.>. We will use Zhang's inequalities <cit.> to deduce that
h_ℒ̄(B) = 0.
Consequently, we will be able to apply the equidistribution results of Chambert-Loir, Thuillier, and Yuan <cit.> to complete our proofs.
§.§ The metric and its properties
Let m∈ℕ be such that
D = m· D_E(P)
is an integral divisor. Let ℒ_m be the associated line bundle on B. Note that deg(ℒ_m) = m ĥ_E(P) > 0, so ℒ_m is ample; by replacing m with a multiple, we may assume that ℒ_m is very ample.
Fix a place v of K. Let U be an open subset of B_v^an. Each section s ∈ℒ_m(U) is identified with a meromorphic function f on U satisfying
div(f) ≥ -D.
We set
‖s(t)‖_v = e^-m λ̂_E_t,v(P_t) |f(t)|_v if f(t) ≠ 0, ∞;   ‖s(t)‖_v = 0 if ord_t f > -m λ̂_E,ord_t(P);   and ‖s(t)‖_v = e^-m V_P,t,v(t) otherwise,
taking the locally-defined uniformizer u = f^1/ord_t f at t in the definition of V_P, t, v from (<ref>).
The metric ‖·‖ = {‖·‖_v}_v ∈ M_K on ℒ_m is continuous, semipositive, and adelic.
The continuity and semipositivity follow from Theorem <ref>. (In <cit.>, semipositivity of a continuously metrized line bundle on a curve is defined in terms of subharmonicity of potentials for the curvature form at each archimedean place, and as a uniform limit of “smooth semipositive" metrics at each non-archimedean place. In <cit.>, it is established that subharmonicity of potentials is a sufficient notion at all places, and he proves in <cit.> that this notion of semipositivity coincides with that of Zhang <cit.>. See also <cit.> where this same argument is applied in a dynamical context.) The adelic condition follows from Theorem <ref>.
§.§ The associated height function
A height function on B(ℚ̄) is defined by setting
h_P(t) := 1/m · 1/|Gal(ℚ̄/K)· t| ∑_s ∈ Gal(ℚ̄/K)· t ∑_v∈ M_K -n_v log‖ϕ(s)‖_v
where ϕ is any global section of ℒ_m which is nonvanishing along the Galois orbit of t, and ‖·‖_v is the metric of <ref>. Recall that supp D_E(P) ⊂ B(K); we may assume that our sections ϕ are defined over K, and the product formula guarantees our height is independent of the choice of ϕ.
Our next goal is to prove the following two important facts about this height function h_P.
The height function h_P satisfies
h_P(t)= ĥ_E_t(P_t)
for all t∈ B(ℚ̄) such that the fiber E_t is smooth.
The height function h_P satisfies
h_P(t) ≥ 0
for all t ∈ B(ℚ̄).
[Proof of Proposition <ref>]
First fix t∈ B(ℚ̄) ∖ supp D_E(P) with smooth fiber E_t. Choose a section ϕ defined over K that does not vanish along the Galois orbit of t, and let f be the associated meromorphic function on B. Then f takes finite and nonzero values along the Galois orbit of t. We have,
h_P(t) = 1/m · 1/|Gal(ℚ̄/K)· t| ∑_s ∈ Gal(ℚ̄/K)· t ∑_v ∈ M_K n_v (m λ̂_E_s, v(P_s) - log|f(s)|_v)
= 1/m · 1/|Gal(ℚ̄/K)· t| ∑_s ∈ Gal(ℚ̄/K)· t ∑_v ∈ M_K m n_v λ̂_E_s, v(P_s)
= ĥ_E_t(P_t).
where the second equality follows from the product formula.
For t_0 ∈ supp D_E(P) such that E_t_0 is smooth, it is necessarily the case that P_t_0 = O_t_0, and therefore ĥ_E_t_0(P_t_0) = 0. To compute h_P(t_0), observe that t_0 ∈ B(K) so its Galois orbit is trivial; fixing a uniformizer u∈ K(B) at t_0, we have
h_P(t_0) = ∑_v∈ M_K n_v V_P, t_0, v(t_0)
where V_P,t_0, v is the function of (<ref>) associated to the uniformizer u.
We can compute h_P(t_0) using the dynamical interpretation of the local heights, described in Section <ref>. Fix a Weierstrass equation for E in a neighborhood of t_0 and write P=(x_P,y_P). The assumption that P_t_0=O_t_0 is equivalent to ord_t_0 x_P<0. After possibly shrinking U, write x_P = u^ord_t_0(x_P) A_0 for the chosen uniformizer u and a function A_0∈ K(B) that does not vanish in U. We choose a lift X of x_P on U defined as X=(A_0,B_0), where B_0 := u^-ord_t_0(x_P). Notice that A_0 and B_0 are regular at t_0. Let F be the standard lift in these coordinates, defined in (<ref>); it satisfies F_t_0(1,0) = (1,0), and we have 𝒢_F,ord_t_0(A_0,B_0)=0. Since ord_t_0 Δ_E=0, Proposition <ref> implies that
V_P,t_0,v(t) = 1/2 𝒢_F_t,v(A_0(t),B_0(t)) - 1/12 log|Δ_E(t)|_v
for all t ∈ U. Therefore,
V_P, t_0, v(t_0) = 1/2 𝒢_F_t_0,v(A_0(t_0),0) - 1/12 log|Δ_E(t_0)|_v
= 1/2 lim_n→∞ 1/4^n log‖F^n_t_0(A_0(t_0),0)‖_v - 1/12 log|Δ_E(t_0)|_v
= 1/2 lim_n→∞ 1/4^n log|A_0(t_0)^4^n|_v - 1/12 log|Δ_E(t_0)|_v
= 1/2 log|A_0(t_0)|_v - 1/12 log|Δ_E(t_0)|_v.
The product formula now yields that h_P(t_0)=0, as claimed.
To prove Proposition <ref>, we first reduce to the case that the elliptic surface E→ B has semi-stable reduction; that is, all of its fibers are either smooth or have multiplicative reduction. The next lemma describes how the height associated with the divisor D_E(P) behaves under base extensions of the elliptic surface E→ B. It is adapted from <cit.>. We include it here for completeness.
Let μ: B'→ B be a finite map of smooth projective curves, let E'→ B' be a minimal model for E×_B B', and let P':B'→ E' be the extension of the section P. For each t_0 ∈ B(ℚ̄) and t_0'∈μ^-1({t_0})⊂ B'(ℚ̄), there is a neighborhood U of t_0' in B'(ℂ_v) and a regular non-vanishing function f on U such that
V_P,t_0,v(μ(t'))-V_P',t_0',v(t')=log|f(t')|_v
on U∖{t'_0}.
In particular,
V_P,t_0,v(t_0)-V_P',t_0',v(t'_0)=log|f(t'_0)|_v.
Let u be a uniformizer at t_0, u' a uniformizer at t_0', and n = ord_t_0'(μ^*u). Since local heights are invariant under base extension we have
λ̂_E',ord_t_0'(P') = n λ̂_E,ord_t_0(P).
Notice that for all t' in a punctured neighborhood of t'_0 the fibers E'_t' are smooth. Hence the map E'→ E gives an isomorphism between the fibers E'_t'→ E_μ(t'). Under this isomorphism P'_t'∈ E'_t' is mapped to P_μ(t')∈ E_μ(t'). Invoking now the uniqueness of the Néron local heights, we have
λ̂_E_μ(t'),v(P_μ(t'))=λ̂_E'_t',v(P'_t').
Combining (<ref>) and (<ref>) we get that for t' in a punctured neighborhood of t_0',
V_P,t_0,v(μ(t')) = V_P',t_0',v(t') + λ̂_E,ord_t_0(P) log|u(μ(t'))/u'^n(t')|_v.
The definition of n yields that the function f(t') = (u(μ(t'))/u'^n(t'))^λ̂_E,ord_t_0(P) is regular and non-vanishing at t_0'. The first part of the lemma follows.
Finally, Theorem <ref> allows us to conclude that
V_P,t_0,v(μ(t'_0))-V_P',t_0',v(t'_0)=log|f(t'_0)|_v
at the point t'_0, as claimed.
The following lemma will allow us to prove Proposition <ref> in the case that a fiber has multiplicative reduction. The proof is lengthy, but it is merely a collection of computations using the explicit formulas for the local height functions, as in <cit.>.
Let E→ B be an elliptic surface and let P:B→ E be a non-zero section defined over K. Then there exists a finite extension L of the number field K so that, for each t_0∈ B() such that E_t_0 has multiplicative reduction, there exists an x(t_0) ∈ L^* so that
V_P,t_0,v(t_0) = log|x(t_0)|_v
at all places v of L.
We let
E:y^2=x^3+ax+b,
be a minimal Weierstrass equation for E over an affine subset W⊂ B defined over K with t_0∈ W. Here a,b∈ K(B) are regular functions at t_0.
Using this Weierstrass equation we write
P=(x_P,y_P),
where x_P,y_P∈ K(B).
Since E→ B has multiplicative reduction over t_0∈ B(K), we have
N := ord_t_0 Δ_E ≥ 1 and min{ord_t_0 a, ord_t_0 b} = 0.
Let v be a place of K (archimedean or non-archimedean). We denote by j_E the j-invariant of E→ W, given by
j_E(t)=1728(4a(t))^3/Δ_E(t).
Notice that equation (<ref>) yields that j_E has a pole at t_0. Hence, we can find a v-adic open neighborhood U of t_0 and an analytic map
ψ: U → {q ∈ ℂ_v : |q|_v < 1},
such that the following holds: If j is the modular j-invariant <cit.>, then we have
j_E(t) = j(ψ(t)) and ord_t_0 ψ = N.
The function ψ(t) is given as
ψ(t)=1/j_E(t)+744/j^2_E(t)+750420/j^3_E(t)+…∈ℤ[[(j_E(t))^-1 ]].
In the following, we choose a uniformizer u ∈ K(B) at t_0, and we identify ψ with its expansion ψ(t) ∈ ℂ_v[[u]] and write
ψ(t) = β u(t)^N + u(t)^N+1 f(t), for t∈ U∖{t_0}.
Equation (<ref>) yields that β∈ K∖{0} and f(t)∈ K[[u]].
Following the proof of <cit.> and after possibly shrinking U, we have isomorphisms
E_t(ℂ_v) ≅ ℂ_v^*/ψ(t)^ℤ ≅ C_ψ(t): y^2 = 4x^3 - g_2(ψ(t))x - g_3(ψ(t)),
for t∈ U∖{t_0}.
Under these isomorphisms, we have
P_t ↦ w(t) ↦ (℘(w(t),ψ(t)),℘'(w(t),ψ(t))).
Here g_2,g_3 are the modular invariants, given by their usual q-series
g_2(q) = 1/12 (1 + 240∑_n=1^∞ n^3 q^n/(1-q^n)),   g_3(q) = 1/216 (-1 + 504∑_n=1^∞ n^5 q^n/(1-q^n))
and ℘ is the Weierstrass ℘-function given by
℘(w,q) = 1/12 + ∑_n∈ℤ q^n w/(1-q^n w)^2 - 2∑_n=1^∞ n q^n/(1-q^n),   ℘'(w,q) = ∑_n∈ℤ q^n w(1+q^n w)/(1-q^n w)^3.
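As an aside, these q-series are easy to manipulate symbolically. The following sympy sketch (added by us; the truncation order is an illustrative choice) checks, to the stated order, that the modular discriminant Δ(q) = g_2(q)^3 - 27g_3(q)^2 appearing later in this proof has the familiar product expansion q∏_n≥1(1-q^n)^24:

```python
import sympy as sp

q = sp.symbols('q')
M = 10  # work modulo q^M (illustrative truncation)

def trunc(expr):
    return sp.expand(sp.series(expr, q, 0, M).removeO())

g2 = sp.Rational(1, 12) * trunc(1 + 240*sum(n**3*q**n/(1 - q**n) for n in range(1, M)))
g3 = sp.Rational(1, 216) * trunc(-1 + 504*sum(n**5*q**n/(1 - q**n) for n in range(1, M)))

delta = trunc(g2**3 - 27*g3**2)

eta24 = q
for n in range(1, M):
    eta24 = trunc(eta24 * (1 - q**n)**24)

print(sp.expand(delta - eta24))  # vanishes modulo q^M
```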
In view of <cit.>, after possibly replacing P by -P, we may assume that w: U → ℂ_v is an analytic map satisfying
0 ≤ ord_t_0 w ≤ 1/2 ord_t_0 ψ.
In the following we identify w with its series in ℂ_v[[u]] and write
w(t) = α u^m(t) + u^m+1(t) g(t),
where α ∈ ℂ_v and g(t) ∈ ℂ_v[[u]].
We claim that w(t)∈K[[u]]. To see this, notice that from <cit.> we have that for t∈ U
(℘(w(t),ψ(t)), ℘'(w(t),ψ(t))) = (ν^-2(t)x_P(t), 2ν^-3(t)y_P(t)),
where
ν(t)^12=Δ_E(t)/Δ(ψ(t)).
In the equation above Δ denotes the modular discriminant given by
Δ(q) = g_2(q)^3 - 27 g_3(q)^2.
Since the functions ψ,Δ_E and Δ are defined over K, we have that Y(t):=2ν^-3(t)y_P(t) is also defined over K. Since Y(t)=℘'(w(t),ψ(t))∈ K[[u]] and ψ(t)∈ K[[u]] we get that w(t)∈ K[[u]].
Therefore, there are non-zero constants α, β, γ ∈ K∖{0}, non-negative integers k, m ∈ ℕ and functions f(t), g(t), h(t) ∈ K[[u]] such that for all t∈ U
ψ(t) = β u^N(t) + f(t)u^N+1(t),   w(t) = α u^m(t) + g(t)u^m+1(t),   1-w(t) = γ u^k(t) + h(t)u^k+1(t).
Next, we aim to express x(t_0) (as in the statement of the lemma) in terms of α, β, γ ∈ K.
Using the isomorphisms in <ref>, the uniqueness of the local canonical heights and the explicit formulas for the local canonical heights <cit.>, we get
λ̂_E_t,v(P_t)=λ̂(w(t),ψ(t)) =-1/2B_2(log|w(t)|_v/log|ψ(t)|_v)log|ψ(t)|_v-log|1-w(t)|_v
-∑_n≥ 1log|(1-ψ(t)^nw(t))(1-ψ(t)^nw(t)^-1)|_v,
where B_2(s)=s^2-s+1/6 is the second Bernoulli polynomial.
Since ord_t_0 ψ = N ≥ 1 and using (<ref>), we get
lim_t→ t_0 ∑_n≥ 1 log|(1-ψ(t)^n w(t))(1-ψ(t)^n w(t)^-1)|_v = 0.
In what follows, for F(t) ∈ ℂ_v[[u]] we write
F(t) := o_v(1), if F(t) → 0 as t → t_0 in the v-adic topology.
In view of <cit.>, we have
B_2(log|w(t)|_v/log|ψ(t)|_v) log|ψ(t)|_v = log^2|w(t)|_v/log|ψ(t)|_v - log|w(t)|_v + 1/6 log|ψ(t)|_v
= m^2/N log|u(t)|_v + m/N^2 log(|α|_v^2N/|β|_v^m) - log|α|_v
- m log|u(t)|_v + log|β|_v/6 + N/6 log|u(t)|_v + o_v(1)
Using equations (<ref>) and (<ref>), equation (<ref>) yields
λ̂_E_t,v(P_t) + 1/2(m^2/N - m + N/6 + 2k) log|u(t)|_v = -1/2(m/N^2 log(|α|_v^2N/|β|_v^m) - log|α|_v + log|β|_v/6)
- log|γ|_v + o_v(1).
Finally, notice that <cit.> implies
λ̂_E,ord_t_0(P) = ord_t_0(1-w) + 1/2 B_2(ord_t_0 w/ord_t_0 ψ) ord_t_0 ψ = 1/2(m^2/N - m + N/6 + 2k).
Therefore
V_P,t_0,v(t_0) = lim_t→ t_0 V_P,t_0,v(t) = -1/2(m/N^2 log(|α|_v^2N/|β|_v^m) - log|α|_v + log|β|_v/6) - log|γ|_v
= log|x(t_0)|_v,
where x(t_0) = α^(1/2 - m/N) β^(m^2/2N^2 - 1/12)/γ belongs to a finite extension of K, denoted by L.
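As a quick sanity check of the Bernoulli-polynomial bookkeeping above (a sympy sketch we add; it is not part of the original argument), one can confirm that k + (1/2)B_2(m/N)N equals (1/2)(m^2/N - m + N/6 + 2k):

```python
import sympy as sp

m, N, k = sp.symbols('m N k', positive=True)
B2 = lambda s: s**2 - s + sp.Rational(1, 6)   # second Bernoulli polynomial

lhs = k + sp.Rational(1, 2) * B2(m / N) * N
rhs = sp.Rational(1, 2) * (m**2 / N - m + N / 6 + 2 * k)
print(sp.simplify(lhs - rhs))  # prints 0
```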
[Proof of Proposition <ref>]
By <cit.> there is a finite map of smooth projective curves B'→ B such that if E'→ B' is a minimal model for E×_BB', then E' has semi-stable reduction over the singular fibers of E→ B. Moreover, we may choose B' so that everything is defined over K. Thus, by Lemma <ref> and using the product formula, we may assume that the singular fibers of our elliptic surface E → B have multiplicative reduction.
For all t ∈ B(ℚ̄) for which E_t is smooth, we know from Proposition <ref> that h_P(t) = ĥ_E_t(P_t). The canonical height is always non-negative, so we may conclude that h_P(t)≥ 0 for all such t.
Assume now that t_0 ∈ B(ℚ̄) has a fiber with multiplicative reduction. Enlarging the number field K if necessary we may assume that t_0∈ B(K) and that its corresponding x(t_0) defined in the statement of Lemma <ref> is in K^*. Then, on using the product formula, Lemma <ref> implies that h_P(t_0) = 0. This completes the proof.
§.§ Proofs of Theorem <ref> and Corollary <ref>
[Proof of Theorem <ref>]
Let ℒ_P be the line bundle on B induced from the divisor D_E(P). From Theorem <ref>, we know that its m-th tensor power can be equipped with a continuous, adelic, semipositive metric, so that the corresponding height function is (a multiple of) the canonical height ĥ_E_t(P_t) on the smooth fibers. Thus, by pulling back the metric to ℒ_P, we obtain a continuous, semipositive, adelic metric on ℒ_P inducing the desired height function.
It remains to show that this height h_P satisfies h_P(B) = 0. This is a consequence of Propositions <ref> and <ref> and Zhang's inequalities on successive minima <cit.>. Recall that, since ĥ_E(P) ≠0, we know that there are infinitely many t∈ B(ℚ̄) for which
ĥ_E_t(P_t) = 0.
(For a complex-dynamical proof, see <cit.>.) Therefore, from Proposition <ref>, we may deduce that the essential minimum of h_P on B is equal to 0. On the other hand, from Proposition <ref>, we know that h_P(t) ≥ 0 for all t ∈ B(ℚ̄). Therefore, from <cit.>, we may conclude that h_P(B) = 0.
[Proof of Corollary <ref>]
When combined with the equidistribution theorems of Yuan and Thuillier <cit.>, we immediately obtain the corollary from Theorem <ref>. The measures μ_P,v are the curvature distributions associated to the metrics ‖·‖_v at each place v. From the definition of the metric in <ref>, we see that they are given locally by
μ_P,v = dd^c V_P,t_0, v(t)
in a v-adic neighborhood of any point t_0∈ B(ℚ̄), and for any choice of uniformizer u at t_0.
§ PROOF OF THEOREM <REF>.
§.§ Reduction to the case of a fiber product of elliptic surfaces
We first show that, to prove the theorem, it suffices to prove the result for sections of the fiber product A = E_1 ×_B⋯×_BE_m of m≥ 2 elliptic surfaces E_i→ B over the same base, and to assume that the line bundle is generated by the divisor
{O_E_1}× E_2×⋯ E_m+E_1×{O_E_2}×⋯× E_m+⋯ + E_1× E_2×⋯×{O_E_m}.
Let B be a quasiprojective smooth algebraic curve defined over ℚ̄. Suppose A → B is a family of abelian varieties defined over ℚ̄ that is isogenous to a fibered product of m≥ 2 elliptic surfaces. That is, there is a branched cover B' → B and m≥2 elliptic surfaces E_i → B' that give rise to an isogeny
E_1×_B'⋯×_B' E_m → A
over B'.
Now let ℒ be a line bundle on A which restricts to an ample and symmetric line bundle on each fiber A_t for t∈ B. Then ℒ pulls back to a line bundle ℒ' on E_1×_B'⋯×_B' E_m, and it again restricts to an ample and symmetric line bundle on each fiber over t ∈ B'.
Now suppose that we have a section P: B → A. The section P pulls back to a section P': B' → A, and this in turn pulls back to a (possibly multi-valued) section of E_1×_B'⋯×_B' E_m. If multi-valued, we can perform a base change again, passing to a branched cover B”→ B', so that the induced section P”: B”→ E_1×_B”⋯×_B” E_m is well defined. By definition, the assumption that P is non-special on A means that it is non-special as a section of E_1×_B”⋯×_B” E_m.
Finally, we observe that the conclusion of Theorem <ref> does not depend on the choice of line bundle. (We thank Joe Silverman for his help with this argument.) Recall that, on any abelian variety A defined over ℚ̄, the notion of a “small sequence" of points is independent of the choice of ample and symmetric line bundle. That is, if we take two ample and symmetric divisors D_1 and D_2, then we know that there exists an integer m_1>0 so that m_1 D_1 - D_2 is ample; similarly there exists m_2>0 so that m_2 D_2 - D_1 is ample. It follows from properties of the Weil height machine that the heights h_D_1 and h_D_2 will then satisfy
1/m_1 h_D_2 + C_1 ≤ h_D_1≤ m_2 h_D_2 + C_2
for real constants C_1, C_2. Upon passing to the canonical height, we conclude that
1/m_1ĥ_D_2≤ĥ_D_1≤ m_2 ĥ_D_2
on the abelian variety. In particular, ĥ_D_1(a_i) → 0 for some sequence in A(ℚ̄) if and only if ĥ_D_2(a_i) → 0. Now suppose we have a family of abelian varieties A → B. Two line bundles ℒ_1 and ℒ_2 associated to relatively ample and symmetric divisors induce canonical heights ĥ_ℒ_1,t and ĥ_ℒ_2,t on each fiber A_t. But recalling that ampleness persists on Zariski open sets <cit.>, there exist positive integers m_1 and m_2 so that the line bundles ℒ_1^m_1⊗ℒ_2^-1 and ℒ_2^m_2⊗ℒ_1^-1 are relatively ample on a Zariski open subset of the base B. Passing to the canonical heights once again, we find that the relation (<ref>) holds uniformly over B (after possibly excluding finitely many points). Therefore, for any section P: B → A, there exists a positive constant c(ℒ_1, P) as in Theorem <ref> for the height ĥ_ℒ_1 if and only if there exists such a constant c(ℒ_2, P) for ĥ_ℒ_2.
§.§ Proof for a fiber product of elliptic curves
Fix an integer m≥ 2, and let E_i → B for i=1,…,m be elliptic surfaces over the same base curve B, defined over ℚ̄. Let A = E_1×_B⋯×_B E_m, and let ℒ be the line bundle on E_1×_B ⋯×_B E_m associated to the divisor
D = {O_E_1}× E_2×⋯ E_m+E_1×{O_E_2}×⋯× E_m+⋯ + E_1× E_2×⋯×{O_E_m}.
For all but finitely many t ∈ B(ℚ̄), the canonical height ĥ_ℒ_t on the fiber A_t is easily seen to be the sum of canonical heights (see, e.g., <cit.> for properties of the height functions), so that
ĥ_ℒ_t = ∑_i=1^m ĥ_E_i,t.
Now assume that P = (P_1, …, P_m) is a section of A→ B. Define
ĥ_i(t) := ĥ_E_i,t(P_i(t))
for i = 1,…, m and for all t ∈ B(ℚ̄) where all E_i,t are smooth elliptic curves. Suppose there exists an infinite sequence {t_n}⊂ B(ℚ̄) for which
ĥ_i(t_n) → 0 for all i=1,…,m
as n→∞. We will prove that for every pair (i,j), there exists an infinite sequence {s_n}⊂ B(ℚ̄) so that
ĥ_i(s_n) = ĥ_j(s_n) = 0
for all n. In this way, we reduce our problem to the main results of <cit.> which imply that the pair (P_i, P_j) must be a special section of E_i ×_B E_j. Finally, we observe that our definition of a special section P = (P_1, P_2, …, P_m) is equivalent to the statement that every pair (P_i, P_j) is special. Therefore, for any non-special section P, we can conclude that there exists a constant c = c(P) >0 so that the set
{t ∈ B(ℚ̄): ĥ_ℒ_t(P_t) < c}
is finite.
Fix a pair (i,j). First assume that neither E_i nor E_j is isotrivial. If P_i or P_j is torsion, then the section (P_i, P_j) is special. Otherwise, we have ĥ_E_i(P_i) ≠0 and ĥ_E_j(P_j) ≠0, and we may apply Theorem <ref> to deduce that the height functions h_i and h_j are “good" on B. More precisely, we let M_i and M_j be the adelically metrized line bundles on the base curve B associated to the height functions ĥ_i and ĥ_j, from Theorem <ref>. They are both equipped with continuous adelic metrics of non-negative curvature. By assumption, we have
ĥ_i(t_n) → 0 ĥ_j(t_n) → 0
as n→∞. Therefore, we may apply the observation of Chambert-Loir <cit.>, which builds upon Zhang's inequalities <cit.>, to conclude that there exist integers n_i and n_j so that M_i^n_i and M_j^n_j are isomorphic as line bundles on B and their metrics are scalar multiples of one another. It follows that the height functions ĥ_i and ĥ_j are the same, up to scale, and in particular they have the same zero sets. In other words, P_i(t) is a torsion point on E_i,t if and only if P_j(t) is a torsion point on E_j,t (for all but finitely many t in B), and there are infinitely many such parameters t∈ B(ℚ̄).
Now suppose that E_i is isotrivial. The existence of the small sequence t_n in (<ref>) implies that either ĥ_E_i(P_i) ≠0 or P_i is torsion on E_i, and furthermore, if P_i is torsion, then it follows that (P_i, P_j) is a special section of E_i×_B E_j. Similarly if E_j is isotrivial. In other words, the existence of the sequence t_n in (<ref>) allows us to conclude that either (P_i,P_j) is a special pair, or we have that both ĥ_E_i(P_i) ≠0 and ĥ_E_j(P_j) ≠0. Therefore, we may proceed as above in the nonisotrivial case, applying Theorem <ref> to deduce that the heights ĥ_i and ĥ_j coincide, up to scale, and in particular there are infinitely many parameters s∈ B() where
ĥ_i(s) = ĥ_j(s) = 0.
This concludes the proof of Theorem <ref>.
§ VARIATION OF CANONICAL HEIGHT, ILLUSTRATED
In this final section, we provide a few illustrations of the distributions μ_P,v for an archimedean place v, arising in Corollary <ref>. In Proposition <ref>, we present a complex-dynamical proof that the archimedean measures μ_P,v will have support equal to all of B.
§.§ Images
Given E→ B and section P, we plot the parameters t where P_t is a torsion point on the fiber E_t of specified order. As proved in Corollary <ref>, the local height function at each place
t ↦λ̂_E_t,v(P_t)
determines the distribution of the torsion parameters; it is a potential for the measure μ_P,v (away from the singularities). Recall that if we have two sections P and Q that are linearly related on E, then the distributions of their torsion parameters in B will be the same.
Figure <ref>, top, illustrates the example of Silverman from <cit.>. Here, we have
E_t = {y^2 + xy/t + y/t = x^3 + 2 x^2/t}
with B = ℙ^1 and P_t = (0,0) in (x,y)-coordinates. Plotted are the torsion parameters of orders 2^n for all n≤ 8; that is, the points t in the base B where P_t is torsion of order 2^n on the fiber E_t. Roughly, a smaller yellow dot corresponds to a higher order of torsion. Figure <ref>, bottom, is another section of the same family, where the x-coordinate of P_t is constant and equal to -1/4. (Strictly speaking, this second P is not a section of our given E→ℙ^1, because the y-coordinate will not lie in ℚ(B) ≃ ℚ(t) but in an extension; however, the property of being torsion and the determination of its order is independent of which point in the fiber we choose.) Observe the distinctly different pattern of yellow dots in the first and second pictures, especially in the left half of the two pictures, illustrating the linear independence in E(k) of the two sections.
Figure <ref> illustrates the torsion parameters for two independent sections of the Legendre family,
E_t = {y^2 = x(x-1)(x-t)}
over B = ℙ^1, studied in <cit.>. The chosen sections are P_2, with constant x-coordinate equal to 2, and P_5, with constant x-coordinate equal to 5. As in Figure <ref>, we plot the torsion parameters of orders 2^n for all n≤ 8; generally, a smaller yellow dot signifies a higher order of torsion. It was proved in <cit.> that the limiting distributions for sections with constant x-coordinate satisfy μ_P_x,∞ = μ_P_x', ∞ (at an archimedean place) if and only if x=x'. It was proved in <cit.> and <cit.> that there are no t ∈ℙ^1(ℚ̄) for which both (P_2)_t and (P_5)_t are torsion on E_t. Again, observe the difference in the geometry of the yellow dots for the two independent sections.
Figure <ref> illustrates our equidistribution result, Corollary <ref>, for the example of the Legendre family with the section P_5. Plotted are the torsion parameters of orders 2^n with (a) n≤ 6, (b) n≤ 8, and (c) n≤ 10. Observe how the yellow dots fill in the “grid structure" in the base curve B, exactly as do the torsion points for one elliptic curve.
As mentioned above, the smaller yellow dots in the illustrations correspond, roughly, to higher orders of torsion. These images are produced with a standard escape-rate algorithm. We use the dynamical system f_t on ℙ^1, induced from multiplication by 2 on the elliptic curve E_t, from Section <ref>, line (<ref>). The coordinates on ℙ^1 are chosen so that ∞ is the image of the origin O of E_t. We mark t yellow if |f^n_t(π(P_t))| ≥ 10000 for some n≤ 8.
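For concreteness, here is a hedged Python sketch of this escape-rate algorithm, specialized to the Legendre family with the section P_5 of constant x-coordinate 5; from the duplication formulas of Section <ref>, the induced map is f_t(x) = (x^2 - t)^2/(4x(x-1)(x-t)). The scanning window and resolution are illustrative choices of ours.

```python
import numpy as np
import matplotlib.pyplot as plt

def escapes(t, x0=5.0, n_max=8, escape=1e4):
    # Mark t when the orbit of x0 under f_t approaches infinity,
    # i.e. 2^n * P_t approaches the origin O of E_t for some n <= n_max.
    x = complex(x0)
    for _ in range(n_max):
        den = 4 * x * (x - 1) * (x - t)
        if abs(den) < 1e-300:          # orbit hit a ramification point
            return True
        x = (x * x - t) ** 2 / den
        if abs(x) >= escape:
            return True
    return False

res = 400
re_t = np.linspace(-2.0, 3.0, res)
im_t = np.linspace(-2.0, 2.0, res)
grid = np.array([[escapes(complex(a, b)) for a in re_t] for b in im_t], dtype=float)
plt.imshow(grid, extent=[-2, 3, -2, 2], origin='lower')
plt.xlabel('Re t'); plt.ylabel('Im t'); plt.show()
```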
§.§ Density of torsion parameters
In all of these examples, the yellow dots will fill in the picture as the order of torsion grows, and the support of the measures μ_P,v is equal to all of B(ℂ). In fact, this will always be the case, for any (nontrivial) section of a complex elliptic surface, as our final result, Proposition <ref>, shows.
Let E→ B be an elliptic surface over a projective curve B, defined over ℂ, and let P: B→ E be a section for which ĥ_E(P) ≠0 (over the function field k = ℂ(B)). Let μ_P be the measure on B defined as in Proposition <ref>, as the pullback of the current T that restricts to Haar measure on each smooth fiber. In other words, μ_P is locally defined as the Laplacian of the function G_P(t) introduced in Proposition <ref>, which is well defined when working over ℂ.
Let E→ B be an elliptic surface over a projective curve B, defined over ℂ, and let P: B→ E be a section for which ĥ_E(P) ≠0 (over the function field k = ℂ(B)). Then the set
{t ∈ B: P_t is torsion on E_t}
is dense in B(ℂ), and
supp μ_P = B(ℂ).
We give a complex-dynamical proof, viewing Proposition <ref> as a consequence of the main result of <cit.>. (We do not use the equidistribution result, Corollary <ref>.) An analytic proof is also presented in <cit.>.
Let B^*⊂ B be a finitely-punctured Riemann surface such that the fiber E_t is smooth for all t ∈ B^*. Let π_t: E_t→ℙ^1 be the degree-two projection and f_t: ℙ^1 →ℙ^1 be the rational map induced by multiplication-by-2 on E_t, as defined in the introduction to Section <ref>. It is well known that the holomorphic family {f_t: t∈ B^*} is structurally stable; see, e.g., <cit.>. Thus, over any simply-connected subset U of B^*, there is a holomorphic motion of the periodic points of f_t which extends uniquely to a holomorphic motion of all of ℙ^1, conjugating the dynamics.
The key observation is that μ_P is precisely the “bifurcation measure" of the pair (f,P) on B^*. See <cit.> and <cit.> for definitions. The support of μ_P is equal to the bifurcation locus of (f,P); in particular, the parameters t∈ B^* for which π_t(P_t) is preperiodic for f_t are dense in the support of μ_P. Therefore, it suffices to show that supp μ_P = B.
Suppose to the contrary that there is an open disk U ⊂ B^* for which μ_P(U) = 0. Then the pair (f,P) is stable on U, and therefore π_t(P_t) cannot be a repelling periodic point for any t ∈ U. From the uniqueness of the holomorphic motion, it follows that t↦π_t(P_t) is part of the holomorphic motion on U. By analytic continuation, then, we deduce that π_t(P_t) must follow the motion of a point over all of B^*. This implies that the pair (f,P) is stable throughout B^*, and the measure μ_P is zero. But this is absurd by the assumption that ĥ_E(P) ≠0; see <cit.>.
[BR]BRbook
M. Baker and R. Rumely.
Potential theory and dynamics on the Berkovich projective
line, volume 159 of Mathematical Surveys and Monographs.
American Mathematical Society, Providence, RI, 2010.
[BH]Branner:Hubbard:1
B. Branner and J. H. Hubbard.
The iteration of cubic polynomials. I. The global topology
of parameter space.
Acta Math. 160(1988), 143–206.
[CS]Call:Silverman
G. S. Call and J. H. Silverman.
Canonical heights on varieties with morphisms.
Compositio Math. 89(1993), 163–205.
[CL1]ChambertLoir:equidistribution
A. Chambert-Loir.
Mesures et équidistribution sur les espaces de Berkovich.
J. Reine Angew. Math. 595(2006), 215–235.
[CL2]ChambertLoir:survey
Antoine Chambert-Loir.
Heights and measures on analytic spaces. A survey of recent
results, and some remarks.
In Motivic integration and its interactions with model theory
and non-Archimedean geometry. Volume II, volume 384 of London
Math. Soc. Lecture Note Ser., pages 1–50. Cambridge Univ. Press, Cambridge,
2011.
[DG]DG:rationality
L. DeMarco and D. Ghioca.
Rationality of dynamical canonical height.
Preprint, 2016.
[De1]D:stableheight
Laura DeMarco.
Bifurcations, intersections, and heights.
Algebra Number Theory 10(2016), 1031–1056.
[De2]D:KAWA
Laura DeMarco.
KAWA 2015: Dynamical moduli spaces and elliptic curves.
To appear, Ann. Fac. Sci. Toulouse Math.
[DWY]DWY:Lattes
Laura DeMarco, Xiaoguang Wang, and Hexi Ye.
Torsion points and the Lattès family.
Amer. J. Math. 138(2016), 697–732.
[FG]Favre:Gauthier:cubics
Charles Favre and Thomas Gauthier.
Classification of special curves in the space of cubic polynomials.
Preprint, 2016.
[FS]Fornaess:Sibony
J. E. Fornæss and N. Sibony.
Complex dynamics in higher dimensions.
In Complex Potential Theory (Montreal, PQ, 1993), pages
131–186. Kluwer Acad. Publ., Dordrecht, 1994.
[HS]Hindry:Silverman
Marc Hindry and Joseph H. Silverman.
Diophantine geometry, volume 201 of Graduate Texts in
Mathematics.
Springer-Verlag, New York, 2000.
An introduction.
[HP]Hubbard:Papadopol
J. Hubbard and P. Papadopol.
Superattractive fixed points in C^n.
Indiana Univ. Math. J. 43(1994), 321–365.
[La1]Lang:Diophantine
Serge Lang.
Fundamentals of Diophantine geometry.
Springer-Verlag, New York, 1983.
[La2]Lang:Arakelov
Serge Lang.
Introduction to Arakelov theory.
Springer-Verlag, New York, 1988.
[La3]Lazarsfeld:Positivity:I
Robert Lazarsfeld.
Positivity in algebraic geometry. I, volume 48 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of
Modern Surveys in Mathematics.
Springer-Verlag, Berlin, 2004.
[Ly]Lyubich:entropy
M. Lyubich.
Entropy properties of rational endomorphisms of the Riemann
sphere.
Ergodic Theory Dynamical Systems 3(1983), 351–385.
[MZ1]Masser:Zannier
D. Masser and U. Zannier.
Torsion anomalous points and families of elliptic curves.
Amer. J. Math. 132(2010), 1677–1691.
[MZ2]Masser:Zannier:2
D. Masser and U. Zannier.
Torsion points on families of squares of elliptic curves.
Math. Ann. 352(2012), 453–484.
[MZ3]Masser:Zannier:nonsimple
D. Masser and U. Zannier.
Torsion points on families of products of elliptic curves.
Adv. Math. 259(2014), 116–133.
[MZ4]Masser:Zannier:simpleA
David Masser and Umberto Zannier.
Torsion points on families of simple abelian surfaces and Pell's
equation over polynomial rings.
J. Eur. Math. Soc. (JEMS) 17(2015), 2379–2416.
With an appendix by E. V. Flynn.
[Ma]Mavraki:Weierstrass
Niki Myrto Mavraki.
Impossible intersections in a Weierstrass family of elliptic
curves.
J. Number Theory 169(2016), 21–40.
[Mc]McMullen:CDR
C. McMullen.
Complex Dynamics and Renormalization.
Princeton University Press, Princeton, NJ, 1994.
[Mi]Milnor:Lattes
John Milnor.
On Lattès maps.
In Dynamics on the Riemann sphere, pages 9–43. Eur. Math.
Soc., Zürich, 2006.
[Mu]Mumford:Abelian Varieties
David Mumford.
Abelian varieties.
Oxford University Press, London 1970.
[Ra1]Raynaud:1
M. Raynaud.
Courbes sur une variété abélienne et points de torsion.
Invent. Math. 71(1983), 207–233.
[Ra2]Raynaud:2
M. Raynaud.
Sous-variétés d'une variété abélienne et points de
torsion.
In Arithmetic and geometry, Vol. I, volume 35 of Progr. Math., pages 327–352. Birkhäuser Boston, Boston, MA, 1983.
[Si1]Silverman:VCHI
Joseph H. Silverman.
Variation of the canonical height on elliptic surfaces. I. Three
examples.
J. Reine Angew. Math. 426(1992), 151–178.
[Si2]Silverman:Advanced
Joseph H. Silverman.
Advanced Topics in the Arithmetic of Elliptic Curves, volume
151 of Graduate Texts in Mathematics.
Springer-Verlag, New York, 1994.
[Si3]Silverman:VCHII
Joseph H. Silverman.
Variation of the canonical height on elliptic surfaces. II.
Local analyticity properties.
J. Number Theory 48(1994), 291–329.
[Si4]Silverman:VCHIII
Joseph H. Silverman.
Variation of the canonical height on elliptic surfaces. III.
Global boundedness properties.
J. Number Theory 48(1994), 330–352.
[Si5]Silverman:Elliptic
Joseph H. Silverman.
The Arithmetic of Elliptic Curves, volume 106 of Graduate
Texts in Mathematics.
Springer, Dordrecht, second edition, 2009.
[St]Stoll:torsion
Michael Stoll.
Simultaneous torsion in the Legendre family.
To appear, Exper. Math. Available online
doi = 10.1080/10586458.2016.1201443.
[SUZ]Szpiro:Ullmo:Zhang
L. Szpiro, E. Ullmo, and S. Zhang.
Équirépartition des petits points.
Invent. Math. 127(1997), 337–347.
[Ta]Tate:variation
J. Tate.
Variation of the canonical height of a point depending on a
parameter.
Amer. J. Math. 105(1983), 287–294.
[Th]Thuillier:these
Amaury Thuillier.
Théorie du potentiel sur les courbes en géométrie
analytique non archimedienne. Applications à la théorie d'Arakelov.
Thèse, Université de Rennes 1, 2005.
[Ul]Ullmo:Bogomolov
Emmanuel Ullmo.
Positivité et discrétion des points algébriques des courbes.
Ann. of Math. (2) 147(1998), 167–179.
[Yu]Yuan:equidistribution
Xinyi Yuan.
Big line bundles over arithmetic varieties.
Invent. Math. 173(2008), 603–649.
[Za]Zannier:book
Umberto Zannier.
Some problems of unlikely intersections in arithmetic and
geometry, With appendixes by David Masser.
Annals of Mathematics Studies. Volume 181, Princeton University Press, Princeton, NJ, 2012.
[Zh1]Zhang:positive
Shou-Wu Zhang.
Positive line bundles on arithmetic varieties.
J. Amer. Math. Soc. 8(1995), 187–221.
[Zh2]Zhang:ICM
Shou-Wu Zhang.
Small points and Arakelov theory.
In Proceedings of the International Congress of
Mathematicians, Vol. II (Berlin, 1998), number Extra Vol. II, pages
217–225 (electronic), 1998.
[Zh3]Zhang:Bogomolov
Shou-Wu Zhang.
Equidistribution of small points on abelian varieties.
Ann. of Math. (2) 147(1998), 159–165.
[Zh4]Zhang:adelic
Shou-Wu Zhang.
Small points and adelic metrics.
J. Algebraic Geom. 4(1995), 281–300.
anacleto@df.ufcg.edu.br
Departamento de Física, Universidade Federal de Campina Grande
Caixa Postal 10071, 58429-900 Campina Grande, Paraíba, Brazil
fabrito@df.ufcg.edu.br
Departamento de Física, Universidade Federal de Campina Grande
Caixa Postal 10071, 58429-900 Campina Grande, Paraíba, Brazil
Departamento de Física, Universidade Federal da Paraíba,
Caixa Postal 5008, 58051-970 João Pessoa, Paraíba, Brazil
stefanejudith@gmail.com
Departamento de Física, Universidade Federal de Campina Grande
Caixa Postal 10071, 58429-900 Campina Grande, Paraíba, Brazil
passos@df.ufcg.edu.br
Departamento de Física, Universidade Federal de Campina Grande
Caixa Postal 10071, 58429-900 Campina Grande, Paraíba, Brazil
In this paper we consider the solution of a black hole with a global monopole in f(R) gravity
and apply the partial wave approach to compute the differential scattering cross section and absorption cross section.
We show that in the low-frequency limit and at small angles the contribution to the dominant term in the scattering/absorption cross section is modified by the presence of the global monopole and by the modification of gravity. In this limit, the absorption cross section turns out to be proportional to the area of the event horizon.
Absorption and scattering of a black hole with a global monopole in f(R) gravity
E. Passos
================================================================================
§ INTRODUCTION
Black holes are fascinating objects with remarkable characteristics, one of which is that they behave like thermodynamic systems, possessing temperature and entropy.
Black holes are exact solutions of Einstein equations which are determined by mass (M), electric charge (Q)
and angular momentum (J) <cit.>, and they play an important role in modern physics.
In particular, black holes with a global monopole have been explored extensively by many authors in recent years <cit.>,
and the metric for this kind of black hole was determined by Barriola and Vilenkin <cit.>.
Global monopoles are topological defects that arise in gauge theories due to the spontaneous symmetry breaking
of the original global O(3) symmetry to U(1) <cit.>.
They are a type of defect that could have formed during phase transitions in the evolution of the early Universe.
From the cosmological point of view, the so-called f(R) theory of gravity introduces the possibility of explaining the accelerated-inflation problem without the need to consider dark matter or dark energy <cit.>.
In <cit.> the classical motion of a massive test particle in the gravitational field of a global monopole in f(R) gravity has been investigated.
The authors in <cit.> have calculated, using the WKB approximation, the quasinormal modes for a black hole with a global monopole in f(R) theory of gravity.
The thermodynamics of black holes with a global monopole in f(R) gravity was discussed in <cit.>, and the case of strong gravitational lensing for a massive source with a global monopole in f(R) gravity was treated analytically in <cit.>.
The main objective of this work is to compute the scattering cross section due to a black hole with a global monopole
in f(R) gravity theory. In <cit.> the absorption problem was analyzed for a massless scalar field propagating in general static spherically symmetric black holes with a global monopole.
The geometry associated with topological defects such as global monopoles normally carries a solid deficit angle characterized by some coupling parameter. As a Schwarzschild black hole swallows a global monopole, its geometry is also affected and inherits a solid deficit angle. As a consequence, the event horizon is modified, and so are all the properties that follow from it. It is then natural to investigate how deeply the differential scattering/absorption cross sections of the black-hole global-monopole system are affected. Previous studies have indicated that the global monopole tends to increase these quantities for a sufficiently large global-monopole coupling parameter. In our present study we extend previous analyses in Einstein gravity to f(R) gravity. It is natural to investigate the interplay between the parameters of the global monopole and of f(R) gravity, and to uncover the ultimate physical consequences, by analyzing the differential scattering/absorption cross sections of the black-hole global-monopole system.
Understanding the processes of absorption and scattering in the vicinity of black holes is one of the most important issues in theoretical physics and is also of great relevance for experimental research.
We can explore the dynamics of a black hole by trying to disturb it away from its stationary configuration.
Thus, examining the interaction of fields with black holes is of great importance to understand aspects about formation, stability, and gravitational wave emission.
For many years, several theoretical works have been done to investigate the black hole scattering <cit.> (see also references therein).
Since 1970, many works have shown that in the long-wavelength limit
(GMω≪ 1) <cit.>, the differential scattering cross section at small angles presents the following result: dσ/dΩ≈ 16G^2M^2/θ^4.
In addition, the calculation of the low-energy absorption cross section has been studied extensively in the literature <cit.>.
Thus, in this case the absorption cross section of a massless neutral scalar field in the long-wavelength limit is equal to the area of the horizon, σ=4π r^2_h=16π G^2 M^2 <cit.>.
On the other hand, for fermion fields it has been shown by Unruh <cit.> that the absorption cross section is 2π G^2M^2 in the low-energy limit. This is exactly 1/8 of the result for the scalar wave in the low-energy limit.
An extension of the calculation of the absorption cross section for acoustic waves was performed in <cit.>, <cit.> and <cit.>.
The partial wave approach has also been extended to investigate the scattering by an acoustic black hole in (2 + 1) dimensions <cit.> and also due to a non-commutative BTZ black hole <cit.>.
Also some studies have been carried out on the processes of absorption and scattering of massive fields by black holes <cit.>. For computations of scattering by spherically symmetric d-dimensional black holes in string theory see, e.g., <cit.>.
In this paper, inspired by all of these previous works and adopting the technique developed by the authors in <cit.>, we shall focus on the computation of the scattering and absorption cross section for a monochromatic planar wave of neutral massless scalar field impinging upon a black hole with a global monopole in f(R) gravity.
In this scenario there are four parameters: the mass M of the black hole, the frequency ω of the field, the monopole parameter η and ψ_0 associated with the corrections from the f(R) gravity.
Thus, we have three dimensionless parameters: GMω, 8π Gη^2≈ 10^-5 and a=ω/ψ_0.
In our analysis, we will consider only the long-wavelength regime, in which GMω≪ 1.
Dolan et al. <cit.> studied the analogous Aharonov-Bohm effect by considering the scattering of planar waves by a
draining bathtub vortex. They implemented an approximation formula to calculate the phase shift δ_l≈ (m-m̃) analytically,
where m̃ is defined by considering only the contributions of m and the frequency ω appearing
in the modified 1/r^2 term after the power series expansion in 1/r.
In an analogous way we introduce the following approximation: δ_l≈ (l-ℓ),
where ℓ is defined by considering only the contributions of l and ω appearing
in the modified 1/r^2 term after the power series expansion in 1/r.
Then, we have verified that the presence of the parameters η and ψ_0 modifies the dominant term of the differential scattering cross section in the low-frequency limit at small angles, and also the absorption cross section.
We initially analyzed the example of the black hole with a global monopole and showed that the contribution to the dominant term of the differential cross section is increased due to the monopole effect as well as to the absorption.
On the other hand, considering the case of a black hole with a global monopole in f(R) theory, we find that in the low-frequency limit the contribution to the dominant term of the differential scattering cross section and for the absorption cross section is also increased due to the effect of the f(R) theory.
Here we adopt the natural units ħ=c=k_B=1.
§ SCATTERING/ABSORPTION CROSS SECTION
In this section we are interested in determining the differential scattering cross section for a black hole with a global monopole in f (R) gravity by the partial wave method in the low frequency regime. For this purpose we will follow the procedure adopted in previous works to calculate the phase shift.
§.§ The global monopole in Einstein gravity
Initially, we will consider a spherically symmetric line element of a black hole with a global monopole that is given by
ds^2=A(r)dt^2-dr^2/A(r)-r^2dΩ^2,
where
A(r)=1-8π Gη^2 -2GM/r.
Here, G is the Newton constant,
η is the monopole parameter of the order 10^16GeV and
so 8π Gη^2≈ 10^-5 <cit.>.
The event horizon radius is obtained by A(r)=0, i.e.
r_η=2GM/(1-8π Gη^2)=r_s/(1-8π Gη^2),
where r_s=2GM is the event horizon of the Schwarzschild black hole.
The Hawking temperature of the black hole is
T_H=1/4π(1-8π Gη^2/r_s).
For η=0 the Hawking temperature of the Schwarzschild black hole is recovered.
The next step is to consider the Klein-Gordon wave equation for a massless scalar field in the background (<ref>)
1/√(-g)∂_μ(√(-g)g^μν∂_νΦ)=0 .
Now we can make a separation of variables into the equation above as follows
Φ_ω l m( r,t)=R_ω l(r)/rY_lm(θ,ϕ)e^-iω t,
where ω is the frequency and Y_lm(θ,ϕ) are the spherical harmonics.
In this case, the equation for R_ω l(r) can be written as
A(r)d/dr(A(r)dR_ω l(r)/dr) +[ ω^2 -V_eff]R_ω l(r)=0,
and
V_eff=A(r)/rdA(r)/dr+A(r)l(l+1)/r^2,
is the effective potential.
At this point, we consider a new radial function, ψ(r)=A^1/2(r)R(r), so we have
d^2ψ(r)/dr^2+U(r) ψ(r) = 0,
where
U(r)=[A'(r)]^2/4 A^2(r) - A''(r)/2A(r) + ω^2/A^2(r) - V_eff/A^2(r),
and
A'(r)=dA(r)/dr =2GM/r^2 , A''(r)=d^2A(r)/dr^2= - 4GM/r^3 .
Now, performing a power series expansion in 1/r, Eq. (<ref>) becomes
d^2ψ(r)/dr^2+[ω̃^2+ V(r)+ U(r)] ψ(r) = 0,
where now we have
V(r)= 4GMω̃^2/(1-8π Gη^2)r+12ℓ^2/r^2,
and
U(r) = 32 G^3 M^3ω̃^3-2 ( l^2+l)G Mω̃(1-8π Gη^2)- 16π Gη^2GMω̃/ω̃(1-8π Gη^2)^3r^3
+ 1ω̃^2(1-8π Gη^2)^4r^4[80 G^4 M^4ω̃^4+G^2 M^2ω^2-4(l^2+l)G^2 M^2ω^2(1-8π Gη^2)
- (1-8π Gη^2)[8-5(1-8π Gη^2) ]G^2 M^2ω̃^2]+⋯,
with ω̃=ω/(1-8π Gη^2) and we define
ℓ^2≡-(l^2+l)/12(1-8π Gη^2)+G^2M^2ω̃^2/(1-8π Gη^2)^2.
Here ℓ^2 was defined as the change of the coefficient of 1/r^2 (containing only the contributions involving the quantities l and ω) that arises after performing the power series expansion in 1/r in Eq. (<ref>).
Notice that when r →∞ the total potential V(r)+ U(r) → 0, so the required asymptotic behavior is satisfied.
Thus, knowing the phase shifts, the scattering amplitude can be obtained; it has the following partial-wave representation
f(θ)=1/2iω∑_l=0^∞(2l+1)(e^2iδ_l -1 )P_l(cosθ),
and the differential scattering cross section can be computed by the formula
dσdθ=|f(θ) |^2.
The phase shift δ_l can be obtained by applying the following approximation formula
δ_l≈1/2(l-ℓ)=1/2(l-√(-(l^2+l)/12(1-8π Gη^2)+G^2M^2ω̃^2/(1-8π Gη^2)^2)).
In the limit l→ 0 we obtain
δ_l=-GMω̃/2(1-8π Gη^2)+ O(l)=-GMω/2(1-8π Gη^2)^2+ O(l).
Note that in the limit l→ 0 the phase shift tends to a non-zero term,
which naturally leads to a correct result for the differential cross section in the small-angle limit.
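For concreteness, the following Python sketch evaluates the approximation formula above and checks the l→ 0 limit against Eq. (<ref>); the parameter values, and the grouping assumed in reading the printed ℓ^2 expression, are illustrative assumptions rather than values taken from this work.

import numpy as np

# Illustrative parameters; long-wavelength regime G*M*omega << 1 (assumption).
G, M = 1.0, 1.0
k = 1.0 - 1e-5                  # k = 1 - 8*pi*G*eta^2, of order 1 - 10^-5
omega = 0.01
omega_t = omega / k             # omega-tilde = omega / (1 - 8*pi*G*eta^2)

def delta(l):
    # ell^2 from the modified 1/r^2 coefficient; the grouping -(l^2+l)*k/12
    # is our reading of the printed formula (assumption).
    ell2 = -(l**2 + l) * k / 12.0 + (G * M * omega_t / k)**2
    return 0.5 * (l - np.sqrt(ell2 + 0j))   # complex root allowed for l >= 1

# The l -> 0 limit reproduces delta_0 = -G*M*omega / (2 * (1 - 8*pi*G*eta^2)^2).
print(delta(0.0).real, -G * M * omega / (2 * k**2))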
Another way of obtaining the same phase shift is through the Born approximation formula
δ_l≈ω/2∫_0^∞r^2J_l^2(ω r) U(r)dr,
where J_l(x) are the spherical Bessel functions of the first
kind and U(r) is the effective potential of Eq. (<ref>).
After performing the integration we take the limits of ω→ 0 and l→ 0.
So the result is the same as Eq. (<ref>).
Eq. (<ref>) is poorly convergent, so it is very difficult to perform the sum of the series directly. This is due to the fact that an infinite number of Legendre polynomials is required to reproduce the divergence at θ=0.
In <cit.>, the authors found a way around this problem.
They proposed a reduced series which is less divergent at θ=0, i.e.
(1-cosθ)^m f(θ)=∑_l=0a_l^m P_l(cosθ),
and so it is expected that the reduced series can converge more quickly.
Therefore, to determine the differential scattering cross section, we will use the following equation <cit.>
dσdθ=| 1/2iω̃∑_l=0^1(2l+1)(e^2iδ_l -1 )
P_l(cosθ)/(1-cosθ)|^2.
Considering only a few values of l (l=0,1) is sufficient to obtain the result satisfactorily.
Hence the differential scattering cross section is in this case given by
dσ/dθ |^l f_ω→ 0=16G^2M^2/( 1-8π Gη^2)^2θ^4+⋯
=16G^2M^2/θ^4[ 1+16π Gη^2+ O(Gη^2)^2] +⋯ .
The dominant term is modified by the monopole parameter η.
Thus, we verified that the differential cross section is increased by the monopole effect.
As η=0 we obtain the result for the Schwarzschild black hole case.
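As a rough numerical illustration of the reduced series, the Python sketch below sums the l=0,1 terms, setting both phase shifts to the l→ 0 value — an assumption made purely for illustration — and checks that θ^4 dσ/dθ approaches the constant 16G^2M^2/(1-8π Gη^2)^2 at small angles.

import numpy as np
from scipy.special import eval_legendre

G, M = 1.0, 1.0
k = 1.0 - 1e-5                       # 1 - 8*pi*G*eta^2
omega_t = 0.01 / k
d0 = -G * M * omega_t / (2 * k)      # l -> 0 phase shift, reused for l = 1 (assumption)
deltas = {0: d0, 1: d0}

def dsigma_dtheta(theta):
    s = sum((2 * l + 1) * (np.exp(2j * d) - 1) * eval_legendre(l, np.cos(theta))
            for l, d in deltas.items())
    return abs(s / (2j * omega_t * (1 - np.cos(theta))))**2

# theta^4 * dsigma/dtheta should approach 16 G^2 M^2 / k^2 as theta -> 0
for theta in (0.05, 0.02, 0.01):
    print(theta, dsigma_dtheta(theta) * theta**4)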
Now we will determine the absorption cross section for a black hole with a global monopole in the low-frequency limit.
As is well known in quantum mechanics, the total absorption cross section can be computed by means of the following relation
σ_abs
=π/ω^2∑_l=0^∞(2l+1)(|1-e^2iδ_l|^2).
For the phase shift δ_l of the Eq. (<ref>), we obtain in the limit ω→ 0:
σ_abs^l f = π/ω̃^2∑_l=0^3(2l+1)(|1-e^2iδ_l|^2),
= 16π G^2M^2/( 1-8π Gη^2)^2= A_Sch/( 1-8π Gη^2)^2,
where A_Sch =4π r^2_s is the area of the event horizon of the Schwarzschild black hole.
So for a few values of l (l=0,1,2,3) the result is successfully obtained.
Here we note that the absorption is increased due to the contribution of the monopole.
In <cit.> the absorption cross section of a massless scalar wave due to a black hole with a global monopole has been computed. The authors have shown that the effect of the parameter η makes the black hole absorption stronger.
Our result is in accordance with the one obtained in <cit.>.
Furthermore, our results for absorption are in accordance with the universality property that the absorption cross section is always proportional to the area of the event horizon in the low-frequency limit <cit.>.
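The low-frequency absorption sum can be evaluated along the same lines. The Python sketch below implements the partial-wave sum for given phase shifts; note that the real l=0 phase shift alone only exhibits the scaling σ_abs∝ G^2M^2/(1-8π Gη^2)^2, the full coefficient 16π quoted above requiring the l≥ 1 (complex) contributions. All inputs are illustrative.

import numpy as np

G, M = 1.0, 1.0
k = 1.0 - 1e-5                       # 1 - 8*pi*G*eta^2
omega_t = 1e-3 / k

def sigma_abs(deltas, w):
    # |1 - exp(2i delta)|^2 handles real and complex phase shifts alike
    return (np.pi / w**2) * sum((2 * l + 1) * abs(1 - np.exp(2j * d))**2
                                for l, d in enumerate(deltas))

d0 = -G * M * omega_t / (2 * k)
print(sigma_abs([d0], omega_t))        # l = 0 term alone: ~ pi G^2 M^2 / k^2
print(16 * np.pi * (G * M)**2 / k**2)  # full low-frequency result quoted above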
In addition, in Fig. <ref> we show the graph for the mode l=0 of the absorption cross section that was obtained by numerically solving the radial equation (<ref>) for arbitrary frequencies.
§.§ The global monopole in f(R) gravity
We will now compute the differential scattering cross section of a black hole with a global monopole in the
f(R) gravity. The spherically symmetric line element is given as follows <cit.>
ds^2=B(r)dt^2-dr^2/B(r)-r^2dΩ^2,
where
B(r)=A(r)-ψ_0 r, A(r)=1-8π Gη^2 -2GM/r.
The term ψ_0 r corresponds to the extension of the standard general relativity.
For metric (<ref>), when B(r)=0 we obtain the following internal and external event horizons, respectively
r_-=1-8π Gη^2-√((1-8π Gη^2)^2-8GMψ_0)/2ψ_0,
and
r_+=1-8π Gη^2+√((1-8π Gη^2)^2-8GMψ_0)/2ψ_0.
Adding r_- and r_+ we also find the following relationship between horizons
r_++r_-=1/ψ_0(1-8π Gη^2).
Notice that the horizon r_+ exists only if ψ_0 is nonzero.
Considering that ψ_0 is small and expanding the square root term in Eq. (<ref>) we obtain
r_η=2GM/1-8π Gη^2+4G^2M^2ψ_0+⋯=r_s/1-8π Gη^2+r_s^2ψ_0+⋯,
which for ψ_0=0 is exactly the result previously obtained in Eq. (<ref>) and when η=ψ_0=0 we have r_η=r_s,
that is the event horizon of the Schwarzschild black hole.
For Eq. (<ref>), considering ψ_0 very small, we find
r_ψ_0 = 1/ψ_0(1-8π Gη^2)-r_s/(1-8π Gη^2)-r_s^2ψ_0+⋯,
= 1/ψ_0(1-8π Gη^2)-r_η+⋯, ⇒
r_ψ+r_η=1/ψ_0(1-8π Gη^2).
Observe that, in the limit ψ_0→ 0, the effect of the f(R) theory gives a dominant contribution only to the
external horizon r_+.
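These horizon relations are straightforward to verify numerically; the Python sketch below solves B(r)=0 for illustrative parameter values and checks the sum rule (<ref>) together with the first-order small-ψ_0 expansion of r_η.

import numpy as np

G, M, psi0 = 1.0, 1.0, 1e-3           # illustrative values (assumption)
k = 1.0 - 1e-5                        # 1 - 8*pi*G*eta^2

s = np.sqrt(k**2 - 8 * G * M * psi0)
r_minus, r_plus = (k - s) / (2 * psi0), (k + s) / (2 * psi0)

B = lambda r: k - 2 * G * M / r - psi0 * r
print(B(r_minus), B(r_plus))                           # both vanish at the horizons
print(r_plus + r_minus, k / psi0)                      # sum rule r_+ + r_-
print(r_minus, 2 * G * M / k + (2 * G * M)**2 * psi0)  # r_eta to first order in psi0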
The Hawking temperature associated with the black hole of Eq. (<ref>) is
T_H=1/4π(1-8π Gη^2/r_h -2ψ_0 )
=1/4π(1-8π Gη^2)r_+-r_-/r_-(r_++r_-),
where in the last step of Eq. (<ref>) we have used Eq. (<ref>).
If η=0 and ψ_0=0 (in the absence of monopoles and f(R) corrections) the Hawking temperature will be reduced to that of the Schwarzschild case, as expected.
However, we also note in (<ref>) that for T_H≥0 it is necessary that r_+≥ r_- when ψ_0 is turned on. This implies that the external horizon, r_+, should be larger than or equal to the internal horizon r_-.
The temperature is zero when one saturates the lower bound, i.e., when r_+=r_-. This is an effect similar to what happens with Reissner-Nordström black holes.
Following the same steps applied to the previous case, Eq. (<ref>) can be now written as
d^2ψ(r)/dr^2+[(ℓ^2 +1/4)/r^2+ U(r)] ψ(r) = 0,
being
U(r) = a(1 - 8 π G η^2)/2 + a (l^2+l + 1)+2(1 - 8 π G η^2) a^3/ω r^3
+ 1/ω^2r^4[ -4 G Mω (a^3+a) +(3 - 48 π G η^2)a^4
+ 3a^2/4 -8 π G η^2 a^2 + a^2 l (l + 1) (1 - 8 π G η^2)
+ 2a^2(1 - 8 π G η^2)]+⋯ ,
where we have defined ℓ^2≡ a^2 and a≡ω/ψ_0.
Analyzing the coefficient of 1/r^2 in Eq. (<ref>), only the first term contains the frequency ω, while the second term is just a numerical factor. Thus, as already mentioned for Eq. (<ref>), only the first term enters the definition of ℓ^2.
Note that the potential V(r)=(ℓ^2 +1/4)/r^2+U(r) obeys the asymptotic limit V(r) → 0 as r →∞.
Next following the same approximation used in the formula (<ref>) the phase shift δ_l in the limit l→ 0 reads
δ_l=-ω/2ψ_0+ O(l)
=-ω̅(r_ψ+r_η)/4 + O(l),
where ω̅ =ω/2( 1-8π Gη^2) and we have used the result of Eq. (<ref>) to express the phase shift in terms of r_ψ and r_η. Also in this case the phase change tends to a non-zero constant term in the limit l→ 0.
Once again we can verify that the phase shift could have been obtained from the Born approximation formula
δ_l≈ω/2∫_0^∞r^2J_l^2(ω r)U(r)dr.
Thus, in the low-frequency (long-wavelength) limit and at the small angle θ, the differential scattering cross section is given by
dσdθ |^l f_ω→ 0 = | 1/2iω̅∑_l=0^1(2l+1)(e^2iδ_l -1 )P_l(cosθ)/(1-cosθ)|^2,
= 4 (r_ψ_0+r_η)^2/θ^4+⋯.
In the limit ψ_0→ 0 we have r_ψ_0≫ r_η and the differential scattering cross section becomes
dσdθ |^l f_ω→ 0≈4 r_ψ_0^2/θ^4+⋯
=4( 1-8π Gη^2)^2/ψ^2_0θ^4+⋯.
We see that the presence of the parameter ψ_0 modifies the dominant term.
Now we will determine the absorption cross section for a black hole with a global monopole in f(R)
gravity in the low-frequency limit. So, for the phase shift δ_l (<ref>), applying the limit ω→ 0 we find
σ_abs^l f = π/ω̅^2∑_l=0^3(2l+1)(|1-e^2iδ_l|^2),
= 4π( r_ψ_0+r_η)^2.
In the limit ψ_0→ 0 we have r_ψ_0≫ r_η and the absorption cross section becomes
σ_abs^l f≈4π( 1-8π Gη^2)^2/ψ_0^2= 4π r_ψ_0^2
=A_ψ_0,
which is dominated by the effect of the f(R) gravity.
Thus, in this case the black hole with a global monopole in f(R) gravity absorbs more for a larger external horizon.
Finally, in the regime r_ψ_0=0 and η=0 from Eq. (<ref>) we obtain the result for the case of the Schwarzschild black hole σ_abs^l f=4π r_s^2=A_Sch.
It is worth mentioning that, for the metric function B(r) in (<ref>), B(r) diverges in the limit r→∞. Thus, by analyzing the third term of the potential U(r) of Eq. (<ref>), we find that this term tends to zero as r goes to infinity. This is reflected in the absence of the frequency ω^2 and the mass M in the 1/r^2 term of Eq. (<ref>) when we perform a series expansion of the potential in 1/r. As a consequence, the mass M does not appear in the absorption result. In order to avoid this, and also to be in accordance with the numerical result, we will consider Eq. (<ref>), after a variable change (1/r → 1+1/r), written as follows:
d^2ψ(r)/dr^2+U(r) ψ(r) = 0,
where
U(r)=ω^2/C^2(r)+1/C^2(r)([C'(r)]^2/4 - C''(r)C(r)/2 -C(r)C'(r)/r-C(r)l(l+1)/r^2),
being
C(r)=1-8π Gη^2-2GM+λ/r-ψ_0/r(1+r).
Notice that when r→∞ the term ω^2/C^2(r) tends to ω^2 /(1-8π Gη^2)^2.
Now writing the potential U(r) as a power series in 1/r we have
d^2ψ(r)/dr^2+[ω̃^2+ U(r)] ψ(r) = 0,
where ω̃=ω/(1-8π Gη^2) and
U(r)=ω̃^2(4GM+ψ_0)/(1 - 8 π Gη^2) r
+12ℓ^2/r^2+⋯,
here we have defined
ℓ^2≡-(l^2+l)/12(1-8π Gη^2)+[G^2M^2+GMψ_0+ψ_0(1-8π Gη^2)/6+ψ_0^2/8]ω̃^2/(1-8π Gη^2)^2.
Now we compute the phase shift through the approximation formula (<ref>) in the limit l→ 0 and considering ψ_0 very small, which is given by
δ_l=-GMω̃/2(1-8π Gη^2)[1+ψ_0(G M+(1-8π Gη^2)/6 )/2G^2 M^2]+ O(l).
Applying the formula (<ref>) we can obtain the following result for the differential scattering cross section
dσ/dθ |^l f_ω→ 0=16G^2M^2/( 1-8π Gη^2)^2θ^4[1+ψ_0(G M+(1-8π Gη^2)/6 )/2G^2 M^2]^2+⋯.
Note that the dominant term is modified by the parameters η and ψ_0.
In the low-frequency limit the absorption cross section reads
σ_abs^l f
=16π G^2M^2/(1-8π Gη^2)^2[1+ψ_0(G M+(1-8π Gη^2)/6 )/2 G^2 M^2]^2.
For η=0, we can verify that increasing the value of ψ_0 increases the absorption, due to the effect of the f(R) gravity.
And when ψ_0=0, we recover the result of Eq. (<ref>). This can be best understood by looking at the graph of Fig. <ref>
for the mode l=0, which was obtained by numerically solving the radial equation (<ref>) (with A(r) → B(r)) for arbitrary frequencies.
At this point we present the numerical results of the partial absorption cross section as a function of arbitrary frequencies obtained through the numerical procedure as described in <cit.>. The graphs are shown below.
In Fig. <ref>(a), we plot the partial absorption cross section for the l=0 mode with η=0.000, 0.100, 0.150 and 0.200.
We can see by comparing the curves for different values of η that the absorption is increased due to the contribution of the monopole.
Moreover, when Mω→ 0 the absorption tends to a nonzero value, and as Mω increases it tends to zero.
For η =0 the graph shows the result of the partial absorption for the Schwarzschild black hole.
Thus for non-zero values of η, the partial absorption for the black hole with global monopole is increased in relation to the Schwarzschild black hole. Our result is in agreement with the one obtained in <cit.>, for instance.
The effect of f(R) gravity on the partial absorption cross section for the l=0 mode can be seen in Fig. <ref>(b).
Note that, considering the effect of f(R) gravity, the absorption is still increased in relation to the Schwarzschild black hole case.
By comparing the amplitudes in Fig. <ref>, it is noted that the maximum amplitude in Fig. <ref>(a) has a narrower width than that in Fig. <ref>(b).
Now, considering the contributions of both the global monopole and the f(R) gravity, the graph in Fig. <ref> shows an upward shift of the curve greater than in the previous cases.
In Fig. <ref> we plot the contribution of partial absorption to the modes l = 0,1,2.
Note that for the modes l=1 and l=2 the partial absorption starts from zero and reaches a maximum value and then decreases with the increase of the energy Mω.
We can see that by increasing the value of l the corresponding maximum value of the partial absorption decreases.
Therefore, our results are in accord with those obtained by the authors in <cit.> and <cit.>, for instance.
Furthermore, by analyzing the curves of Fig. <ref> we observe that as we increase the values of ψ_0 the amplitude is increased and this increase is greater for the l = 0 mode.
§ CONCLUSIONS
In summary, in the present study we calculated the absorption and scattering cross sections of a black hole with a global monopole in f(R) gravity in the low-frequency limit and at small angles (θ≈ 0).
To determine the phase shift analytically we have implemented the approximation formula δ_l≈ (l-ℓ), and
we have found, adopting the partial wave approach, that the scattering cross section is still dominated in the small-angle limit by 1/θ^4. This dominant term is modified by the presence of the parameters η and ψ_0.
Initially the case of a black hole with a global monopole was analyzed and we showed that the result for the differential scattering cross section as well as the absorption cross section is increased due to the monopole effect. Moreover, considering the case of a black hole with a global monopole in f(R) gravity, we find that in the low-frequency limit the contribution to the dominant term of the differential scattering cross section/absorption cross section is also increased due to the effect of the f(R) gravity.
Finally, we solve numerically the radial equation in order to calculate the partial absorption cross section for arbitrary frequencies.
As a result we have shown that the absorption has its value increased as we increase the value of the parameter ψ_0.
We would like to thank CNPq and CAPES for partial financial support.
100
Frolov A.V. Frolov, K.R. Kristjansson, L. Thorlacius et al, Phys. Rev. D 72, 021501 (2005),
[hep-th/0504073];
Townsend1997 P. K. Townsend, Black holes: Lecture notes, (University of Cambridge, Cambridge, 1997) [gr-qc/9707012]; T. Padmanabhan, Phys. Rep. 406, 49 (2005), [gr-qc/0311036].
BezerradeMello:1996si
E. R. Bezerra de Mello and C. Furtado,
Phys. Rev. D 56, 1345 (1997).
doi:10.1103/PhysRevD.56.1345
Yu2002 H. Yu, Phys. Rev. D 65, 087502 (2002) .
Paulo2009 J. Paulo, M. Pitelli and P. Letelier, Phys. Rev. D 80, 104035 (2009) .
Chen2008 S. Chen and J. Jing, Mod. Phys. Lett. A 23, 359 (2008).
Rahaman2005 F. Rahaman, P. Ghosh, M. Kalam and K. Gayen, Mod. Phys. Lett. A 20, 1627 (2005).
Barriola1989 M. Barriola and A. Vilenkin, Phys. Rev. Lett. 63, 341 (1989).
Kibble1976 T. W. B. Kibble, J. Phys. A 9, 1387 (1976).
Vilenkin1988 A. Vilenkin, Phys. Rep. 121, 263 (1985).
Nojiri S. Nojiri, S. D. Odintsov, Phys. Rev. D 68, 123512 (2003) .
Carrol2004 S. M. Carrol, V. Duvvuri, M. Trodden, M. S. Turner, Phys. Rev. D 70, 043528 (2004).
Fay2007 S. Fay, R. Tavakol, S. Tsujikawa, Phys. Rev. D 75, 063509 (2007).
Bazeia:2007jj
D. Bazeia, B. Carneiro da Cunha, R. Menezes and A. Y. Petrov,
Phys. Lett. B 649, 445 (2007)
doi:10.1016/j.physletb.2007.04.040
[hep-th/0701106].
Carames2011 T. R. P. Carames, E. R. B. de Mello, M. E. X. Guimaraes, Int. J. Mod. Phys. Conf. Ser. 03, 446 (2011);
T. R. P. Carames, E. R. B. de Mello, M. E. X. Guimaraes, Mod. Phys. Lett. A 27, 1250177 (2012).
Graca:2015jea
J. P. Morais Graça, H. S. Vieira and V. B. Bezerra,
Gen. Rel. Grav. 48, no. 4, 38 (2016)
doi:10.1007/s10714-016-2024-7
[arXiv:1510.07184 [gr-qc]];
J. P. Morais Graca and V. B. Bezerra,
Mod. Phys. Lett. A 27, 1250178 (2012).
doi:10.1142/S0217732312501787;
V. B. Bezerra and N. R. Khusnutdinov,
Class. Quant. Grav. 19, 3127 (2002)
doi:10.1088/0264-9381/19/12/302
[gr-qc/0204056].
Man2013 J. Man, H. Cheng, Phys. Rev. D 87, 044002 (2013).
Lustosa:2015hwa
F. B. Lustosa, M. E. X. Guimarães, C. N. Ferreira and J. L. Neto,
arXiv:1510.08176 [hep-th].
Man2015 J. Man, H. Cheng, Phys. Rev. D 92, 024004 (2015).
Hai:2013ara
H. Hai, W. Yong-Jiu and C. Ju-Hua,
Chin. Phys. B 22, no. 7, 070401 (2013).
Futterman1988 J. A. Futterman, F. A. Handler, and R. A. Matzner,
Scattering from black holes (Cambridge University Press, England, 1988)
Matzner1977 R. A. Matzner and M. P. Ryan, Phys. Rev. D 16, 1636 (1977).
Westervelt1971 P. J. Westervelt, Phys. Rev. D 3, 2319 (1971).
Peters1976 P. C. Peters, Phys. Rev. D 13, 775 (1976).
Sanchez1976 N. G. Sánchez, J. Math. Phys. 17, 688 (1976);
N. G. Sánchez, Phys. Rev. D 16, 937 (1977);
N. G. Sánchez, Phys. Rev. D 18, 1030 (1978);
N. G. Sánchez, Phys. Rev. D 18, 1798 (1978).
Logi1977 W. K. de Logi and S. J. Kovács, Phys. Rev. D 16, 237 (1977).
Doram2002 C. J. L. Doran and A. N. Lasenby, Phys. Rev. D 66, 024006 (2002).
Dolan:2007ut
S. R. Dolan,
Phys. Rev. D 77, 044004 (2008)
doi:10.1103/PhysRevD.77.044004
[arXiv:0710.4252 [gr-qc]].
Crispino:2009ki
L. C. B. Crispino, S. R. Dolan and E. S. Oliveira,
Phys. Rev. D 79, 064022 (2009)
doi:10.1103/PhysRevD.79.064022
[arXiv:0904.0999 [gr-qc]].
Churilov1974 A. A. Starobinsky and S. M. Churilov, Sov. Phys.- JETP 38, 1 (1974).
Gibbons1975 G. W. Gibbons Commun. Math. Phys. 44, 245 (1975)
Page1976 D. N. Page, Phys. Rev. D 13, 198 (1976)
Unruh1976 W. G. Unruh, Phys. Rev. D 14, 3251 (1976)
Churilov1973 A. A. Starobinskii and S. M. Churilov, Zh. Eksp. Teor. Fiz. 65, 3 (1973).
Crispino:2007zz
L. C. B. Crispino, E. S. Oliveira and G. E. A. Matsas,
Phys. Rev. D 76, 107502 (2007).
Dolan:2009zza
S. R. Dolan, E. S. Oliveira and L. C. B. Crispino,
Phys. Rev. D 79, 064014 (2009)
Oliveira:2010zzb
E. S. Oliveira, S. R. Dolan and L. C. B. Crispino,
Phys. Rev. D 81, 124013 (2010).
DolanS. R. Dolan, E. S. Oliveira, L. C. B. Crispino, Phys. Lett. B 701, 485 (2011).
ABP2012-1 M. A. Anacleto, F. A. Brito and E. Passos, Phys. Rev. D 86, 125015 (2012)
[arXiv:1208.2615 [hep-th]]; Phys. Rev. D 87, 125015 (2013) [arXiv:1210.7739 [hep-th]].
Anacleto:2015mta
M. A. Anacleto, I. G. Salako, F. A. Brito and E. Passos,
Phys. Rev. D 92, no. 12, 125010 (2015)
doi:10.1103/PhysRevD.92.125010
[arXiv:1506.03440 [hep-th]];
M. A. Anacleto, F. A. Brito, A. Mohammadi and E. Passos,
arXiv:1606.09231 [hep-th].
Brito2015 M. A. Anacleto, F. A. Brito and E. Passos,
Phys. Lett. B 743, 184 (2015)
[arXiv:1408.4481 [hep-th]].
Jung2004 E. Jung and D. Park, Class. Quantum Grav. 21, 3717 (2004), arXiv:hep-th/0403251 [hep-th];
E. Jung, S. Kim, and D. Park, Phys. Lett. B 602, 105 (2004), arXiv:hep-th/0409145 [hep-th].
Doran2005 C. Doran, A. Lasenby, S. Dolan, and I. Hinder, Phys. Rev. D 71, 124020 (2005),
arXiv:gr-qc/0503019 [gr-qc].
Dolanprd2006 S. Dolan, C. Doran, and A. Lasenby, Phys. Rev. D 74, 064005 (2006), arXiv:gr-qc/0605031 [gr-qc].
Castineiras2007 J. Castineiras, L. C. Crispino, and D. P. M. Filho, Phys. Rev. D 75, 024012 (2007).
Benone:2014qaa
C. L. Benone, E. S. de Oliveira, S. R. Dolan and L. C. B. Crispino,
Phys. Rev. D 89, no. 10, 104053 (2014)
doi:10.1103/PhysRevD.89.104053
[arXiv:1404.0687 [gr-qc]].
Moura:2011rr
F. Moura,
JHEP 1309, 038 (2013)
doi:10.1007/JHEP09(2013)038
[arXiv:1105.5074 [hep-th]].
Marinho:2016ixt
C. I. S. Marinho and E. S. de Oliveira,
arXiv:1612.05604 [gr-qc].
Vilenkin1985 A. Vilenkin, Phys. Rep. 121, 263 (1985).
Chen:2016ftz
L. Chen and H. Cheng,
Gen. Rel. Grav. 50, no. 3, 26 (2018) arXiv:1607.07138 [hep-th].
Dolan:2012yc
S. R. Dolan and E. S. Oliveira,
Phys. Rev. D 87, no. 12, 124038 (2013)
[arXiv:1211.3751 [gr-qc]].
Yennie1954 D. R. Yennie, D. G. Ravenhall, and R. N. Wilson, Phys. Rev. 95, 500 (1954).
Cotaescu:2014jca
I. I. Cotaescu, C. Crucean and C. A. Sporea,
Eur. Phys. J. C 76, no. 3, 102 (2016)
doi:10.1140/epjc/s10052-016-3936-9
[arXiv:1409.7201 [gr-qc]].
Das:1996we
S. R. Das, G. W. Gibbons and S. D. Mathur,
Phys. Rev. Lett. 78, 417 (1997)
doi:10.1103/PhysRevLett.78.417
[hep-th/9609052].
LPing2015 Liao Ping, Zhang Ruan-Jing, Chen Ju-Hua and Wang Yong-Jiu, Chin. Phys. Lett. 32, No.5, 050401 (2015).
Efficient-robust routing for single commodity network flows
This project was supported by AFOSR grants (FA9550-15-1-0045 and FA9550-17-1-0435), grants from the National Center for Research Resources (P41-RR-013218) and the National Institute of Biomedical Imaging and Bioengineering (P41-EB-015902), National Science Foundation (ECCS-1509387), by the University of Padova Research Project CPDA 140897 and a postdoctoral fellowship through Memorial Sloan Kettering Cancer Center.
Yongxin Chen, Tryphon T. Georgiou, Fellow IEEE, Michele Pavon, and Allen Tannenbaum, Fellow IEEE
Y. Chen is with the Department of Medical Physics, Memorial Sloan Kettering Cancer Center, NY; email: chen2468@umn.edu
T. T. Georgiou is with the Department of Mechanical and Aerospace Engineering, University of California, Irvine, CA; email: tryphon@uci.edu
M. Pavon is with the Dipartimento di Matematica “Tullio Levi Civita",
Università di Padova, 35121 Padova, Italy; email: pavon@math.unipd.it
A. Tannenbaum is with the Departments of Computer Science and Applied Mathematics & Statistics, Stony Brook University, NY; email: allen.tannenbaum@stonybrook.edu
======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
We study single commodity network flows with suitable robustness and efficiency specs. An original use of a maximum entropy problem for distributions on the paths of the graph turns this problem into a steering problem for Markov chains with prescribed initial and final marginals.
From a computational standpoint, viewing scheduling this way
is especially attractive in light of the existence of an iterative algorithm to compute the solution.
The present paper builds on <cit.> by introducing an index of efficiency of a transportation plan and points, accordingly, to efficient-robust transport policies. In developing the theory, we establish two new invariance properties of the solution (called bridge)
– an iterated bridge invariance property and the invariance of the most probable paths. These properties, which were tangentially mentioned in our previous work,
are fully developed here. We also show that the distribution on paths of the optimal transport policy, which depends on a “temperature” parameter,
tends to the solution of the “most economical” but possibly less robust optimal mass transport problem as the temperature goes to zero. The relevance of all of these properties for transport over networks is illustrated in an example.
Index Terms— Transport over networks, maximum entropy problem, most probable path, temperature parameter
§ INTRODUCTION
Consider a company owning a factory F and a warehouse W. The company wants to ship a certain quantity of goods from F so that they reach W in at most N time units. The flow must occur on the available road network connecting the two facilities. On the one hand, it is desirable that the transport plan utilizes as many different routes as possible so that most of the goods arrive within the prescribed time horizon even in the presence of road congestion, roadwork, etc. On the other hand, it is also important that shorter paths are used to keep the vehicles fuel consumption within a budgetary constraint.
In this paper, continuing the research initiated in <cit.>, we provide a precise mathematical formulation of the above single commodity network flow problem. Normalizing the mass of goods to one, we formulate a maximum entropy problem for Markovian distributions on the paths of the network. The optimal feedback control suitably modifies a prior transition mechanism thereby achieving robustness while limiting the cost. This is accomplished through an appropriate choice of the prior transition involving the adjacency matrix of the graph. The optimal scheduling, while spreading the mass over all feasible paths, assigns maximum probability to all minimum cost paths.
Differently from the standard literature on controlled Markov chains, the optimal policy (Schrödinger bridge) is not computed through dynamic programming. The constraint on the final marginal (all the goods should be in the warehouse by day N) dictates a different approach. The solution is computed by iteratively solving a Schrödinger-Bernstein linear system with nonlinear coupling at the initial and final times. This algorithm, whose convergence was established in <cit.>, is related to recent work in entropy regularization <cit.> and equilibrium assignment in economics <cit.> as well as to classical work in statistics <cit.>.
Our straightforward approach also avoids altogether modelling cascading failures, which is a complex and controversial task <cit.>. It is also worthwhile remarking that maximum entropy problems <cit.>,
which constitute a powerful inference method,
find here an alternative use as a tool to produce a desired flow in a network by exploiting the properties of the prior transition mechanism.
Our intuitive notion of robustness of the routing policy should not be confused with other notions of robustness concerning networks which have been put forward and studied, see e.g. <cit.>.
In particular, in <cit.>, robustness has been defined through a fluctuation-dissipation relation involving the entropy rate.
This latter notion captures relaxation of a process back to equilibrium after a perturbation and has been used to study both financial and biological networks <cit.>. Our study, inspired by transportation and data networks, does not concern equilibrium or near equilibrium cases.
This paper features the following novel contributions: a) it introduces an explicit index of efficiency of a transportation plan; b) the choice of the adjacency matrix as prior transition mechanism, which was justified in <cit.> on an intuitive basis, is here further motivated through a specific optimization problem; c) we derive an iterated bridge invariance property; d) we establish the invariance of the most probable paths. These two invariance properties, which were only briefly mentioned in <cit.> in some special cases, are here fully investigated. Their relevance for transport over networks is also illustrated. e) we study the dependence of the optimal transport on a temperature parameter. The possibility of employing the solution for near-zero temperature as an approximation of the solution to Optimal Mass Transport (OMT) is also discussed and illustrated through examples.
The outline of the
paper is as follows. In Section <ref> we introduce
generalized maximum entropy problems. In Section <ref> we establish the iterated bridge property, and in Section <ref> the invariance of the most probable paths.
Efficiency of a transport policy is introduced in Section <ref>. In Section <ref>, we introduce robust transport with fixed average path length. Section <ref> deals with efficient-robust transportation. In Section <ref>, the dependence of the optimal transport on the temperature parameter is
investigated. The results are then illustrated through academic examples in Section <ref>.
§ GENERALIZED MAXIMUM ENTROPY PROBLEMS
We are given a directed, strongly connected (i.e., with a path in each direction between each pair of vertices), aperiodic graph G=(𝒳,ℰ) with vertex set 𝒳={1,2,…,n} and edge set ℰ⊆𝒳×𝒳. We let time vary in 𝒯={0,1,…,N}, and let ℱ𝒫_0^N⊆𝒳^N+1 denote the family of length N, feasible paths x=(x_0,…,x_N), namely paths such that x_ix_i+1∈ℰ for i=0,1,…,N-1.
We seek a probability distribution on ℱ𝒫_0^N with prescribed initial and final marginal probability distributions ν_0(·) and ν_N(·), respectively, and such that the resulting random evolution
is closest to a “prior” measure on ℱ𝒫_0^N in a suitable sense.
The prior law is induced by the Markovian evolution
μ_t+1(x_t+1)=∑_x_t∈𝒳μ_t(x_t) m_x_tx_t+1(t)
with nonnegative distributions μ_t(·) over 𝒳, t∈𝒯, and weights m_ij(t)≥ 0 for all indices i,j∈𝒳 and all times. Moreover, to respect the topology of the graph, m_ij(t)=0 for all t whenever ij∉ℰ. Often, but not always, the matrix
M(t)=[ m_ij(t)]_i,j=1^n
does not depend on t.
The rows of the transition matrix M(t) do not necessarily sum up to one, so that the “total transported mass” is not necessarily preserved. It occurs, for instance, when M simply encodes the topological structure of the network with m_ij being zero or one, depending on whether a certain link exists.
The evolution (<ref>) together with the measure μ_0(·), which we assume positive on 𝒳, i.e.,
μ_0(x)>0 for all x∈𝒳,
induces
a measure 𝔐 on ℱ𝒫_0^N as follows. It assigns to a path x=(x_0,x_1,…,x_N)∈ℱ𝒫_0^N the value
𝔐(x_0,x_1,…,x_N)=μ_0(x_0)m_x_0x_1⋯ m_x_N-1x_N,
and gives rise to a flow
of one-time marginals
μ_t(x_t) = ∑_x_ℓ, ℓ≠ t𝔐(x_0,x_1,…,x_N), t∈𝒯.
We denote by 𝒫(ν_0,ν_N) the family of probability distributions on ℱ𝒫_0^N having the prescribed marginals ν_0(·) and ν_N(·).
We seek a distribution in this set which is closest to the prior 𝔐 in relative entropy where, for P and Q measures on 𝒳^N+1, the relative entropy (divergence, Kullback-Leibler index) 𝔻(P‖Q) is
𝔻(P‖Q):= ∑_xP(x)logP(x)/Q(x) if supp(P)⊆ supp(Q), and +∞ otherwise.
Here, by definition, 0·log 0=0.
Naturally, while the value of 𝔻(P‖Q) may turn out negative due to a mismatch of scaling (in case Q=𝔐 is not a probability measure), the relative entropy is always jointly convex.
We consider the Schrödinger Bridge Problem (SBP):
Determine
𝔐^*[ν_0,ν_N]:= argmin{𝔻(P‖𝔐) | P∈𝒫(ν_0,ν_N)
}.
The following result is a slight generalization (to time inhomogeneous prior) of <cit.>.
Assume that the product M(N-1) M(N-2)⋯ M(1) M(0) has all entries positive.
Then there exist nonnegative functions φ(·) and φ̂(·) on [0,N]×𝒳 satisfying
φ(t,i) = ∑_jm_ij(t)φ(t+1,j),
φ̂(t+1,j) = ∑_im_ij(t)φ̂(t,i),
for t∈[0,N-1], along with the (nonlinear) boundary conditions
φ(0,x_0)φ̂(0,x_0) = ν_0(x_0)
φ(N,x_N)φ̂(N,x_N) = ν_N(x_N),
for x_0, x_N∈𝒳.
Moreover, the solution 𝔐^*[ν_0,ν_N] to Problem <ref> is unique and obtained by
𝔐^*(x_0,…,x_N)=ν_0(x_0)π_x_0x_1(0)⋯π_x_N-1x_N(N-1),
where the one-step transition probabilities
π_ij(t):=m_ij(t)φ(t+1,j)/φ(t,i)
are well defined.
The factors φ and φ̂ are unique up to multiplication of φ by a positive constant and division of φ̂ by the same constant.
Let φ(t) and φ̂(t) denote the column vectors with components φ(t,i) and φ̂(t,i), respectively, with i∈𝒳. In matrix form, (<ref>), (<ref>) and (<ref>) read
φ(t)=M(t)φ(t+1), φ̂(t+1)=M(t)^Tφ̂(t),
and
Π(t):=[π_ij(t)]= diag(φ(t))^-1M(t) diag(φ(t+1)).
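For illustration, the following Python sketch implements the iterative scheme alluded to in the abstract and Introduction: alternate the backward propagation of φ and the forward propagation of φ̂, enforcing the nonlinear boundary conditions in turn. The toy graph, marginals and fixed iteration count are illustrative assumptions.

import numpy as np

def schroedinger_bridge(M, nu0, nuN, N, iters=200):
    # M: nonnegative prior transition matrix (rows need not sum to one)
    phi_N = np.ones(len(nu0))              # initial guess for phi(N, .)
    for _ in range(iters):
        phi = [phi_N]
        for _ in range(N):                 # backward: phi(t) = M phi(t+1)
            phi.append(M @ phi[-1])
        phi = phi[::-1]
        phi_hat = [nu0 / phi[0]]           # boundary condition at t = 0
        for _ in range(N):                 # forward: phi_hat(t+1) = M^T phi_hat(t)
            phi_hat.append(M.T @ phi_hat[-1])
        phi_N = nuN / phi_hat[-1]          # boundary condition at t = N
    # at convergence the update is a fixed point; return the Pi(t)
    return [np.diag(1 / phi[t]) @ M @ np.diag(phi[t + 1]) for t in range(N)]

# toy usage on a 3-node graph whose M^2 is strictly positive
M = np.array([[1., 1., 0.], [0., 1., 1.], [1., 0., 1.]])
nu0, nuN = np.array([.9, .05, .05]), np.array([.05, .05, .9])
Pi = schroedinger_bridge(M, nu0, nuN, N=2)
print(Pi[0].sum(axis=1))           # rows of Pi(0) sum to one
print(nu0 @ Pi[0] @ Pi[1])         # final marginal reproduces nuN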
Historically, the SBP was posed in 1931 by Erwin Schrödinger for Brownian particles, motivated by a large-deviations problem for the empirical distribution <cit.>; see <cit.> for a survey.
The problem was considered in the context of Markov chains and studied in <cit.>, and some generalizations have been discussed in <cit.>. Important connections between SBP and OMT <cit.> have been discovered and developed in <cit.>.
§.§ Iterated Bridges
In this section we explain a rather interesting property of Schrödinger bridges, which is the following. If, after solving an SBP for a given set of marginals (ν_0,ν_N) and a Markovian prior 𝔐 to obtain 𝔐^*[ν_0,ν_N], we decide to update the data (ν_0,ν_N) to another set of marginals (π_0,π_N) then, whether we use 𝔐 or 𝔐^*[ν_0,ν_N] as prior for the SBP with the new marginals π_0 and π_N, we obtain precisely the same solution 𝔐^*[π_0,π_N]. The significance of this property will be discussed later on in the context of robust transportation.
Indeed, take 𝔐^*[ν_0,ν_N] as prior and consider the corresponding new Schrödinger system (in matrix form)
ψ(t)=Π(t)ψ(t+1), ψ̂(t+1)=Π(t)^Tψ̂(t),
with boundary conditions
ψ(0,x_0)ψ̂(0,x_0) = π_0(x_0),
ψ(N,x_N)ψ̂(N,x_N) = π_N(x_N).
Note that in the above Π(t)= diag(φ(t))^-1M(t) diag(φ(t+1)); therefore, it can be written as
diag(φ(t))ψ(t)=M(t) diag(φ(t+1))ψ(t+1),
diag(φ(t+1))^-1ψ̂(t+1)=M(t)^T diag(φ(t))^-1ψ̂(t).
The new transition matrix Q^* is given by
Q^*(t) = diag(ψ(t))^-1Π(t) diag(ψ(t+1))
= diag(ψ(t))^-1 diag(φ(t))^-1
× M(t) diag(φ(t+1)) diag(ψ(t+1)).
Let ψ_1(t)= diag(φ(t))ψ(t) and ψ̂_1(t)= diag(φ(t))^-1ψ̂(t), then
Q^*(t)= diag(ψ_1(t))^-1M(t) diag(ψ_1(t+1)).
By (<ref>), ψ_1 and ψ̂_1 are vectors with positive components satisfying
ψ_1(t)=M(t)ψ_1(t+1), ψ̂_1(t+1)=M(t)^Tψ̂_1(t).
Moreover, they satisfy the boundary conditions
ψ_1(0,x_0)ψ̂_1(0,x_0) = π_0(x_0)
ψ_1(N,x_N)ψ̂_1(N,x_N) = π_N(x_N).
Thus, (ψ_1,ψ̂_1) provide the solution to Problem <ref> when 𝔐 is taken as prior.
Alternatively, observe that the transition matrix Q^*(t) resulting from the two problems is the same, and so is the initial marginal. Hence, the solutions of the SBP with marginals π_0 and π_N and prior transitions Π(t) and M(t) are identical.
Thus, “the bridge over a bridge over a prior" is the same as the “bridge over the prior,” i.e., iterated bridges produce the same result.
It should be observed that this result for probability distributions is not surprising since the solution is in the same reciprocal class as the prior (namely, it has the same three-time transition probability), cf. <cit.>. It could then be described as the fact that only the reciprocal class of the prior matters; this can be seen from Schrödinger's original construction <cit.>, and also <cit.> for the case of Markov chains. This result, however, is more general since the prior is not necessarily a probability measure.
In information theoretic terms, the bridge (i.e., probability law on path spaces) corresponding to Q^* is the I-projection
in the sense of Csiszár <cit.> of the prior 𝔐 onto the set of measures that are consistent with the initial-final marginals. The above result, however, is not simply an "iterated information-projection" property, since 𝔐^*[ν_0,ν_N] is the I-projection of 𝔐 onto 𝒫(ν_0,ν_N), which does not contain 𝒫(π_0,π_N), being in fact disjoint from it.
§.§ Invariance of most probable paths
Building on the logarithmic transformation of Fleming, Holland, Mitter and others, the connection between SBP and stochastic control was developed from the early nineties on <cit.>. More recently Brockett studied steering of the Louiville equation <cit.>. In <cit.>, Dai Pra established an interesting path-space property of the Schrödinger bridge for diffusion processes, that the “most probable path" <cit.> of the prior and the solution are the same. Loosely speaking, a most probable path is similar to a mode for the path space measure P. More precisely, if both drift b(·,·) and diffusion coefficient σ(·,·) of the Markov diffusion process
dX_t=b(X_t,t)dt+ σ(X_t,t)dW_t
are smooth and bounded, with σ(x,t)σ(x,t)^T>η I, η>0, and {x(t) | 0≤ t≤ T} is a path of class C^2, then there exists an asymptotic estimate of the probability P of a small tube around x(t) of radius ϵ. It follows from this estimate that the most probable path is the minimizer in a deterministic calculus of variations problem where the Lagrangian is an Onsager-Machlup functional, see <cit.> for the full story[The Onsager-Machlup functional was introduced in <cit.> to develop a theory of fluctuations in equilibrium and nonequilibrium thermodynamics. ].
The concept of most probable path is, of course, much less delicate in our discrete setting. We define it for general positive measures on paths. Given a positive measure 𝔐 as in Section <ref> on the feasible paths of our graph G, we say that x=(x_0,…,x_N)∈ℱ𝒫_0^N is of maximal mass if for all other feasible paths y∈ℱ𝒫_0^N we have 𝔐(y)≤𝔐(x). Likewise we consider paths of maximal mass connecting particular nodes. It is apparent that paths of maximal mass always exist but are, in general, not unique. If 𝔐 is a probability measure, then the maximal mass paths - the most probable paths - are simply the modes of the distribution. We establish below that the maximal mass paths joining two given nodes under the solution of a Schrödinger bridge problem as in Section <ref> are the same as for the prior measure.
Consider marginals ν_0 and ν_N in Problem <ref>. Assume that ν_0(x)>0 on all nodes
x∈𝒳 and that the product M(N-1)· M(N-2)⋯ M(1)· M(0) of transition probability matrices of the prior has all positive elements (cf. the M's as in (<ref>)). Let x_0 and x_N be any two nodes. Then, under the solution 𝔐^*[ν_0,ν_N] of the SBP, the family of maximal mass paths joining x_0 and x_N in N steps is the same as under the prior measure 𝔐.
Suppose path y=(y_0=x_0,y_1,…, y_N-1,y_N=x_N) has maximal mass under the prior 𝔐. In view of (<ref>) and (<ref>) and assumption (<ref>), we have
𝔐^*[ν_0,ν_N](y) = ν_0(y_0)π_y_0y_1(0)⋯π_y_N-1y_N(N-1)
= ν_0(x_0)/μ_0(x_0)φ(N,x_N)/φ(0,x_0)𝔐(y_0,y_1,…,y_N).
Since the quantity
ν_0(x_0)/μ_0(x_0)φ(N,x_N)/φ(0,x_0)
is positive and does not depend on the particular path joining x_0 and x_N, the conclusion follows.
The calculation in the above proof actually establishes the following stronger result.
Let x_0 and x_N be any two nodes in 𝒳.
Then, under the assumptions of Proposition <ref>, the measures 𝔐 and 𝔐^*[ν_0,ν_N], restricted on the set of paths that begin at x_0 at time 0 and end at x_N at time N, are identical.
§ ROBUST TRANSPORT
In this section, we first discuss notions of efficiency of a transportation plan and then introduce entropy as a surrogate for robustness.
§.§ Efficiency of a transport plan
Inspired by the celebrated paper <cit.>, we introduce below a measure of efficiency of a transportation plan over a certain finite-time horizon and a given network.
For the case of undirected and connected graphs,
small-world networks <cit.> were identified as networks being highly clustered but with small characteristic path length L, where
L:=
1/n(n-1)∑_i≠ jd_ij
and d_ij is the shortest path length between vertices i and j. The inverse of the characteristic path length L^-1 is an index of efficiency of G. There are other such indexes, most noticeably the global efficiency E_ glob introduced in <cit.>. This is defined as E_ glob=E( G)/E( G_ id) where
E( G)=1/n(n-1)∑_i≠ j1/d_ij
and G_ id is the complete network with all possible edges in place. Thus, 0≤ E_ glob≤ 1.
However, as argued in <cit.>, it is 1/L which "measures the efficiency of a sequential system (i.e., only one packet of information goes along the network)". E_ glob, instead, measures the efficiency of a parallel system, namely one in which all nodes concurrently exchange packets of information. Since we are interested in the efficiency of a specific transportation plan, we define below efficiency by a suitable adaptation of the index L.
Consider a strongly connected, aperiodic, directed graph G=(𝒳,ℰ) as in Section <ref>. To each edge ij is now associated a length l_ij≥ 0. If ij∉ℰ, we set l_ij=+∞. The length may represent distance, cost of transport/communication/etc. Let 𝒯={0,1,…,N} be the time-indexing set. For a path x=(x_0,…,x_N)∈𝒳^N+1, we define the length of x to be
l(x)=∑_t=0^N-1l_x_tx_t+1.
We consider the situation where initially at time t=0 the mass is distributed on 𝒳 according to ν_0(x) and needs to be distributed according to ν_N(x) at the final time t=N. These masses are normalized to sum to one, so that they are probability distributions. A transportation plan P is a probability measure on the (feasible) paths of the network having the prescribed marginals ν_0 and ν_N at the initial and final time, respectively. A natural adaptation of the characteristic path length is to consider the average path length of the transportation plan P, which we define as
L(P)=∑_x∈𝒳^N+1l(x)P(x)
with the usual convention +∞× 0=0. This is entirely analogous to a thermodynamic quantity, the internal energy, which is defined as the expected value of the Hamiltonian observable in state P. Clearly, L(P) is finite if and only if the transport takes place on actual, existing links of G. Moreover, only the paths which are in the support of P enter in the computation of L(P). One of the goals of a transportation plan is of course to have small average path length since, for instance, cost might simply be proportional to length. Determining the probability measure that minimizes (<ref>) can be seen to be an OMT problem.
§.§ Problem formulation
Besides efficiency, another desirable property of a transport strategy is to ensure robustness with respect to links/nodes failures, the latter being due possibly to malicious attacks. We therefore seek a transport plan in which the mass spreads, as much as it is allowed by the network topology, before reconvening at time t=N in the sink nodes. We achieve this by selecting a transportation plan P that has a suitably high entropy S(P), where
S(P)=-∑_x∈𝒳^N+1P(x)ln P(x).
Thus, in order to attain a level of robustness while guaranteeing a relatively low average path length (cost), we formulate below a constrained optimization problem that weighs in both S(P) as well as L(P).
We begin by letting L̅ designate a suitable bound on the average path length (cost) that we are willing accept. Clearly, we need that
l_m:=min_x∈𝒳^N+1 l(x)≤L̅.
We will also assume that
L̅≤1/|ℱ𝒫_0^N|∑ _x∈ℱ𝒫_0^Nl(x).
The rationale behind the latter, i.e., requiring an upper bound as stated, will be explained in Proposition <ref> below.
Let 𝒫 denote the family of probability measures on 𝒳^N+1. The probability measure that maximizes the entropy S(P) subject to a path-length constraint L(P)=L̅ is the Boltzmann distribution
P_T^*(x)=Z(T)^-1exp[-l(x)/T], Z(T)=∑_xexp[-l(x)/T],
where the parameter (temperature) T depends on L̅. To see this,
consider the Lagrangian
L(P,λ):=S(P)+λ(L̅- L(P)),
and observe that the Boltzmann distribution (<ref>) satisfies the first-order optimality condition with T=1/λ.
Clearly, the Boltzmann distribution has support on the feasible paths ℱ𝒫_0^N.
Hence, we get a
version of Gibbs' variational principle: the Boltzmann distribution P_T^* minimizes the free energy functional
F(P,T):=L(P)-TS(P)
over 𝒫. An alternative way to establish the minimizing property of the Boltzmann's distribution is to observe that
F(P,T)=T𝔻(P‖P_T^*)-Tlog Z(T),
and therefore, minimizing the free energy over 𝒫 is equivalent to minimizing the relative entropy 𝔻(P‖P_T^*) over P∈𝒫,
which ensures that the minimum is unique. The following properties of P_T^* are noted, see e.g. <cit.>.
The following hold:
i) For T↗+∞, P_T^* tends to
the uniform distribution on all feasible paths.
ii) For T↘ 0, P_T^* tends to concentrate on the set of feasible paths having minimal length.
iii) Assuming that l(·) is not constant over ℱ𝒫_0^N then, for each value L̅ satisfying the bounds (<ref>), there exists a unique nonnegative value of T=λ^-1∈ [0, ∞] such that P_T^* maximizes S(P) subject to L(P)=L̅.
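Point iii) lends itself to a simple numerical treatment: given the lengths l(x) of the feasible paths, the temperature matching a prescribed L̅ is obtained by solving L(P_T^*)=L̅ with a scalar root-finder, as in the Python sketch below (the path lengths are an illustrative assumption).

import numpy as np
from scipy.optimize import brentq

lengths = np.array([2.0, 3.0, 3.0, 5.0])   # illustrative l(x) over the feasible paths

def avg_length(T):
    w = np.exp(-(lengths - lengths.min()) / T)   # shift for numerical stability
    return float(w @ lengths / w.sum())

L_bar = 3.0                                # must lie between the min and mean of l(x)
T_star = brentq(lambda T: avg_length(T) - L_bar, 1e-3, 1e3)
print(T_star, avg_length(T_star))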
We also
observe the Markovian nature of the measure P_T^*.
Indeed, recall that a positive measure on 𝒳^N+1 is Markovian if it can be expressed as in (<ref>).
Since
P_T^*(x_0,x_1,…,x_N)=Z(T)^-1∏_t=0^N-1exp[-l_x_tx_t+1/T],
which is exactly in the form (<ref>), we conclude that P_T^* is (time-homogeneous) Markovian with uniform initial measure μ(x_0)≡ Z(T)^-1 and time-invariant transition matrix given by
M_T=[exp(-l_ij/T)]_i,j=1^n.
Observe however that, in general, M_T is not stochastic (rows do not sum to one). Moreover, observe that, after suitable normalization, M_T represents the transition matrix of a chain where probabilities of transition between nodes are inversely proportional to the length of the links.
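In code, M_T is immediate to construct from a matrix of lengths, with l_ij=+∞ encoding the missing links; a small illustrative Python example follows.

import numpy as np

inf = np.inf
L = np.array([[inf, 1.0, 3.0],
              [inf, inf, 1.0],
              [inf, inf, 0.0]])     # zero-length self-loop at the sink node (assumption)

def M_T(L, T):
    return np.exp(-L / T)           # exp(-inf) = 0 encodes a missing link

print(M_T(L, 1.0))
print(M_T(L, 0.05))                 # low temperature: longer links heavily suppressed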
Consider now ν_0 and ν_N distributions on 𝒳. These are the “starting” and “ending” concentrations of resources for which we seek a transportation plan. We denote by 𝒫(ν_0,ν_N) the family of probability distributions on paths x∈𝒳^N+1 having ν_0 and ν_N as initial and final marginals, respectively, and we
consider the problem to maximize the entropy subject to marginal and length constraints:
Maximize S(P) subject to P∈𝒫(ν_0,ν_N) and
L(P)=L̅.
Note that the solution to Problem <ref> depends on L̅ as well as the two marginals ν_0, ν_N and that when L̅ is too close to l_m, the problem may be infeasible.
Once again, bringing in the Lagrangian (<ref>), which now needs to be minimized over 𝒫(ν_0,ν_N), we see that Problem <ref> is equivalent to solving the following Schrödinger Bridge problem for a suitable value of the parameter T.
minimize {𝔻(P‖P_T^*) | P∈𝒫(ν_0,ν_N)}.
Thus, employing path space entropy as a measure of robustness, the solution to Problem <ref>, denoted by 𝔐^*_T(ν_0,ν_N) and constructed in accordance with Theorem <ref>, minimizes a suitable free energy functional with the temperature parameter
specifying the tradeoff between
efficiency and robustness.
Thus, Problem <ref> can be viewed as an SBP as in Section <ref> where the “prior" measure P_T^* is Markovian.
§ STRUCTURE OF ROBUST TRANSPORT
We now address in detail
Problem <ref>, namely, to identify a probability distribution P on ℱ𝒫_0^N that minimizes 𝔻(·‖P_T^*) over 𝒫(ν_0,ν_N), where P^*_T is the Boltzmann distribution (<ref>) – the minimizing law being denoted by 𝔐^*_T[ν_0,ν_N] as before.
We show below that the two invariance properties discussed in the previous two sections can be used to determine
an optimal
transport policy. We also show that 𝔐^*_T[ν_0,ν_N] inherits from the Boltzmann distribution P^*_T the properties dictated by Proposition <ref>.
Initially, for simplicity, we consider the situation where at time t=0 the whole mass is concentrated on node 1 (source) and at time t=N it is concentrated on node n (sink), i.e., ν_0(x)=δ_1(x) and ν_N(x)=δ_n(x). We want to allow (part of) the mass to reach the end-point "sink" node, if this is possible, in less than N steps and then remain there until t=N. In order to ensure that this is possible, we assume that there exists a self-loop at node n, i.e., M_Tnn>0. Clearly, 𝔐^*_T(δ_1,δ_n)(·)=P^*_T[·| Y_0=1, Y_N=n]. The Schrödinger bridge theory provides transition probabilities so that,
for a path y=(y_0,y_1,…,y_N),
𝔐^*_T(δ_1,δ_n)(y) =δ_1(y_0)∏_t=0^N-1exp(-l_y_ty_t+1/T)φ_T(t+1,y_t+1)/φ_T(t,y_t)
=δ_1(y_0)φ_T(N,y_N)/φ_T(0,y_0)[exp(-l(y)/T)],
cf. (<ref>) and (<ref>).
Here l(y)=∑_t=0^N-1l_y_ty_t+1 is the length of path y and φ_T satisfies together with φ̂_T the Schrödinger system (<ref>) with m_ij(t)=exp(-l_ij/T) and ν_0(x)=δ_1(x), ν_N(x)=δ_n(x).
In <cit.>, Problem <ref> was first studied with a prior measure 𝔐_l having certain special properties. To introduce this particular measure, we first recall (part of) a fundamental result from linear algebra <cit.>.
Let A=(a_ij) be an n× n matrix with nonnegative entries. Suppose there exists N such that A^N has only positive entries,
and let λ_A be its spectral radius. Then
i) λ_A>0 is an eigenvalue of A;
ii) λ_A is a simple eigenvalue;
iii) there exists an eigenvector v corresponding to λ_A with strictly positive entries.
Consider now the weighted adjacency matrix B=M_T in (<ref>)
(where we dropped the subscript T as it will be fixed throughout this section). Assume that B^N has all positive elements so that we can apply the Perron-Frobenius theorem. Let u and v be the left and right eigenvectors with positive components of the matrix B corresponding to the spectral radius λ_B. We have
B^Tu=λ_Bu, Bv=λ_Bv.
We assume throughout that u and v are chosen so that ∑_iu_iv_i=1.
Then, for y_0=i and y_t=j, define
𝔐_l(i,y_1,…,y_t-1,j):=λ_B^-tu_iv_je^-∑_k=0^t-1l_y_ky_k+1.
The corresponding transition matrix is
R_l=λ_B^-1 diag(v)^-1B diag(v).
It admits the invariant measure
μ_l(i)=u_i v_i.
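The construction of R_l and μ_l amounts to a direct eigencomputation, sketched below in Python for an illustrative weight matrix B.

import numpy as np

B = np.array([[0.0, 1.0, 0.5],
              [0.5, 0.0, 1.0],
              [1.0, 0.5, 0.0]])     # illustrative weights; B^2 > 0 entrywise

w, V = np.linalg.eig(B)
i = int(np.argmax(w.real))                       # Perron root: real, simple, positive
lam = w[i].real
v = np.abs(V[:, i].real)                         # right eigenvector, positive components
wl, U = np.linalg.eig(B.T)
u = np.abs(U[:, int(np.argmax(wl.real))].real)   # left eigenvector of B
u = u / (u @ v)                                  # normalization sum_i u_i v_i = 1

R = (1 / lam) * np.diag(1 / v) @ B @ np.diag(v)
mu = u * v
print(R.sum(axis=1))                             # rows sum to one
print(mu @ R, mu)                                # mu is invariant: mu R = mu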
Note that 𝔐_l and the Boltzmann distribution P_T^* have the same transition matrix but different initial distributions. In <cit.>, to which we refer for motivation and more details, the following problem was studied.
minimize {𝔻(P‖𝔐_l)| P ∈𝒫(ν_0,ν_N)}.
Under the assumption that B^N has all positive entries, this Schrödinger bridge problem has a unique solution 𝔐^*_l. In <cit.>, it was also shown that 𝔐_l is itself the solution of a Schrödinger bridge problem with equal marginals and the Boltzmann distribution (<ref>) as prior. Thus, by the iterated bridge property of Section <ref>, 𝔐^*_l coincides with the solution of Problem <ref> for any choice of the initial-final marginals ν_0 and ν_N.
We recall the following rather surprising result <cit.> which includes the invariance of the most probable paths in Problem <ref> (Proposition <ref>).
𝔐^*_l gives equal probability to paths y∈𝒳^N+1 of equal length between any two given nodes. In particular, it assigns maximum and
equal probability to minimum length paths.
This result is relevant when the solution of Problem <ref> for low temperature is used as an approximation to OMT, see Remark <ref> in the next section.
Finally, an important special case occurs when l_ij=0 for existing links and +∞ for non-existing ones. Then the matrix B reduces to the unweighted adjacency matrix A, and the measure 𝔐_l reduces to the so-called Ruelle-Bowen random walk 𝔐_RB. The only concern of the transport policy is then maximizing the path family entropy to achieve robustness; see <cit.> for details.
§ DEPENDENCE OF ROBUST TRANSPORT ON T
Below we study how the solution 𝔐^*_T[δ_x_0,δ_x_N] to Problem <ref> varies with the temperature parameter T. Here, x_0, x_N are specified nodes where mass is concentrated at the start and end times, and δ_x^'(x)=1 when x=x^' and zero otherwise. It should be noted that similar results hold for general marginal distributions as well, which are not necessarily Dirac.
Consider the solution 𝔐^*_T[δ_x_0,δ_x_N]=:𝔐^*_T to Problem <ref> with ν_0(x)=δ_x_0(x) and ν_N(x)=δ_x_N(x). Let l_m(x_0,x_N)=min_y∈𝒳^N+1(x_0,x_N) l(y), i.e., the minimum length of N-step paths originating in x_0 and terminating in x_N. Then
i) For T↘ 0, 𝔐^*_T tends to concentrate itself on the set of feasible, minimum length paths joining x_0 and x_N in N steps. Namely, if y=(y_0=x_0,y_1,…,y_N-1,y_N=x_N) is such that l(y)>l_m(x_0,x_N), then 𝔐^*_T(y)↘ 0 as T↘ 0.
ii) For T↗+∞, 𝔐^*_T tends to the uniform distribution on all feasible paths joining x_0 and x_N in N steps.
iii) Suppose 𝒳^N+1(x_0,x_N) is not a singleton and that l(·) is not constant over it. Then, for each value L̅ satisfying the bounds
l_m(x_0,x_N)≤L̅≤1/|𝒳^N+1(x_0,x_N)|∑ _y∈𝒳^N+1(x_0,x_N)l(y)
there exists a unique value of T∈ [0,+∞] such that 𝔐^*_T satisfies the constraint
L(𝔐^*_T)=L̅ and therefore solves Problem <ref>.
Observe first that, since 𝔐^*_T is a probability measure on 𝒳^N+1, it must satisfy by (<ref>)
1=∑_y∈𝒳^N+1𝔐^*_T(y) =∑_y∈𝒳^N+1δ_x_0(y_0)φ_T(N,y_N)/φ_T(0,y_0)[exp(-l(y)/T)]
=
∑_y∈𝒳^N+1(x_0,x_N)φ_T(N,x_N)/φ_T(0,x_0)[exp(-l(y)/T)],
where we have used the fact that the initial and final marginals of 𝔐^*_T are δ_x_0 and δ_x_N, respectively. It follows that
φ_T(0,x_0)/φ_T(N,x_N) =∑_y∈𝒳^N+1(x_0,x_N)[exp(-l(y)/T)],
where again 𝒳^N+1(x_0,x_N) denotes the family of paths joining x_0 and x_N in N time periods.
Proof of i): Let y=(y_0=x_0,y_1,…,y_N-1,y_N=x_N) be such that l(y)>l_m(x_0,x_N). Then
ℳ^*_T(y)=φ_T(N,x_N)/φ_T(0,x_0)exp(-l(y)/T).
By (<ref>), we have
φ_T(0,x_0)/φ_T(N,x_N)≥exp(-l_m(x_0,x_N)/T).
Hence,
ℳ^*_T(y) =φ_T(N,x_N)/φ_T(0,x_0)e^-l(y)/T≤ e^-(l(y)-l_m(x_0,x_N))/T.
Since l(y)-l_m(x_0,x_N)>0, the right-hand side tends to zero as T↘ 0.
Proof of ii): For T↗ +∞, exp(-l(y)/T) tends to 1 for all paths y∈𝒳^N+1(x_0,x_N). Since φ_T(N,x_N)/φ_T(0,x_0) does not depend on the specific path in 𝒳^N+1(x_0,x_N) (it is just a normalization, like a partition function), we conclude that as T tends to infinity, ℳ^*_T tends to the uniform distribution on 𝒳^N+1(x_0,x_N).
Proof of iii): Note that Problem <ref> is feasible when l_m(x_0,x_N)≤L̅
holds. By standard Lagrangian duality theory, there exists a Lagrange multiplier λ∈ [0, ∞] such that the maximizer of the corresponding Lagrangian (<ref>) over 𝒫(ν_0,ν_N) is the solution of Problem <ref>[Actually, using (<ref>), it is easy to see that L(ℳ^*_T)= E_ℳ^*_T[l(Y)] is a strictly increasing function of T. Indeed,
∂ E_ℳ^*_T[l(Y)]/∂ T=1/T^2 var_ℳ^*_T[l(Y)],
where l(Y)=∑_t=0^N-1l_Y_tY_t+1 and Y=(Y_0, Y_1,…,Y_N) is the Markov chain; the variance is strictly positive since l(·) is not constant on 𝒳^N+1(x_0,x_N). In view of points i) and ii), we conclude that E_ℳ^*_T[l(Y)] bijectively maps [0,+∞] onto
[l_m(x_0,x_N),1/|𝒳^N+1(x_0,x_N)|∑ _y∈𝒳^N+1(x_0,x_N)l(y)].
].
On the other hand, maximizing (<ref>) over 𝒫(ν_0,ν_N) is equivalent to solving Problem <ref> with T=1/λ. This completes the proof.
Let us interpret l_ij as the cost of transporting a unit mass over the link ij. Then L(P) is the expected cost corresponding to the transport plan P. For T=0, the free energy functional reduces to L(P) and our problem amounts to a discrete OMT problem <cit.>. In this case, one seeks minimum cost paths, a combinatorial problem which can also be formulated as a linear programming problem <cit.>.
Precisely as in the diffusion case <cit.>, we see that when the “heat bath" temperature is close to 0, the solution of the Schrödinger bridge problem is close to the solution of the discrete OMT problem (claim i) of Theorem <ref>). Since an efficient iterative algorithm is available for the former <cit.>, the SBP provides a valuable computational approach to solving OMT problems in this discrete setting as well. This is illustrated in the next section through an academic example. It should also be observed that the measure ℳ^*_T[δ_x_0,δ_x_N] is just a “Boltzmann measure" on the subset of 𝒳^N+1 of paths originating in x_0 and terminating in x_N.
Thus the above proof is analogous to the classical one for P^*_T.
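For completeness, we sketch how the bridge can be computed in practice. The following is a minimal NumPy implementation in the spirit of the iterative (Sinkhorn-type) scheme alluded to above; it is not the exact algorithm of the cited references, and the function and variable names are ours. For Dirac marginals the endpoint iteration converges immediately, and the construction reduces to the formulas used in the proof above.

```python
import numpy as np

def solve_bridge(A, L, T, nu0, nuN, N, iters=200):
    """Discrete Schroedinger bridge with prior edge weights
    B_T[i,j] = A[i,j] * exp(-l_ij / T), marginals nu0 at t=0 and nuN
    at t=N (float probability vectors).  Returns the mass distribution
    at each time step t = 0, ..., N."""
    B = (A > 0) * np.exp(-np.where(A > 0, L, 0.0) / T)
    G = np.linalg.matrix_power(B, N)     # total weight of N-step paths i -> j
    a, b = np.ones_like(nu0), np.ones_like(nuN)
    for _ in range(iters):               # Sinkhorn-type fixed point on endpoints
        Gb = G @ b
        a = np.where(nu0 > 0, nu0 / np.where(Gb > 0, Gb, 1.0), 0.0)
        Ga = G.T @ a
        b = np.where(nuN > 0, nuN / np.where(Ga > 0, Ga, 1.0), 0.0)
    # phi(t,x): weight of (N-t)-step continuations from x, tilted by b
    phi = [None] * (N + 1)
    phi[N] = b
    for t in range(N - 1, -1, -1):
        phi[t] = B @ phi[t + 1]
    # Mass evolution: p_{t+1} = p_t P_t with P_t[i,j] = B[i,j]*phi(t+1,j)/phi(t,i)
    p, dist = nu0.copy(), [nu0.copy()]
    for t in range(N):
        with np.errstate(divide='ignore', invalid='ignore'):
            P = B * phi[t + 1][None, :] / phi[t][:, None]
        p = p @ np.nan_to_num(P)         # unreachable nodes carry no mass
        dist.append(p)
    return np.array(dist)
# e.g. solve_bridge(A, L, T=1.0, nu0=delta_x0, nuN=delta_xN, N=4)
# reproduces the mass-evolution tables of the next section.
```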
§ EXAMPLES
Consider the graph in Figure <ref>.
We seek to transport a unit mass from node 1 to node 9 in N=3 and 4 steps. We first consider the case where the costs of all the edges are equal to 1. Here we add a zero cost self-loop at 9, i.e., l_99=0. The shortest path from node 1 to 9 is of length 3 and there are three such paths, namely
1-2-7-9, 1-3-8-9 and 1-4-8-9. If we want to transport the mass in a minimum number of steps, we may end up using only one of these three paths. To achieve robustness, we instead apply the Schrödinger bridge framework. Since all three feasible paths have equal length, we get a transport plan that uses all of them with equal probability, regardless of the choice of temperature T. The evolution of the mass distribution is given by
[
1 0 0 0 0 0 0 0 0
0 1/3 1/3 1/3 0 0 0 0 0
0 0 0 0 0 0 1/3 2/3 0
0 0 0 0 0 0 0 0 1
],
where the four rows of the matrix show the mass distribution at time steps t=0, 1, 2, 3 respectively. As we can see, the mass spreads out first and then regroups at node 9. When we allow for more steps, N=4, the mass spreads even more before reassembling at node 9, as shown below for T=1:
[
1 0 0 0 0 0 0 0 0
0 0.4705 0.3059 0.2236 0 0 0 0 0
0 0 0.0823 0.0823 0.1645 0 0.2236 0.4473 0
0 0 0 0 0 0.0823 0.0823 0.1645 0.6709
0 0 0 0 0 0 0 0 1
].
There are 7 feasible 4-step paths, which are 1-2-7-9-9, 1-3-8-9-9, 1-4-8-9-9, 1-2-5-6-9, 1-2-5-7-9, 1-3-4-8-9 and 1-2-3-8-9. The amounts of mass traveling along these paths are
0.2236, 0.2236, 0.2236, 0.0823, 0.0823, 0.0823, 0.0823.
The first three are the most probable paths. This is consistent with Proposition <ref> since they are the paths with minimum length. If we change the temperature T, the flow changes. The set of most probable paths, however, remains invariant. In particular, when T=0.1, the flow concentrates on the most probable set (effecting OMT-like transport), as shown below
[
1 0 0 0 0 0 0 0 0
0 0.3334 0.3333 0.3333 0 0 0 0 0
0 0 0 0 0 0 0.3334 0.6666 0
0 0 0 0 0 0 0 0 1
0 0 0 0 0 0 0 0 1
].
Now we change the graph by setting the length of edge (7, 9) to 2, that is, l_79=2. When N=3 steps are allowed to transport a unit mass from node 1 to node 9, the evolution of the mass distribution for the optimal transport plan, for T=1, is given by
[
1 0 0 0 0 0 0 0 0
0 0.1554 0.4223 0.4223 0 0 0 0 0
0 0 0 0 0 0 0.1554 0.8446 0
0 0 0 0 0 0 0 0 1
].
The mass travels through the paths 1-2-7-9, 1-3-8-9 and 1-4-8-9 but, unlike in the first case, the transport plan does not assign equal probability to these three paths. Since the length of the edge (7, 9) is larger, the probability that the mass takes this path becomes smaller. The plan does, however, assign equal probability to the two minimum length paths 1-3-8-9 and 1-4-8-9, that is, these are the most probable paths. The evolutions of mass for T=0.1 and T=100 are
[
1 0 0 0 0 0 0 0 0
0 0 1/2 1/2 0 0 0 0 0
0 0 0 0 0 0 0 1 0
0 0 0 0 0 0 0 0 1
]
and
[
1 0 0 0 0 0 0 0 0
0 0.3311 0.3344 0.3344 0 0 0 0 0
0 0 0 0 0 0 0.3311 0.6689 0
0 0 0 0 0 0 0 0 1
]
respectively. We observe that, when T=100, the flow assigns almost equal mass to the three available paths, while, when T=0.1, the flow concentrates on the most probable paths 1-3-8-9 and 1-4-8-9. This is clearly a consequence of Theorem <ref>.
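The tables above can be checked by brute force: with Dirac marginals, the bridge simply weights each feasible N-step path y by exp(-l(y)/T), up to normalization. A short sketch follows; note that the edge set of Figure <ref> is inferred here from the paths listed in the text and should be treated as an assumption.

```python
import numpy as np

# Edge lengths: (7,9) has length 2 as in the modified example, all other
# edges length 1, plus the zero cost self-loop at node 9.  Set
# edges[(7, 9)] = 1 to recover the equal-cost case.
edges = {(1, 2): 1, (1, 3): 1, (1, 4): 1, (2, 3): 1, (2, 5): 1, (2, 7): 1,
         (3, 4): 1, (3, 8): 1, (4, 8): 1, (5, 6): 1, (5, 7): 1, (6, 9): 1,
         (7, 9): 2, (8, 9): 1, (9, 9): 0}

def paths(x, goal, steps, acc=()):
    """All feasible paths from x to goal in exactly `steps` steps."""
    if steps == 0:
        return [acc + (x,)] if x == goal else []
    return [p for (i, j) in edges if i == x
            for p in paths(j, goal, steps - 1, acc + (x,))]

def mass_evolution(N, T):
    """Node occupation at t = 0..N under M*_T for a unit mass moved
    from node 1 to node 9 in N steps."""
    ps = paths(1, 9, N)
    w = np.array([np.exp(-sum(edges[e] for e in zip(p, p[1:])) / T)
                  for p in ps])
    w /= w.sum()                         # path probabilities of M*_T
    dist = np.zeros((N + 1, 9))
    for p, wp in zip(ps, w):
        for t, node in enumerate(p):
            dist[t, node - 1] += wp
    return dist

print(np.round(mass_evolution(N=3, T=1.0), 4))  # reproduces the T=1 table
```

Running this with T=0.1 and T=100 reproduces the two matrices above, and with edges[(7, 9)] = 1 and N=4 the path probabilities 0.2236 and 0.0823 of the first example.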
§ CONCLUSIONS AND OUTLOOK
In the present paper, we considered transportation over strongly connected, directed graphs. The development built on our earlier work <cit.>. More specifically, we introduced the average path length (cost), e.g. (<ref>), as a measure of efficiency of a given transportation plan, and the entropy (<ref>) as a measure of robustness. This allowed us to
explore efficient-robust transport plans by solving the corresponding optimization problems. Important insights gained in the present work include the results on certain invariances of Schrödinger bridges, namely the “iterated bridge" invariance property and the invariance of the “most probable path". We explained their relevance for efficient-robust transport over networks. We also considered the dependence of the optimal transportation schedule on the temperature parameter, following similar ideas from statistical physics. In this, we highlighted the connection between the Schrödinger bridge problem and OMT. Specifically, the solution of the Schrödinger bridge problem at near-zero temperature is an approximation to the solution of the corresponding OMT problem. The relevance of the conceptual framework developed here for assessing robustness of real world networks (e.g., communication networks <cit.>, biological <cit.>, and financial <cit.>) will be the subject of future work.
§ REFERENCES
Albertetal2000 R. Albert, H. Jeong and A.-L. Barabási, Error and attack tolerance of complex networks, Nature, 406, pp. 378–382, 2000.
Alon U. Alon, An Introduction to Systems Biology: Design Principles of Biological Circuits, Chapman and Hall, 2006.
AGS L. Ambrosio, N. Gigli and G. Savaré, Gradient Flows in Metric Spaces and in the Space of Probability Measures, Lectures in Mathematics ETH Zürich, Birkhäuser Verlag, Basel, 2nd ed. 2008.
bazaraa2011linear M. S. Bazaraa, J. J. Jarvis, and H. D. Sherali, Linear Programming and Network Flows, John Wiley & Sons, 2011.
bara2014robustness A.-L. Barabási, Network Science, 2014.
olver2010robust N. K. Olver, Robust Network Design, Ph.D. Dissertation, Department of Mathematics and Statistics, McGill University, 2010.
arnoldetal1994 L. Arnold, V. M. Gundlach and L. Demetrius, Evolutionary formalism for products of positive random matrices, The Annals of Applied Probability, 4, 3, pp. 859–901, 1994.
blaquiere1992controllability
A. Blaquière, Controllability of a Fokker-Planck equation, the
Schrödinger system, and a related stochastic optimal control (revised
version), Dynamics and Control, vol. 2, no. 3, pp. 235–253, 1992.
brockett2012notes R. Brockett, Notes on the control of the Liouville equation, in Control of Partial Differential Equations, Springer, 2012, pp. 101–129.
CGP1 Y. Chen, T.T. Georgiou and M. Pavon, On the relation between optimal transport and Schrödinger bridges: A stochastic control viewpoint, J. Optim. Theory and Applic., 169 (2), 671-691, 2016.
CGP2 Y. Chen, T.T. Georgiou and M. Pavon, Optimal transport over a linear dynamical system, IEEE Trans. Aut. Contr., 62 (5), 1558-2523, 2017.
CGP3 Y. Chen, T.T. Georgiou and M. Pavon, Entropic and displacement interpolation:
a computational approach using the Hilbert metric, SIAM Journal on Applied Mathematics, 76 (6), 2375-2396, 2016.
chen2016networks
Y. Chen, T. T. Georgiou, M. Pavon, and A. Tannenbaum, “Robust transport over
networks,” IEEE Trans. Aut. Contr., 62, n.9, 4675-4682, 2017.
COVER_THOMAS
T. M. Cover and J. A. Thomas.
Elements of Information Theory.
Wiley, New York, 1991.
csiszar I. Csiszár, Sanov property, generalized I-projection and a conditional limit theorem, The Annals of Probability, 768-793, 1984.
Cuturi M.Cuturi, “Sinkhorn Distances: Lightspeed Computation of
Optimal Transport", Advances in Neural Information Processing Systems,
2292-2300, 2013.
Dai91
P. Dai Pra, A stochastic control approach to reciprocal diffusion
processes, Applied mathematics and Optimization, vol. 23, no. 1, pp.
313–329, 1991.
DaiPav90 P. Dai Pra and M. Pavon, On the Markov processes of Schrödinger, the Feynman–Kac formula and stochastic control, in Realization and Modelling in System Theory, Springer, 1990, pp. 497–504.
demetrius2005 L. Demetrius and T. Manke, Robustness and network evolution: an entropic principle, Physica A: Statistical Mechanics and its Applications, 346, 3, pp. 682–696, 2005.
DB D. Durr and A. Bach, The Onsager-Machlup functional as Lagrangian for the most probable path of a diffusion process, Comm. Math. Phys. 60, 1978, 153-170.
FilHonStr08
R. Filliger, M.-O. Hongler, and L. Streit, Connection between an exactly
solvable stochastic optimal control problem and a nonlinear
reaction-diffusion equation, JOTA, 137, no. 3, 497–505, 2008.
georgiou2015positive
T. T. Georgiou and M. Pavon, “Positive contraction mappings for classical and
quantum Schrödinger systems,” J. Math. Physics,
vol. 56, no. 3, p. 033301, 2015.
GKW A. Galichon, S. Kominers and S. Weber, The Nonlinear Bernstein-Schrödinger Equation in Economics, Proceedings of the Second Conference “Geometric Science of Information", F. Nielsen and F. Barbaresco, eds., Springer Lecture Notes in Computer Sciences 9389, 51-59, 2015.
horn2012matrix
R. Horn and C. Johnson, Matrix analysis, Cambridge Univ. Press, 2012.
IW N. Ikeda and S. Watanabe, Stochastic differential equations and diffusion processes, 2nd Ed., North-Holland, 1989.
IK C. T. Ireland and S. Kullback, “Contingency tables with given marginals", Biometrika, 55, 179-188, 1968.
Jam74 B. Jamison, Reciprocal processes, Probability Theory and Related Fields 30.1 (1974), 65-86.
latora2001efficient
V. Latora and M. Marchiori, “Efficient behavior of small-world networks,”
Physical Review Letters, vol. 87, no. 19, p. 198701, 2001.
leo C. Léonard, From the Schrödinger problem to the Monge-Kantorovich problem, J. Funct. Anal., 2012, 262, 1879-1920.
leo2 C. Léonard, A survey of the Schroedinger problem and some of its connections with optimal transport, Discrete Contin. Dyn. Syst. A, 2014, 34 (4): 1533-1574.
levy1990modeling
B. C. Levy, R. Frezza, and A. J. Krener, “Modeling and estimation of
discrete-time gaussian reciprocal processes,” Automatic Control, IEEE
Transactions on, vol. 35, no. 9, pp. 1013–1023, 1990.
Mik T. Mikami, Monge's problem with a quadratic cost by the zero-noise limit of h-path processes, Probab. Theory Relat. Fields, 129, (2004), 245-260.
MT T. Mikami and M. Thieullen, Optimal Transportation Problem by Stochastic Optimal Control, SIAM Journal of Control and Optimization, 47, N. 3, 1127-1139 (2008).
OM L. Onsager, S. Machlup, “Fluctuations and irreversible proceses, I-II", Phys. Rev. , 91 (1953) pp. 1505-1512; 1512-1515.
MicheleNotes M. Pavon, Lecture Notes, University of Padova, <http://www.math.unipd.it/ pavon/Teaching_files/notes.pdf>
pavon2010discrete
M. Pavon and F. Ticozzi, “Discrete-time classical and quantum markovian
evolutions: Maximum entropy problems on path space,” J.
Math. Physics, vol. 51, no. 4, p. 042104, 2010.
PavWak91 M. Pavon and A. Wakolbinger, On free energy, stochastic control, and Schrödinger processes, in Modeling, Estimation and Control of Systems with Uncertainty, Springer, 1991, pp. 334–348.
rachev1998mass S. T. Rachev and L. Rüschendorf, Mass Transportation Problems, Volume I: Theory, Springer Science & Business Media, 1998.
ruelle2004thermodynamic D. Ruelle, Thermodynamic Formalism: The Mathematical Structure of Equilibrium Statistical Mechanics, Cambridge University Press, 2004.
Sandhu R. Sandhu, T. Georgiou, E. Reznik, L. Zhu, I. Kolesov, Y. Senbabaoglu, and A. Tannenbaum, “Graph curvature for differentiating cancer networks,” Scientific Reports (Nature), vol. 5, 12323; doi: 10.1038/srep12323 (2015).
Sandhu1 R. Sandhu, T. Georgiou, and A. Tannenbaum, “Ricci curvature: An economic indicator for market
fragility and systemic risk,” Science Advances, vol. 2, doi: 10.1126/sciadv.1501495, 2016.
savla2014robust
K. Savla, G. Como, and M. A. Dahleh, “Robust network routing under cascading
failures,” IEEE Trans. on Network Science and Engineering, vol. 1,
no. 1, pp. 53–66, 2014.
S1 E. Schrödinger, Über die Umkehrung der Naturgesetze, Sitzungsberichte der Preuss Akad. Wissen. Berlin, Phys. Math. Klasse (1931), 144-153.
TW Y. Takahashi and S. Watanabe, The Probability Functionals (Onsager-Machlup Functions) of Diffusion Processes, Lecture Notes in Mathematics, Vol. 851, Springer-Verlag, Berlin, 1980, pp. 433-463.
Vil C. Villani, Topics in optimal transportation, AMS, 2003, vol. 58.
Vil2 C. Villani, Optimal transport: old and new, Vol. 338. Springer, 2008.
Jonck C. Wang, E. Jonckheere, and R. Banirazi, “Wireless network capacity versus Ollivier-Ricci curvature under Heat
Diffusion (HD) protocol,” Proceedings of ACC, 2013.
watts1998collective
D. J. Watts and S. H. Strogatz, “Collective dynamics of 'small-world'
networks,” Nature, vol. 393, pp. 440–442, 1998.
WC L.B. White and F. Carravetta, Optimal smoothing for finite state hidden
reciprocal processes, IEEE Trans. Automatic Control, v. 56, no. 9, 2011, pp. 2156-2161.
The Multi-Blade Boron-10-based Neutron Detector for high intensity Neutron Reflectometry at ESS

Francesco Piscitelli, Francesco Messi, Michail Anastasopoulos, Tomasz Bryś, Faye Chicken, Eszter Dian, Janos Fuzi, Carina Höglund, Gabor Kiss, Janos Orban, Peter Pazmandi, Linda Robinson, Laszlo Rosta, Susann Schmidt, Dezso Varga, Tibor Zsiros, Richard Hall-Wilton

January 2017
================================================================================================
§ INTRODUCTION
In this manuscript we report on an improved design of the Multi-Blade detector for neutron reflectometry applications. The general performance of the Multi-Blade detector, in particular the spatial resolution and the counting rate capability described in <cit.> and summarized in Table <ref>, confirms that this is a promising route for a future detector technology.
The Multi-Blade design has been improved and a demonstrator has been built at the European Spallation Source (ESS <cit.>). It has been tested at the Source Testing Facility (STF <cit.>) at Lund University in Sweden and on a beam line at the Budapest Neutron Centre (BNC [BNC is a consortium of the Centre for Energy Research operating the 10 MW research reactor and the Wigner Research Centre for Physics operating the neutron scattering facilities there] <cit.>) in Hungary. A detailed description of the detector and the results of the tests are discussed in this manuscript.
§.§ Detector requirements for Reflectometry instruments
The Multi-Blade <cit.> detector has been introduced to face the challenges arising in neutron reflectometry <cit.>. The European Spallation Source (ESS) <cit.> is designed to be the world's brightest neutron source. The expected instantaneous neutron flux on detectors at ESS will be without precedent <cit.>, and neutron reflectometers are the most challenging instruments. ESS will have two of these, FREIA <cit.> (horizontal reflectometer) and ESTIA <cit.> (vertical reflectometer), and the expected instantaneous local flux at the detector is between 10^5 and 5·10^5/s/mm^2. The problem with count rates in reflectometry is a general one <cit.>, and the ESS solution could potentially be applied to existing instruments at other neutron sources.
Along with the rate capability, the spatial resolution (≈ 0.5 mm, needed in one direction only) is another crucial aspect that needs improvement in reflectometer detectors. There is great interest in expanding the neutron reflectometry technique beyond static structural measurements of layered structures to kinetic studies <cit.>. At current facilities (pulsed and reactor sources) the time resolution for kinetic studies is limited by the available flux. In references <cit.> a new instrument layout is presented for reactor sources to open the possibility of sub-second kinetic studies; however, this requires very high spatial resolution detectors.
These needs can in general not be met with the currently available technologies. ^3He and ^6Li-scintillators are the two main technologies used in neutron detectors for reflectometry instruments. Due to their limited counting rate capability, scintillators are in general a secondary choice for high flux reflectometers and they are used as support detectors. In terms of counting rate capability, neutron detection efficiency and γ-ray sensitivity <cit.>, ^3He is superior to scintillators. Therefore, most existing reflectometers <cit.> use this technology. Gaseous detectors come in several designs with varying performance. For cold neutrons (2.5-30Å), ^3He detectors have efficiencies of 50-90% <cit.> and global count rates between 20 kHz and 30 MHz over the whole detector active area (<0.5m^2). ^3He detectors are mainly proportional counters or Multi Wire Proportional Chambers (MWPC). Local count rates in MWPCs can in principle go up to 10 kHz/mm^2 <cit.>, although this has never been reported in the literature for neutrons. The spatial resolution that can be achieved with the ^3He technology is about 1.5 mm. Although the quantity of ^3He needed for reflectometers at ESS would be available <cit.>, the requirements in spatial resolution and counting rate capability described above cannot be fulfilled with this technology. The rate capability and spatial resolution requirements at ESS exceed the performance of current ^3He technology by a factor of 10-100 and a factor of 3 respectively. On the other hand, the spatial resolution that can be achieved with Wavelength Shifting Fiber (WLS) <cit.> detectors is below 1 mm and can easily fulfil the ESS requirements for reflectometers. However, scintillators can at best reach the same counting rate capability as ^3He and they are therefore not considered a good alternative as the main detector technology for reflectometers at ESS.
Table <ref> summarizes the main requirements for the two neutron reflectometers at ESS.
The detectors needed are modest in size (≤ 500·300 mm^2), with high counting rate capability and high spatial resolution required in one direction only. Background suppression (of both stray neutrons and γ-rays) is also an important feature, which must be balanced against the detection efficiency. A suitable detector shielding should prevent the detector from counting background neutrons from the environment. Depending on the γ-ray background on the instrument, a detector should provide a suitable γ-ray rejection, typically set to about 10^-6 <cit.>.
One current issue of ^3He-based detectors is the thick entrance Al-window that is needed to contain the high pressure of the vessel. Typically 10^-2 of the incoming neutrons are scattered by this window and detected in the detector <cit.>. For FREIA and ESTIA the desired detector window scattering is 10^-4.
§.§ The Multi-Blade concept
The Multi-Blade concept was introduced in 2005 <cit.> and two prototypes were built in 2013, showing promising results <cit.>. We refer to this detector design as Multi-Blade 2013 and to the improved design presented in this manuscript as Multi-Blade 2016. The aim is to realise detectors optimized for the high neutron rates expected at ESS. A detailed description of the Multi-Blade concept can be found in <cit.>; a sketch is shown in Figure <ref>.
Other detector designs have been based on the same concept; an example, optimized for neutron diffraction, can be found in <cit.>, whereas the Multi-Blade has been optimized for neutron reflectometry.
The Multi-Blade is a stack of Multi Wire Proportional Chambers (MWPC) operated at atmospheric pressure with a continuous gas flow (Ar/CO_2 80/20 mixture). The Multi-Blade is made up of identical units, the so-called `cassettes'. Each cassette holds a `blade' (a flat substrate coated with ^10B_4C <cit.>) and a two-dimensional readout system, which consists of a plane of wires and a plane of strips. Each ^10B_4C-converter (blade) is inclined at grazing angle (θ = 5 degrees) with respect to the incoming neutron beam. The inclined geometry has two advantages: the neutron flux is shared among more wires with respect to the normal incidence (the counting rate capability is increased) and the spatial resolution is also improved. Moreover, the use of the ^10B_4C conversion layer at an angle also increases the detection efficiency which is otherwise limited to a few percent at thermal energies for a single converter <cit.>.
The proof of concept of the Multi-Blade design has been demonstrated with two prototypes <cit.> (Multi-Blade 2013). It has been shown that this detector can be operated at a relatively small gas gain (about 58) and at a voltage of about 1000 V. Moreover, the spatial resolution has been measured and it is about 0.3 mm × 4 mm. It has also been shown that the neutron detection efficiency increases as the inclination decreases; however, it was measured at 10 degrees, while the value at 5 degrees was extrapolated.
A significant concern in a modular design is the uniformity of the detector response. It has been shown that a uniformity of about 2% can be obtained and that the overlap between cassettes produces about a 2 mm gap where the efficiency is reduced to 50% of the nominal efficiency. In the present prototype we address these issues by reducing the gap between the cassettes.
The scattering from the materials used in the Multi-Blade prototype, especially from the kapton (polyimide) foil that holds the cathode strips, has been measured <cit.>, and it turned out to be a key point to optimize in order to fulfil the ESS requirements.
§ IMPROVED DESIGN OF THE MULTI-BLADE DETECTOR
The improved version of the Multi-Blade (Multi-Blade 2016) has an active area of about 100× 140 mm^2 and it is made up of nine cassettes, each one is equipped with 32 wires (15 μ m diameter) and 32 4mm-wide strips (see Figure <ref> and <ref>). All cassettes are mechanically and electrically identical.
§.§ Inclined geometry and single layer
The first prototype <cit.> (Multi-Blade 2013) comprised two designs: either with one or with two ^10B_4C converters, i.e. blades coated on one side or on both sides. The latter has more technical issues that make its realization more difficult. The single-layer detector is ultimately the choice, in order to keep the mechanics reasonably simple. The two-layer version has been rejected; the loss in efficiency from the reduction in the number of layers can be compensated by a smaller inclination, with the extra advantages of improved resolution and higher counting rate capability.
Moreover, in the two-layer configuration the choice of the substrate is also crucial, due to the scattering of crossing neutrons. On the other hand, the advantage of having only one converter is that the ^10B_4C coating can be of any thickness above 3 μm without affecting the efficiency, while for the two-layer option its thickness should be chosen carefully and the thickness uniformity must be controlled. Since the desired requirement for scattering is set to 10^-4, the single-layer option helps to solve two issues at once: the critical choice of the substrate of the neutron converter (^10B_4C) and the choice of the material on which the strips lie (strip-holder). In the previous design the latter was a kapton (polyimide) foil that had to be crossed by the neutrons in order to reach the ^10B_4C layer (Figure <ref>). Some commercial materials have been evaluated to substitute the ^10B_4C-substrate and the strip-holder foil in the Multi-Blade design in order to decrease the scattering; their composition is listed in Table <ref>.
Figure <ref> (left) shows the fractional amount of a unit neutron beam (monochromatic at 1.8Å) scattered by a layer made of the materials listed in Table <ref> as a function of their thicknesses. Only the scattering cross-sections <cit.> (coherent and incoherent) are used in this calculation, since any neutron that is absorbed cannot cause any spurious event in the detector. We must now consider that these substrates and strip-holders are inclined at 5 degrees, so the actual thickness of any of these materials crossed by neutrons is a factor 11.5 larger (1/sin(5^∘)=11.5). The kapton thickness in the previous design (Multi-Blade 2013) was about 25 μm, corresponding to about 300 μm at 5 degrees; this implies approximately 7% scattering. This value was also confirmed by the measurements carried out in <cit.>. From Figure <ref> we can conclude that, because of the inclination, any chosen material for the strip-holder will cause an amount of scattering which is much larger than the requirement that has been set.
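As a cross-check of these figures, the scattered fraction can be estimated with a simple exponential attenuation model. In the sketch below (Python) the macroscopic scattering cross-section of kapton is backed out from the ≈7% value quoted above and should therefore be treated as an assumption rather than tabulated data.

```python
import numpy as np

DEG = np.pi / 180.0

def scattered_fraction(sigma_s_per_cm, t_um, angle_deg=5.0):
    """Fraction of an incoming beam scattered by a foil of thickness
    t_um (micrometers) crossed at grazing angle angle_deg; absorption
    is neglected, as in the estimate in the text."""
    t_eff_cm = t_um * 1e-4 / np.sin(angle_deg * DEG)  # ~11.5x longer path at 5 deg
    return 1.0 - np.exp(-sigma_s_per_cm * t_eff_cm)

# Assumed macroscopic scattering cross-section for kapton, chosen to
# reproduce the ~7% scattering quoted for a 25 um foil at 5 degrees.
SIGMA_KAPTON = 2.5   # cm^-1 (assumption)
print(scattered_fraction(SIGMA_KAPTON, 25.0))   # ~0.07
```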
Moreover, the substrate on which the ^10B_4C is deposited must have some mechanical strength to sustain the residual stress of the coating <cit.>. In the previous design a 2 mm-thick Al-substrate was used. For coatings deposited on a single side of the substrate, a deformation of the substrates was observed. Note that the planarity of the substrate is crucial for the uniformity of the electric field of the MWPC, and thus for the uniformity of the detector response. A simulation of the electric field has been performed in order to understand the mechanical deviations that can be accepted; it is discussed in Section <ref>. In the current detector (Multi-Blade 2016) a 7.5 μm-thick ^10B_4C-layer has been deposited on one side of a 2 mm-thick aluminium substrate.
Figure <ref> (right) shows the fraction of the neutron flux absorbed by the ^10B_4C-layer inclined at 5 degrees as a function of the neutron wavelength for several layer thicknesses. Note that the efficiency saturates above 3 μm and any extra film thickness only helps to absorb neutrons; e.g. a film of 5 μm will absorb more than 95% of the neutrons at the shortest wavelength of interest for the ESS neutron reflectometers (2.5Å). This means that only about 5% of the neutrons (at the shortest wavelength) can reach the substrate and thus can be scattered. If the probability of scattering from the substrate is e.g. 10% (see Figure <ref>), the maximum possible scattering is only 0.5% at the shortest wavelength.
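These absorption curves follow from the 1/v scaling of the ^10B absorption cross-section. In the sketch below, the macroscopic absorption cross-section of the enriched ^10B_4C film at 1.8 Å (≈0.04 μm^-1) is an assumed typical value, not a measurement from this work.

```python
import numpy as np

DEG = np.pi / 180.0
SIGMA_A_1P8 = 0.042   # um^-1 at 1.8 A for enriched 10B4C (assumption)

def absorbed_fraction(d_um, wavelength_A, angle_deg=5.0):
    """Fraction of neutrons absorbed in a 10B4C film of thickness d_um
    inclined at angle_deg (effective path d / sin(theta)); the 10B
    absorption cross-section scales linearly with wavelength (1/v law)."""
    sigma = SIGMA_A_1P8 * wavelength_A / 1.8
    return 1.0 - np.exp(-sigma * d_um / np.sin(angle_deg * DEG))

print(absorbed_fraction(7.5, 2.5))   # ~0.99, as quoted for the 7.5 um layer
print(absorbed_fraction(5.0, 2.5))   # >0.95, as quoted for a 5 um layer
```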
Referring to Figure <ref>, in the new design we decided to keep kapton as the material for the strip-holder for the simplicity of its realization. Both the substrate and the strip-holder are hidden behind a 7.5 μm-thick ^10B_4C-layer; the converter absorption in this case is 99% at 2.5Å. In this configuration the substrate and strip-holder materials are not crucial for scattering since they are barely crossed by neutrons. This is ultimately the choice made to lower the scattering within the detector, at the price of a more complicated electric field geometry in the MWPC. The readout of a single ^10B_4C-converter is then performed by the facing anode wire plane and by the strip plane that mechanically belongs to the adjacent cassette.
§.§ Multi Wire Proportional Chamber choice
§.§.§ Multi Wire Proportional Chamber rate capability considerations
Due to their good position resolution, low material budget and low cost, MWPCs are widely used in high-energy physics <cit.>. It is well known that in proportional counters and MWPCs the output amplitude depends on the count rate; the effect is explained by space charge (the presence of slowly moving positive ions) which decreases the electric field near the anodes, thus decreasing the gas amplification factor <cit.>. The MWPC geometry in the Multi-Blade has been optimized in order to reach the needed counting rate capability and spatial resolution. The three main features of this MWPC are: the small gap between wires and strips, the low gas gain operation and the inclined geometry.
The spatial resolution is limited by the track lengths of the neutron capture reaction fragments in Ar/CO_2 at atmospheric pressure, which are in the range of a few mm. The inclination affects the lower limit of the spatial resolution that can be obtained with the wires (sub-mm) but it does not affect the resolution given by the strips. This will be discussed in more detail in Section <ref>.
The inclination improves the counting rate capability as well, because the neutron flux is spread over an 11.5 times wider surface (1/sin(5^∘)=11.5).
A full characterization of the counting rate capability of MWPCs can be found in <cit.>. The counting rate capability of a MWPC is proportional to 1/(h^3· s) with h the wire to cathode distance and s the wire pitch (both are about 4mm in the Multi-Blade). By changing h, the rate capability is strongly influenced <cit.>.
Experimental studies on the response of MWPCs to high rates can be found in <cit.>; all of them have been performed with X-rays and not neutrons. A high count rate does not trigger any breakdown in a MWPC but only reduces the output amplitudes due to the accumulation of space charge <cit.>. In some high-rate measurements, a standard or thin-gap MWPC can also be used if its spatial resolution satisfies the requirements <cit.>.
Almost all primary ion pairs are formed outside the multiplication region. After drifting towards the anode, each electron produces, on average, the same number of secondary electrons by a charge multiplication process close to the anode wire. All secondary ion pairs generating the signal are thus approximately produced at the anode. Although the primary charge is not the same for X-rays and neutrons, we can compare the two by normalizing to the same amount of secondary charge. When a neutron is converted in the ^10B_4C-layer, the neutron capture fragments that reach the gas create a primary charge of at most 9 fC (alpha particle of 1470 keV in Ar/CO_2 (80/20), w=27 eV <cit.>), which corresponds to about 55000 pairs. For a gas gain of 15, we can expect at most 800000 pairs (0.1 pC). A 6.6 keV X-ray in the same gas mixture produces about 250 primary pairs, which means a gas gain of about 3200 is needed to get the same secondary charge. Referring to <cit.>, such MWPCs can reach counting rate capabilities beyond 10^4 Hz/mm^2 for the same geometrical parameters as the Multi-Blade. Due to the inclined geometry this number must be multiplied by a factor 11.5. We therefore expect the Multi-Blade to show a counting rate capability beyond 10^5 Hz/mm^2. However, the differences in primary charge deposition between X-rays and neutrons have been neglected in this comparison, so experimental evidence is needed.
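The equivalence used in this comparison is simple arithmetic and can be written out explicitly (a sketch; the input values are those quoted above):

```python
E_ALPHA_eV = 1470e3    # most energetic neutron capture fragment (alpha), eV
W_PAIR_eV = 27.0       # mean energy per ion pair in Ar/CO2 (80/20), eV
Q_E_fC = 1.602e-4      # elementary charge, fC
GAIN = 15              # Multi-Blade gas gain

pairs_neutron = E_ALPHA_eV / W_PAIR_eV     # ~55,000 primary pairs
q_primary = pairs_neutron * Q_E_fC         # ~9 fC
q_secondary = q_primary * GAIN             # ~0.13 pC of secondary charge

pairs_xray = 6.6e3 / W_PAIR_eV             # ~250 pairs for a 6.6 keV X-ray
gain_equivalent = q_secondary / (pairs_xray * Q_E_fC)
print(round(pairs_neutron), round(q_secondary, 2), round(gain_equivalent))
# gain_equivalent ~3,400, comparable to the ~3200 quoted in the text
```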
Note that the ion mobility differs for various gas types and is generally constant with respect to the reduced electric field (E/P) <cit.>. A larger pressure can be compensated by a larger electric field for a given geometry <cit.>. The long ion collection time is the main reason for the space charge effect and it can be of the order of ms. Note that a thicker wire can be used to speed up the ion collection at the cathode by increasing the drift field while keeping the gas gain fixed, with the extra advantage of a more robust mechanical structure. One way to lower the space charge effect is to decrease the gas gain as much as possible (keeping a reasonable signal-to-noise ratio for the electronics) in order to diminish the total charge to evacuate per converted neutron.
§.§.§ Amplification alternatives
In some cases the detector is operated in a vacuum tank, a requirement set by the instrument to avoid the scattering of the incoming neutrons. Since the Multi-Blade detector is operated at atmospheric pressure with a continuous gas flow, cost-effective materials can be used in the detector and the differential pressure on the detector entrance window is much smaller than that of ^3He-based detectors, as ^3He detectors are usually filled with a few bars of gas. This implies that the thickness of the window of the Multi-Blade can be significantly reduced. In the event of atmospheric pressure in the neutron flight path, this means that a thin Al-foil can be used as the window.
Replacing the wires with other readout systems is worth considering. It has been shown that Micro Pattern Gaseous Detectors (MPGD) can reach higher counting rate capability and good spatial resolution <cit.>; Gas Electron Multiplier-based (GEM) neutron detectors, for instance, can reach a few tens of MHz/cm^2 <cit.>. However, the inclined geometry of the Multi-Blade detector makes the choice of wires very natural. If for example we consider Gas Electron Multipliers (GEMs) <cit.>, these are made of kapton and, even though they can be manufactured as very thin foils, the inclination increases the effective thickness and results in excessively high scattering.
In other applications <cit.> where these layers are crossed perpendicularly by neutrons they represent a valid technology in neutron detection.
Other possible MPGD choices are the Micro Strip Gas Chambers (MSGC) <cit.> or the Micro-Mesh Gaseous Structure (MicroMegas) <cit.>, both of which present challenges when operating in this specific geometry of closely stacked cassettes. Moreover, MicroMegas employ PCBs, which pose scattering challenges for operation at an angle.
Moreover, operating the wire chamber of the Multi-Blade in ionization chamber mode (a gas gain of 1) would be one way to reduce the space charge effect, but it is challenging in terms of noise suppression.
§.§.§ Signal formation
The Multi-Blade MWPC aims to work at a gas gain of 15 and this is only possible if the anode wires and cathode strips are read out individually. This scheme reaches a larger signal-to-noise ratio than the charge division readout <cit.>. In general, ^3He-based detectors use the latter readout and the secondary charge to evacuate is generally much larger: about 3 fC (20000 pairs, E=770 keV, w=37 eV <cit.>) are created per neutron conversion and a gas gain of a few hundred is needed to reach the 1-2 pC of charge required for the position resolution. They are often operated in the region of limited proportionality, in which the response is greatly affected by the space charge of the large amount of positive ions created in the avalanche; the effect of self-induced space charge <cit.>. Note that the improvement in counting rate capability is mainly connected to the detector parameters chosen, including the readout scheme, and only marginally to the isotope used to convert neutrons.
At high rate operation, the individual readout (as opposed to charge division) is mandatory to disentangle hits occurring nearly at the same time (that is, unresolved due to the finite time resolution of the detector). The measured amplitudes on the wires and on the strips are strongly correlated (since they couple to the same avalanche), therefore with sufficient dynamic range, the ambiguity can be resolved most of the time by requiring matching amplitudes.
§ SIMULATIONS
§.§ Cassette arrangement simulation
The Multi-Blade is a modular detector and a significant concern in a modular design is the uniformity of the response. Several effects might contribute to degrading the uniformity and they have to be taken into account in the detector concept. In particular, the substrate flatness is an important aspect for keeping the electric field uniform in the MWPC; from the point of view of efficiency, however, it is the overlap between different cassettes that requires careful study. The cassettes must be arranged over a circle around the sample to provide a uniform response; the precision of the arrangement and the mechanical tolerances are the crucial aspects to consider from the mechanical point of view.
In the current detector the coated area of a substrate has been fixed to X × Y = 130 mm × 140 mm; thus at 5 degrees each blade subtends an area of about 11.3 mm (=130 mm · sin(5^∘)) × 140 mm. It is possible to increase the vertical dimension of the blade to match the area coverage required by the instruments, subject to mechanical considerations. For FREIA the Multi-Blade will have 300 mm-long blades placed horizontally, and for ESTIA the cassettes will be placed vertically and they will be 250 mm high. The number of blades needed to cover the other dimension is about one per cm.
The sample-to-detector distance is fixed for both reflectometers to 3 m and 4 m (Table <ref>), and the neutron detection efficiency for a single ^10B_4C-layer (>3 μm) inclined at 5 degrees is about 44% at 2.5 Å <cit.>. We consider the sample to be a point-like source of neutrons. The specular incidence (and reflection) angle never exceeds a few degrees (e.g. 3 degrees) and the sample surface has a physical extension of at most 80 mm, so the projected sample dimension toward the detector is only about 4 mm at a 3 m distance. The difference between the angles of incidence of neutrons impinging at the beginning and at the end of a 130 mm blade at a distance of 3 m is within 0.2 degrees. The efficiency then varies only within 1% due to the change in angle, while a wider substrate would cause a larger efficiency variation. Figure <ref> (left) shows the calculated efficiency <cit.> for 10 cassettes arranged circularly around a center of scattering, i.e. the sample, at 3 m distance. Each cassette shows the expected variation of efficiency within 1%. On the other hand, if the cassettes are stacked parallel to each other (right plot in Figure <ref>), the variation of the efficiency is still about 1% within each cassette but it varies dramatically from one cassette to another. This variation is about 10% from the first to the last unit. It is thus crucial to arrange the cassettes over a circle around the sample in order to keep the variation of the efficiency within 1%.
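The geometry behind this comparison can be sketched as follows (a simplified Python illustration; the efficiency variations themselves require the efficiency model of the cited reference, so only the grazing angles are computed here):

```python
import numpy as np

DEG = np.pi / 180.0
R_MM = 3000.0          # sample-to-detector distance
BLADE_MM = 130.0       # coated blade length
THETA0 = 5.0 * DEG     # nominal grazing angle
N_CASS = 10

def grazing_angles(circular):
    """Grazing angle at the centre of each blade: every blade sees the
    nominal 5 degrees in the circular arrangement, while in a parallel
    stack the direction from the (point-like) sample changes from unit
    to unit while the blades stay parallel."""
    pitch = BLADE_MM * np.sin(THETA0)                    # ~11.3 mm per cassette
    offset = (np.arange(N_CASS) - (N_CASS - 1) / 2) * pitch
    if circular:
        return np.full(N_CASS, THETA0)
    return THETA0 - np.arctan(offset / R_MM)

for circ in (True, False):
    th = grazing_angles(circ)
    print('circular' if circ else 'parallel',
          'grazing angles: %.2f to %.2f deg' % (th.min() / DEG, th.max() / DEG))
# Feeding these angles into the efficiency model of the cited reference
# yields the ~1% (circular) versus ~10% (parallel) variations quoted above.
```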
The cassettes are arranged to have some overlap, i.e. each blade makes a shadow over the adjacent one in order to avoid dead areas. In this set-up approximately 15 mm of the active region of each cassette is shadowed by the adjacent unit, which corresponds to 1.3 mm at 5 degrees. Therefore, if each cassette subtends an area of 11.3 mm × 140 mm, this area is reduced to 10 mm × 140 mm due to the shadowing effect. Referring to Figure <ref>, the overlap between units is shown in red. This function assumes only integer values according to the number of blades crossed by a neutron path from the sample to the detector. The efficiency is only plotted for one or no converters crossed; any layer beyond the first does not contribute to the efficiency because each layer is assumed to absorb essentially all incoming neutrons.
A more extended substrate may help to relax the requirement on the overall thickness of a cassette, presently 10 mm. A blade of X=130 mm already represents a challenge in terms of available mechanical space. Increasing X results in a reduced number of cassettes needed to cover a given area, but too large an extension of the blade will degrade the uniformity of the detector because of the variation of the incidence angle.
A misalignment of the detector angle affects the overall response more than a misalignment in the distance to the sample. E.g. a variation of the arrangement angle of ±0.5 degrees causes a variation of the efficiency of about 4%, whereas a detector mispositioned by ±0.2 m only causes a variation in efficiency of approximately 2%.
§.§ Electric field simulation
In the current detector the units are arranged over a circle of 3 m radius and the relative angle between two adjacent cassettes is 0.19 degrees. The wire plane and the converter (^10B_4C-layer) are parallel and at 4.6 mm distance. They physically belong to the same cassette. The strip plane of each MWPC belongs instead to the adjacent unit which is positioned with +0.19 degrees angle, i.e. the strip plane is not parallel to the wire and converter planes. This geometry affects the electric field and it is the price one has to pay to remove any material from the path of the incoming neutrons that can cause scattering. The wire pitch is 4 mm and they have a diameter of 15 μ m. The gas mixture is Ar/CO_2 (80/20) continuously flushed at atmospheric pressure. The wires are labelled from 1 to 32 as shown in Figure <ref>.
Figure <ref> (left) shows a sketch of the simulated process: a neutron is converted in the boron layer and a charged particle escaping from the layer generates electrons along its trajectory, which then drift along the electric field lines. For each electron we calculate the integral of the Townsend coefficient over the drift path. This represents the mean number of secondary electrons produced along the trajectory of the drifting electron, assuming exponential avalanche fluctuations and neglecting attachment. The gas gain per wire is shown on the right in Figure <ref> for several voltages applied to the wires. At a voltage of 800 V we expect a gas gain of 15. The gas gain varies considerably along the first 7 wires due to the increase in the cathode gap while moving toward the edge of the cassette. There is a slight reduction of the gain from wire 8 until the end of the unit, due to the relative angle between the cassettes (0.19 degrees). The variation in gain is smaller for lower voltages.
Figure <ref> (left) shows the gas gain per wire for a voltage of 800 V; the converter and the opposite strip plane are kept fixed while the wire plane is shifted orthogonally to the converter plane. A positive shift means that the wire plane and the strips are closer, while a negative shift indicates that the wire plane is moving closer to the converter. On the right in Figure <ref>, the wire plane and converter are kept at a fixed position and the strip plane is shifted. The first set-up corresponds to a misalignment of the wire plane within a single cassette, whereas the second corresponds to a misalignment in the arrangement of the cassettes. We can conclude that a misalignment in the wire plane positioning is less critical than a misalignment in the cassette arrangement. A 0.5 mm deviation in the wire positioning can be tolerated, and corrected by threshold adjustment; the constraint is much stricter for the arrangement of the cassettes.
As a possible solution, in order to compensate the reduction in gain and to improve the dynamic range, the wire thicknesses or the pitch of the first 7 wires can be adjusted according to the gain variation. Otherwise the gain drop at the first 7 wires can be compensated with a separate high voltage supply or by adjusting individual thresholds, in hardware or software, on each channel.
§ EXPERIMENTAL SETUP
The tests were performed at the triple axis spectrometer ATHOS at BNC <cit.> (Budapest - Hungary). The preliminary test of the Multi-Blade and the γ-sensitivity measurements were performed at the Source Testing Facility (STF) <cit.> at the Lund University (Sweden).
The demonstrator is made up of 9 cassettes, each one equipped with 32 wires and 32 strips. The readout of these channels depends on the type of pre-amplifier board that is plugged in. Two readout schemes are possible: a charge division readout (as for the previous demonstrator <cit.>, Multi-Blade 2013), which reduces the number of channels to 4 per cassette, and an individual readout option with 64 channels per cassette (one amplifier for each wire or strip). In the tests one cassette was equipped with the individual readout (64 channels) and the remaining 8 cassettes with the charge division readout, thus 32 channels in total. This configuration kept the number of readout channels reasonably low while allowing a complete detector to be tested. The detector has been operated at 800 V for the individual readout (gas gain ≈ 15), whereas a voltage of 1300 V (gas gain ≈ 2000) has been used for the charge division readout in order to get a suitable signal-to-noise ratio and the corresponding spatial resolution. In the final configuration for a high rate detector, the gas gain must be kept as low as possible in order to reduce the space charge effects and to exploit the maximum counting rate capability; thus the individual readout is the sole option. Figure <ref> shows a sketch of the readout electronics chain.
The pre-amplifiers are CREMAT <cit.> CR-110 charge sensitive pre-amplifiers and are placed inside the gas vessel; any signal can optionally be shaped with a CREMAT <cit.> CR-200-500ns Gaussian shaping amplifier or sent directly to the acquisition system. This means that we may choose to record either a pre-amplifier signal or a shaped signal. The pre-amplifiers have a gain of 1.4 V/pC and the shaping amplifiers a gain of 10 with 500 ns shaping time. A CAEN <cit.> Digitizer is used to record the traces (Mod. DT5740, 12 bit, 62.5 MS/s).
In order to quantify the actual counting rate capability of the detector itself, for the specific counting rate measurement we use a discriminator and a scaler bypassing the digitizer. The scaler was a CAEN <cit.> V830 with maximum counting rate of 250 MHz and the constant fraction discriminator was a CAEN <cit.> V812.
§ RESULTS
§.§ Signals, PHS and gas gain
When a neutron is converted, the capture reaction fragments can escape the layer with any orientation and, due to the random energy loss, the energy deposited in the gas volume ranges from zero to the full energy carried by the particle. The maximum range of the alpha particle (1470 keV) in Ar/CO_2 at atmospheric pressure is about 8 mm, which corresponds to twice the wire (strip) pitch. Figure <ref> shows the signals (after the shaper) collected at four adjacent wires. These wires are in the center of the cassette (wires 13 to 16 out of 32) read out with individual amplifiers. In both plots the full energy carried by the particle is the same and corresponds to ≈ 1000 ADC levels. In the plot on the left the total charge is collected by one wire, whereas in the plot on the right it is collected by two wires, demonstrating the orientation effect. Moreover, when the ion charge created in the avalanche close to the wires travels toward the cathodes, it induces a signal on the involved wire and a negative pulse on the neighbouring wires not directly involved in the process.
Due to the geometry of the MWPC of the cassette, a single wire is involved in the detection process about 70% of the time and two wires about 30% of the time; the probability of three or more wires being involved in a detection process is below 1%.
The capacitive coupling of the signal towards the strips is different from that of the wires, and it implies that most of the time two strips are involved in a detection process. In about 25% of the events either a single strip or three strips are involved, in about 50% two strips fire, and the probability of four or more strips firing is below 1%.
Figure <ref> shows the PHS measured at 800 V on an individual wire and its neighbours. The black curve is obtained by adding together all the events that are collected at that wire alone (when a single wire fires for a neutron event); the green curve is obtained by summing the amplitudes of that wire and its right or left neighbour, i.e. events that involve two wires. The red curve is the sum of the two measured PHS and the blue curve is the calculated PHS obtained as shown in <cit.> with an energy resolution of 50 keV and an energy threshold of 100 keV.
Based on the features of the PHS, knowing the amplifier gain (14 V/pC) and that the alpha particle generates on average at most 9 fC of primary charge, we extrapolate a gas gain of (20±3).
From the simulations in Section <ref> we expect the gas gain to be approximately 15 and a deviation of 0.5 mm in the positioning of the cathode plate to produce an error of ±3 on the gas gain. This is compatible with the experimental evidence discussed in this section [As already mentioned in Section <ref>, in the present demonstrator the converter substrates are made of aluminium, which suffers from deformation in the coating process.].
§.§ Efficiency
It has been shown in <cit.> that the theoretical efficiency is reached at an inclination of 10 degrees of the converter layer, but it was not possible to prove this agreement at 5 degrees. Moreover, it has been shown in <cit.> that the inclination of 5 degrees does not degrade the neutron detection efficiency through neutron reflection at the layer surface. The neutron detection efficiency of the Multi-Blade detector at 5 degrees has now been measured.
The neutron beam was collimated with two collimation slits, giving a 2×10 mm^2 footprint on the detector, and a PHS was recorded for 200 s. The integral of the PHS above the threshold, normalized to the incoming flux, gives the efficiency. The measurement was repeated to calibrate the neutron flux with a ^3He-based detector filled with 8 bar of ^3He and 1 bar of CF_4, which is 3 cm thick. We assume this detector to be fully efficient at the wavelengths at which we measure the flux and that the reactor does not fluctuate between the measurements with the Multi-Blade and the ^3He detector. This measurement was performed for two neutron wavelengths, 4.2 and 5.1Å, resulting in (56±2)% and (65±2)% respectively.
Figure <ref> shows the measured and calculated efficiencies according to <cit.> with a 100 KeV energy threshold.
§.§ Overlap and Uniformity
The neutron beam was collimated to about 0.2×10 mm^2 by using two collimation slits. The detector was scanned across the wires of two adjacent cassettes read out by charge division with a step of 0.25 mm. A PHS was recorded for each cassette and for each position of the scan. The number of counts in the PHS, normalized to the average counts, is plotted in Figure <ref> as a function of the position, i.e. the relative efficiency is plotted as a function of the position.
The zero position is aligned with the edge of cassette 2. As shown in Section <ref>, we expect the gas gain to vary in the first part of the cassette due to the geometry; this is shown by the blue experimental points, which were obtained by keeping the threshold fixed for all positions of the scan. If the threshold is instead adjusted according to the change in gain observed in the PHS, we obtain the green curve. The detector is fully efficient except over a range of 0.75 mm at the cassette edge, in which two experimental points lie. The relative efficiency drops to 80% and to slightly less than 50% for these two points. The edges of the two binned points result in a total gap of 0.5 mm. In the prototype presented in <cit.>, the Multi-Blade 2013, this gap was about 2 mm.
This measurement was performed at 1300 V, suitable for the charge division readout, and the overall variation of gain across a cassette is within 20%; the measurement was repeated on another cassette and the same variation was found. We also repeated this scan over the single cassette equipped with the individual readout and found a variation within 10%. We expected the uniformity to improve for a lower gas gain (used in the individual readout) because the electric field is weaker and tolerates larger mechanical imperfections.
§.§ Spatial Resolution
The spatial resolution of the detector was measured in both directions by scanning the surface with a collimated beam of about 0.2×10 mm^2. The definition adopted for the resolution across the wires is based on Shannon information theory; it is given in <cit.> and was already used in <cit.>. Note that the resolution of this detector is not primarily determined by the wire or strip pitch alone, since the track lengths of the neutron capture reaction fragments are comparable to the wire/strip pitch. In <cit.> it was shown that the resolution across the wires is neither one wire pitch nor two, but rather something in between.
The measurement was performed using both the charge division readout and the individual readout. The position of an event can be reconstructed by assigning the triggered event to the wire or strip with the largest amplitude (ADC level), regardless of how many wires or strips are involved in the detection process. Alternatively, a Center of Gravity (CoG) algorithm can be used; it weights the amplitudes of neighbouring wires (or strips) to get a finer spatial resolution. With the charge division readout the CoG reconstruction is performed by default, so the only way to compare the two algorithms is with the individual readout.
Using the largest amplitude reconstruction algorithm we get 0.59 mm spatial resolution across the wires (X); if we use the CoG algorithm (for both the individual and the charge division readouts) the resolution is modestly improved to 0.54 mm according to the definition given in <cit.> (see Figure <ref>).
From the scan across the strips, the resolution is approximately the strip pitch if the largest amplitude algorithm is used (a similar result was already found in <cit.>), but it can be improved to 2.5 mm by using the CoG algorithm (see Figure <ref>). The fact that the resolution across the strips improves to a larger extent than that across the wires is due to the larger probability for an event to involve more than one strip than more than one wire (see Section <ref>).
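As an illustration of the two reconstruction algorithms, the following sketch (with made-up amplitudes; not the actual analysis code) contrasts the largest-amplitude and CoG estimates for a charge-sharing event:

```python
import numpy as np

def reconstruct(amplitudes, pitch_mm):
    """Position of a hit from per-channel amplitudes: 'max' assigns the
    event to the channel with the largest amplitude, 'cog' weights the
    neighbouring channels by their amplitudes (Center of Gravity)."""
    a = np.asarray(amplitudes, dtype=float)
    x_max = float(np.argmax(a)) * pitch_mm
    x_cog = float(np.sum(np.arange(a.size) * a) / np.sum(a)) * pitch_mm
    return x_max, x_cog

# Hypothetical event sharing charge between two adjacent 4 mm strips
amps = [0, 0, 120, 380, 0, 0]
print(reconstruct(amps, pitch_mm=4.0))   # (12.0, 11.04)
```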
§.§ Stability
The Multi-Blade was flushed with Ar/CO_2 (80/20, fractions by volume) and a full volume (≈ 30 l) of the detector was renewed every day (24 h). A PHS was recorded every 10 minutes during a night with the collimated neutron beam impinging on the cassette read out with individual amplifiers. A variation of counts (for a fixed threshold) within 1% was found over 12 h. The measurement was repeated and gave similar results.
We expect the variation in counts to follow the atmospheric pressure variation if neither the HV, nor the threshold, nor the pressure in the vessel are adjusted accordingly; however, a stability within 1% is sufficient to operate the detector for most neutron reflectometry measurements.
§.§ Counting rate capability and shaping time
For the measurement of the counting rate capability of the detector, the cassette equipped with individual amplifiers was used. Since we want to quantify the intrinsic capability of the Multi-Blade without being limited by the DAQ, the front-end electronics is directly coupled to a constant fraction discriminator and a scaler (see Figure <ref>).
The Multi-Blade was placed at the focal point of the monochromator in order to get the highest possible flux, and the neutron beam was collimated onto the cassette of interest by using two collimation slits. Many layers of a scattering material (A4 printer sheets without ink) were placed before the first collimation slit and they were progressively removed to increase the neutron flux on the detector, up to the full transmission of the beam. Figure <ref> shows the count rate for a single channel of the Multi-Blade as a function of increasing beam intensity; a beam transmission of 1 corresponds to full transmission of the beam without any scatterer. Each point is obtained with a different amount of scattering material. The measurement is performed with collimation to a spot to probe the local counting rate capability and to a vertical slit to probe the rate capability of a channel. The plot also shows the residuals of the measured points with respect to the linear trend.
A deviation from linearity in the trend would suggest space charge accumulation. As expected, there is no sign of saturation up to the maximum measured rates, which are (1586±7) Hz/mm^2 locally and (16590±70) Hz per channel. Further tests with a more intense source are needed to probe the limit of the Multi-Blade.
The cassette with individual readout was then connected to the digitiser, with the shaper board omitted (see Figure <ref>). We thus access the pre-amplifier signals directly in order to quantify the minimum shaping time needed for the signals. An example of the signals is shown in Figure <ref> (left), together with their first derivative to highlight their fast component. The distribution of the fast component of the signals, i.e. the rise time, is shown in the plot on the right in Figure <ref>. The fast rise of the signals is due to the motion of the ions generated in the avalanche process travelling toward the cathodes. Although the ion collection time can be of the order of a few hundred microseconds, the pulse formation is generally faster by about a factor of 10^3 <cit.>. The ions generate most of the signal while they move in the region of higher electric field intensity, i.e. close to the wire. For the Multi-Blade this fast component is at most 300 ns.
With a shaping time of 500 ns, a post-shaping signal width of about 1.5 μs can be obtained. For pile-up rejection purposes this corresponds to rates up to about 600 kHz per channel.
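A rough way to see what a 1.5 μs pulse width implies for pile-up is the standard Poisson estimate sketched below (a back-of-the-envelope illustration; the rates looped over are arbitrary example values).

import numpy as np

pulse_width_s = 1.5e-6  # post-shaping signal width quoted above

for rate_hz in (1e4, 1e5, 6e5):
    # For Poisson arrivals, the probability that another pulse falls within
    # one pulse width of a given event is 1 - exp(-rate * width).
    pileup_fraction = 1.0 - np.exp(-rate_hz * pulse_width_s)
    print(f"{rate_hz:8.0f} Hz -> pile-up fraction ~ {pileup_fraction:.3f}")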
§.§ Images
A spread beam, diffused by using a polyethylene scatterer, was centred on the detector entrance window, and a mask shown in Figure <ref> was placed on it to obtain an image. The reconstruction was obtained with the 8 cassettes read out with charge division. Figure <ref> also shows the reconstructed histograms, i.e. images, for the two masks used.
§.§ Gamma sensitivity
The gamma sensitivity of a neutron detector is defined as the probability for a γ-ray to generate a false count in a neutron measurement. The γ-ray sensitivity defines the best achievable signal-to-noise ratio, as in neutron scattering experiments the detectors are typically exposed to a large amount of γ-rays. It has been demonstrated that detectors based on thin converter films, such as ^10B_4C layers, have γ-ray rejection equal to that of ^3He tubes. An extended study of the γ-ray sensitivity of neutron detectors based on thin converter films can be found in <cit.>.
To summarize, low energy photons (a few tens of keV) generally transfer all their energy to an electron in a photoelectric interaction and mainly interact with the gas of the detector, whereas medium energy photons (a few hundreds of keV) Compton scatter, losing only part of their energy, and mainly interact with the solid, i.e. the Aluminium of the mechanics. The energy for which the two contributions are of equal order of magnitude is 150 keV for Ar/CO_2 at 1 bar and Aluminium. Interactions with the gas are guaranteed to generate a signal, while those in the solid only result in a signal if the electron reaches the gas. The energy deposited in the counting gas rarely exceeds a few tens of keV regardless of the energy of the photon, because an electron of higher energy has a lower energy loss per unit track length; only at several MeV does the measured energy increase. A low gas pressure is preferable for neutron detectors, both to reduce the window thickness and to decrease the number of γ-rays converted.
For a gaseous detector, the range of the ions generated by the neutron capture in the counting gas is typically of the order of a few mm, while the range of electrons is much larger for the same energy. In detectors based on ^10B_4C layers, the ions from the neutron capture reaction can reach the gas after losing an arbitrary amount of energy while still in the layer; thus an arbitrarily low amount of energy can be deposited in the gas volume, so that the neutron spectrum mixes with the photon spectrum. The neutron events that deposit an energy that can also be deposited by a γ-ray must be rejected.
The Multi-Blade response to reference γ-ray sources with known activity has been measured. The thresholds (hardware threshold) were set so that the electronic noise was rejected. Three sources were used and they are listed in Table <ref> with the main γ-rays emitted and their intensities. The activity at the time of the measurement is also shown. The ^57Co and ^133Ba can be considered as low energy sources whereas the ^60Co emits only photons above 1 MeV.
The γ-ray efficiency (or sensitivity) is defined as the probability for a photon incident on a detector element (a cassette or a single wire/strip) to result in an event. In practice, the number of events that exceed a set threshold is normalized to the activity of the source and to the solid angle. We define the flux incident on a detector element as all the photons emitted into the solid angle of that detector element.
Each source was placed close to the entrance window of the Multi-Blade, directly outside of the cassette equipped with the individual readout electronics; this results in a solid angle acceptance of approximately 0.2 sr.
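The normalization just described can be made concrete with the sketch below (illustrative numbers only: the counts, time and activity are hypothetical, while the 0.2 sr acceptance matches the geometry above).

import numpy as np

def gamma_sensitivity(counts, time_s, activity_bq, solid_angle_sr):
    # Probability that a photon incident on the detector element produces
    # an event above threshold. Photons "incident on the element" are those
    # emitted into its solid angle, so the incident rate is
    # activity * Omega / (4*pi).
    incident_rate = activity_bq * solid_angle_sr / (4.0 * np.pi)
    return (counts / time_s) / incident_rate

# Hypothetical example: 100 counts above threshold in 10 h from a
# 1 MBq source seen under 0.2 sr gives a sensitivity of ~1.7e-7.
print(gamma_sensitivity(counts=100, time_s=36000,
                        activity_bq=1e6, solid_angle_sr=0.2))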
A PHS was recorded for each of the three sources, and in the absence of any source to estimate the background. The latter is the sum of the environmental background and the false counts that arise from the natural decay of radioactive contaminants typically present in standard Aluminium alloys <cit.>. In order to get the PHS, we sum all the counts recorded with wires 10 to 30 of the cassette, both to increase the statistics and to take into account only wires with approximately the same gas gain. Note that wires 1 to 9 have been excluded because of the strong gain variation (Section <ref>). The Multi-Blade is operated at 800 V. For each event the sum over all the wires involved in the detection process is shown and the event is associated with the wire with the largest amplitude, as for the neutron PHS in red in Figure <ref>; the latter is also reported in Figure <ref> (blue curve) for comparison. The alpha peak of the 94% branching ratio of the neutron capture reaction (corresponding to 1470 keV) is used to convert the PHS X-axis from ADC levels to energy. Figure <ref> (left) shows the PHS recorded for each source, normalized in time but not in solid angle and activity; hence the ^60Co PHS shows a larger number of counts because of the larger activity.
The average background rate in a single wire of the cassette used is approximately 7.8·10^-3 Hz if only the hardware threshold, just above the electronic noise, is applied; if a threshold of 100 keV is applied, the total background rate in the PHS is 2.6·10^-4 Hz. If a source is placed in front of the detector, the rates in the PHS are 0.65 Hz, 0.75 Hz and 3.38 Hz for ^57Co, ^133Ba and ^60Co respectively. If a 100 keV energy threshold is applied, they decrease to 1.2·10^-3 Hz, 1.3·10^-3 Hz and 1.9·10^-3 Hz respectively.
Figure <ref> (right) shows the total counts in the PHS, normalized to the activity of the source and to the solid angle, as a function of the threshold. Note that the statistical errors are reported in the plot, but the greatest uncertainty in this measurement comes from the distance between the source and the sensitive element. We estimate that this can lead to an uncertainty of no more than a factor of 2. The encapsulation of the reference sources is the same and they were placed in the same position in front of the detector. Any error due to the distance uncertainty is the same for each point of the measurement; thus it does not affect the shape of the curves but results in an overall normalization shift.
It has been found that for any of the sources used a threshold of 100 keV leads to a γ-ray sensitivity (efficiency) below 10^-7 per wire, whereas the neutron efficiency (at 4.2 Å) for the same threshold is (56 ± 2)% (Section <ref>). Intuitively, since the thresholds are set individually on each wire (or strip), any electron that travels a long distance in the cassette will spread its energy among many wires (or strips); i.e. the energy collected per wire (or strip) is a small fraction of the full deposited energy.
§ CONCLUSIONS AND OUTLOOK
Table <ref> summarizes the results of the tests performed with the improved Multi-Blade detector (Multi-Blade 2016), together with the results that had been obtained with the first demonstrator <cit.> (Multi-Blade 2013).
The Multi-Blade is a promising alternative to ^3He detectors for neutron reflectometry instruments at ESS and at other neutron scattering facilities worldwide. The design of the Multi-Blade has been improved with respect to the past <cit.> to meet the requirements for the ESS reflectometers. A demonstrator has been assembled at ESS and tested at the Budapest reactor (BNC <cit.>) and at the Source Testing Facility (STF) <cit.> at Lund University (Sweden).
The material that can cause neutron scattering, and thus misaddressed events in the detector, has been completely removed compared to the previous prototype <cit.>. The ^10B_4C-layer in the current detector serves as shielding for the readout system, which therefore cannot be reached by neutrons at all.
The detector is operated at atmospheric pressure with a continuous gas flow. Cost-effective materials can be employed because out-gassing is not an issue. Moreover, because of the lower pressure with respect to a ^3He high-pressure vessel, a thinner detector entrance window can be used, resulting in lower scattering. The detector was stable over 12 h within a 1% variation. Longer stability tests are needed in the future.
The efficiency at an inclination of 5 degrees has been measured to be ≈ 56% at 4.2 Å. The measured efficiency meets the requirements and agrees with the theoretical calculations; it is ≈ 44% at 2.5 Å, the shortest neutron wavelength used in reflectometry at ESS.
A high spatial resolution below 0.5 mm has already been shown in the past <cit.>. We found 0.5 mm × 2.5 mm with the new detector. The spatial resolution depends on the algorithm that is used to reconstruct the event position and, owing to the different geometry of the electric field at the strips with respect to the wires, the choice of algorithm affects the spatial resolution given by the cathode strips to a larger extent than that of the anode wires. The spatial resolution of the Multi-Blade is beyond what can be achieved with the state-of-the-art technologies at current facilities.
A relative efficiency drop of ≈ 50% was previously found in the earlier detector within a gap of 2 mm between the cassettes <cit.>. The improved detector geometry presented here, with a gap of 0.5 mm, resulted in approximately the same relative drop in efficiency.
It has been shown that this detector can be efficiently operated at a very low gas gain (≈ 20), a feature that is needed to reduce the space charge effects and improve the counting rate capability.
A uniformity within 20% was found when the detector is operated at the higher gain needed for the charge division readout; it improves to 10% with the individual readout, which requires a lower gas gain and hence a weaker electric field. The current detector employs 2 mm-thick Al-substrates on which the ^10B_4C-layer is deposited; a deviation from planarity due to the residual stress after the deposition process has already been observed. These Al-blades are replaced with Ti-blades in the next prototype. We have observed that the Ti-substrates do not show any visible curvature after deposition and therefore improve the detector uniformity.
At the price of a more complex electric field geometry, the strip cathode is shielded by the thick coating and is thus prevented from causing scattering and misaddressed events in the detector. Measurements and simulations show a drop in gas gain at the front of each cassette, i.e. at the first 7 frontal wires. There are mainly three ways to correct this drop in gas gain, and they will be investigated in the next Multi-Blade generation: the thicknesses of the first 7 wires can be changed according to the gain drop in order to compensate for the reduction; the gain drop can be compensated with a separate high-voltage supply; or individual thresholds can be adjusted on each channel.
The counting rate capability of the detector has been measured up to the highest rate available at the beam line. No saturation has been observed up to (1586±7)Hz/mm^2 locally and (16590±70)Hz per single channel.
The γ-ray sensitivity of the Multi-Blade has been measured for three γ-ray sources (^57Co, ^133Ba and ^60Co), and it was found that for an energy threshold of 100 keV, which preserves the full neutron efficiency, the sensitivity is below 10^-7 for any photon energy between a few keV and approximately 1 MeV.
We are assembling a Multi-Blade detector which employs Ti-blades to reach the required electric field uniformity; it will be equipped with individual readout only, over 576 channels. We intend to perform the standard detector characterization measurements and to take this detector to an existing reflectometer to measure reflectometry standard samples in a real environment, comparing the results with the state-of-the-art technology.
This work is being supported by the BrightnESS project, Work Package (WP) 4.2 (Horizon 2020, INFRADEV-3-2015, 676548) and carried out as a part of the collaboration between the European Spallation Source (ESS - Sweden), the Lund University (LU - Sweden), the Linköping University (LiU - Sweden) and the Wigner Research Centre for Physics (Hungary).
The work was supported by the Momentum Programme of the Hungarian Academy of Sciences under grant no. LP2013-60.
The work was carried out in part at the Source Testing Facility, Lund University (LU - Sweden).
The work originally started in the context of the collaboration between the Institut Laue-Langevin (ILL - France), the Linköping University (LiU - Sweden) and the European Spallation Source (ESS - Sweden) within the context of the International Collaboration on the development of Neutron Detectors (www.icnd.org).
|
http://arxiv.org/abs/1701.07476v2 | 20170125203358 | The Smirnov class for spaces with the complete Pick property | [
"Alexandru Aleman",
"Michael Hartz",
"John E. McCarthy",
"Stefan Richter"
] | math.FA | [
"math.FA",
"math.CV",
"46E22 (Primary), 30H15, 30H80 (Secondary)"
] |
We show that every function in a reproducing kernel Hilbert space with a normalized
complete Pick kernel is the quotient of a multiplier and a cyclic multiplier.
This extends a theorem of Alpay, Bolotnikov and Kaptanoğlu.
We explore various consequences of this result regarding zero sets, spaces on compact sets and
Gleason parts. In particular, using a construction of Salas, we exhibit a
rotationally invariant complete Pick space of analytic functions on the unit disc
for which the corona theorem fails.
§ INTRODUCTION
The Smirnov class in the unit disc can be characterized as the space
N^+ = {φ/ψ : φ, ψ∈ H^∞, ψ outer},
where H^∞ denotes the algebra of bounded analytic functions on .
Alternatively, it is the space of analytic functions on for which the functions
θ↦log ( 1+ | f(re^iθ) | )
are uniformly integrable over [0,2 π] for all 0 < r < 1.
The class occurs frequently in function theory <cit.>, and has interesting connections to operator theory — see, e.g. <cit.>. It contains H^p for all p > 0.
The algebra H^∞ is the multiplier algebra of the Hardy space H^2,
which is the Hilbert function space on which has the kernel
s(z,w) = 1/(1- z w̄)
as its reproducing kernel.
Many interesting properties of H^∞ and H^2 can be shown to follow from the fact that the kernel is a complete Pick kernel — see the book <cit.> for a comprehensive account on this topic.
We shall give a definition of complete Pick spaces in Section <ref> below. For now,
we simply mention that in addition to the Hardy space H^2, the Dirichlet space and the
Drury-Arveson space, which has been studied by many different authors over the years
<cit.>,
satisfy this property.
Suppose that is a Hilbert
function space on a set X whose kernel k is normalized at x_0 (this means that k(x,x_0) = 1 for every x).
Let () denote the multiplier algebra of . A multiplier ψ is said to be cyclic if the range of multiplication by ψ is dense in the space.
We define
N^+() = {φ/ψ: φ, ψ∈() and ψ is cyclic}.
(Taking = H^2, we recover the classical Smirnov class.)
Observe that a cyclic multiplier does not vanish anywhere, so the quotient φ/ψ is defined on all of X.
Moreover, since the product of any two cyclic multipliers is cyclic (see Subsection <ref>),
N^+() is an algebra.
Using the inner-outer factorization of functions in H^2, one can show that H^2 ⊂ N^+.
On the other hand, there are functions in the Bergman space L^2_a
on which do not possess boundary radial
limits anywhere on ∂, and hence are not
quotients of two functions in H^∞ = (L^2_a).
Thus, L^2_a is not contained in N^+(L^2_a).
The following result shows that the inclusion ⊂ N^+() is valid
for every complete Pick space with a normalized kernel.
It was shown in the case of the Drury-Arveson space on a finite dimensional ball
by Alpay, Bolotnikov and Kaptanoğlu
<cit.>, but their proof can be extended to general complete Pick spaces.
Let be a
complete Pick space on X whose kernel is normalized at x_0 ∈ X and let f ∈ with ||f||_≤ 1.
Then
there are φ,ψ∈() of multiplier norm at most 1 with ψ(x_0) = 0
such that
f = φ/1 - ψ.
In particular, ⊂ N^+().
We will provide a proof of Theorem <ref> in Section <ref>.
This theorem applies in particular to the Dirichlet space,
which answers a question posed in <cit.> and at the end of <cit.>.
The majority of this note is devoted to exploring consequences of this theorem.
Our first application concerns zero sets.
If S is a class of functions on a set X, then a subset Z of X is called
a zero set for S if there exists a function in S that vanishes exactly on Z.
It is well known that the zero sets for
H^2 and for H^∞ agree and are precisely the Blaschke
sequences, along with the entire disc (see <cit.>).
Zero sets for functions in L^2_a can be much more complicated, for instance,
the union of two zero sets for L^2_a need not be a zero set for L^2_a <cit.>.
On the other hand, it was shown by Marshall and Sundberg <cit.>, see also <cit.>,
that the zero sets for functions and for multipliers in the Dirichlet space agree.
We show that this fact extends to all complete Pick spaces, thereby providing a positive answer to
<cit.> in the case of complete Pick kernels.
The case of Pick kernels which are not necessarily complete Pick kernels remains open.
We say that a reproducing kernel Hilbert space on X is normalized if its kernel is normalized
at some point in X.
Let be a normalized complete Pick space. Then the zero sets for , ()
and N^+() agree. In particular, the union of two zero sets for is a zero set for .
If is a Hilbert function space on X, then it may happen that
all functions in extend uniquely to a larger set.
For instance, we could start with a space of analytic functions on ,
let X = 1/2 and let be the restriction of to X,
so that every function in extends uniquely to an analytic function on .
The notion of partially multiplicative functional (or generalized kernel function) allows
one to find the largest possible domain of definition for the functions in . A non-zero
bounded functional ρ on is said to be partially multiplicative
if ρ(f g) = ρ(f) ρ(g) whenever f,g ∈ such that f g ∈.
Clearly, evaluation at each point of X gives rise to a partially multiplicative
functional on . We say that X is a maximal domain for if conversely, every partially
multiplicative functional is given by evaluation at a point in X (this is called is algebraically
consistent by Cowen and MacCluer, see <cit.>).
It was shown in <cit.> that every Hilbert function space can be considered as a space of functions on the
set of partially multiplicative
functionals, and this set is a maximal domain for .
For more discussion, the reader is referred to
<cit.>, <cit.> and <cit.>.
In the context of normalized complete Pick spaces, there is another notion of partially multiplicative functional,
and we show that the two notions agree. This answers a question that was left open in <cit.>.
Let be a normalized complete Pick space and let ρ be a bounded non-zero functional on .
Then the following are equivalent.
* ρ(φ g) = ρ(φ) ρ(g) for all φ∈() and all g ∈.
* ρ(f g) = ρ(f) ρ(g) whenever f,g ∈ such that f g ∈.
A classical notion in uniform algebras is that of a Gleason part in the maximal ideal space,
see <cit.> and <cit.>. For instance, Gleason parts play an important
role in the study of the maximal ideal space of H^∞, see for example <cit.>.
This notion can be generalized to multiplier algebras.
Given two characters ρ_1,ρ_2 on a multiplier
algebra, we write ρ_1 ∼ρ_2 if ||ρ_1 - ρ_2|| < 2. This defines
an equivalence relation on the maximal ideal space of the multiplier algebra (see Lemma <ref>),
and the equivalence classes are called Gleason parts.
In this context, we prove in Section <ref> the following result.
Let be a normalized complete Pick space on a set X. Then the following are equivalent.
* X is a maximal domain for .
* Every weak-* continuous character on () is given by evaluation at a point in X.
* The characters of evaluation at points in X form a Gleason part in the maximal
ideal space of ().
The equivalence of (i) and (ii) in Proposition <ref> was already observed in <cit.> (using
(i) in Corollary <ref> as the definition of partially multiplicative functional and under the assumption
that separates the points in X).
In Section <ref>
we study complete Pick spaces of continuous functions on compact sets — for example, spaces
of analytic functions on the disc that extend to be continuous on .
In particular, we investigate the validity of a corona theorem in this context.
Carleson's famous corona theorem for H^∞ <cit.> asserts that the open unit disc is dense
in the maximal ideal space of H^∞. This was extended to multiplier algebras
of certain Besov-Sobolev spaces on the unit ball in finite dimensions,
including the multiplier algebra of the Drury-Arveson space, by Costea, Sawyer and Wick <cit.>.
If () consists of continuous functions on a compact set X, then
a corona theorem for () would assert that the maximal ideal space of ()
is equal to the characters of evaluation at points of X.
Let be a
normalized complete Pick space of continuous functions
on a compact set X such that X is a maximal domain for and
such that () separates the points of X. Then the following are equivalent.
* () = as vector spaces.
* The corona theorem holds for (), that is,
the maximal ideal space of ()
is X.
* The one-function corona theorem holds for (), that is, if
φ∈() is non-vanishing, then 1/ φ∈().
As a consequence, using a very interesting example of Salas <cit.>, we exhibit a rotationally invariant
complete Pick space on for which the one-function corona theorem (and hence the full corona theorem) fails.
There exists a complete Pick space on such that
is a maximal domain for with a reproducing kernel
of the form
k(z,w) = ∑_n=0^∞ a_n (z w̄)^n
and such that the one-function corona theorem for () fails. In particular,
is a proper compact subset of the maximal ideal space of ().
In the terminology of Nikolski, the fact that the one-function corona theorem fails for ()
means that the spectrum of () is not 1-visible, see Definition 0.2.1 and Lemma 0.2.2
in <cit.>. We also remark that an example of a multiplier algebra
of functions on a subset of for which the one-function corona theorem fails was already
obtained by Trent <cit.>, but the multiplier algebra of Theorem <ref> is quite different.
Indeed, Trent remarks that the functions in his multiplier algebra
“have no smoothness properties in general” (see the discussion preceding <cit.>),
whereas the multipliers of Theorem <ref> are analytic functions in the unit disc.
§ PRELIMINARIES
§.§ Kernels, multipliers and normalization
Let be a reproducing kernel Hilbert space on a set X with reproducing kernel k.
For background material on reproducing kernel Hilbert spaces, the reader is referred to
<cit.> and <cit.>. We will assume throughout that k does not vanish on the diagonal.
Let
() = {φ: X →: φ f ∈ for all f ∈}
denote the multiplier algebra of . The closed graph theorem shows that
every φ∈() determines a bounded multiplication operator M_φ on ,
and we set ||φ||_() = ||M_φ||.
We say that k (or ) is normalized at x_0 ∈ X
if k(x,x_0) = 1 for all x ∈ X, and we say that k (or ) is normalized
if it is normalized at some point in X.
If k(z,w) ≠ 0 for all z,w ∈ X, then it is always
possible to normalize the kernel at a point (see <cit.>). This procedure multiplies
all functions in by a fixed non-vanishing function and leaves () unchanged.
If k is normalized, then contains in particular the constant function 1 and
||1||_ = 1, so that
() ⊂, and the inclusion is contractive.
§.§ The Pick property
We say that is a complete Pick space
if for every r ∈ and every finite collection of points z_1,…,z_n ∈ X
and matrices W_1,…,W_n ∈ M_r(),
positivity of the nr × nr-block matrix
[ k(z_i,z_j) (I_^r - W_i W_j^*) ]_i,j=1^n
implies that there exists Φ∈ M_r(()) of norm at most 1 such that
Φ(z_i) = W_i (i=1,…,n).
In this setting, we also say that k is a complete Pick kernel.
If is a normalized complete Pick space, then k(z,w) ≠ 0 for all z,w ∈ X
by <cit.>.
Complete Pick spaces were characterized
by a theorem of Quiggin <cit.> and McCullough <cit.>. We require the following
characterization of Agler and McCarthy <cit.>.
Let be a reproducing kernel Hilbert space on X with kernel k which is normalized at x_0 ∈ X.
Then is a complete Pick space if and only if there exists an auxiliary Hilbert space
and a function b: X → with ||b(w)|| < 1 for all w ∈ X, b(x_0) = 0 and
k(z,w) = 1/(1 - ⟨ b(z),b(w) ⟩).
If is separable, then can be chosen to be separable.
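In the simplest case b(w) = w, the kernel above is the Szegő kernel of the Hardy space. The sketch below (a numerical illustration, with an arbitrary Blaschke factor playing the role of a contractive multiplier) verifies that the resulting scalar Pick matrix is positive semidefinite, as the complete Pick property demands.

import numpy as np

def szego(z, w):
    # k(z,w) = 1/(1 - z*conj(w)), i.e. b(w) = w in the theorem above.
    return 1.0 / (1.0 - z * np.conj(w))

rng = np.random.default_rng(1)
# Random points in the unit disc.
z = rng.uniform(0, 0.9, 6) * np.exp(2j * np.pi * rng.uniform(0, 1, 6))

# A contractive multiplier of H^2: a Blaschke factor has multiplier norm 1.
a = 0.5 + 0.2j
phi = (z - a) / (1 - np.conj(a) * z)

# Pick matrix [ k(z_i, z_j) (1 - phi(z_i) conj(phi(z_j))) ].
K = szego(z[:, None], z[None, :])
P = K * (1 - phi[:, None] * np.conj(phi[None, :]))

eigs = np.linalg.eigvalsh((P + P.conj().T) / 2)  # symmetrize for numerics
print("smallest eigenvalue:", eigs.min())        # nonnegative up to rounding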
A comprehensive treatment of complete Pick spaces, as well as examples besides the ones already mentioned
in the introduction, can be found in <cit.>.
§.§ Cyclic multipliers
We say that a multiplier φ of a reproducing kernel Hilbert space
is cyclic if
the multiplication operator M_φ on has dense range.
Observe that φ is cyclic if and only if
(M_φ^*) = {0}.
In particular, the product of two cyclic multipliers is cyclic.
We require the following version of the maximum modulus principle
for Hilbert function spaces with non-vanishing kernels.
Let be a reproducing kernel Hilbert space on X with kernel k such that k(z,w) ≠ 0 for all
z,w ∈ X and let φ∈() with ||φ||_()≤ 1.
If there exists z ∈ X with |φ(z)| = 1, then φ is constant.
By multiplying φ by a complex number of modulus 1, we may assume that φ(z) = 1. Let
w ∈ X be arbitrary. Since ||φ||_()≤ 1, the Pick matrix at {z,w}, which is
[ 0 k(z,w) (1 - \overline{φ(w)}); k(w,z) ( 1 - φ(w)) k(w,w) ( 1 - |φ(w)|^2) ],
is positive semidefinite. Taking its determinant, we see that
0 ≤ - |k(z,w)|^2 |1 - φ(w)|^2.
Since k(z,w) ≠ 0 by assumption, this inequality implies that φ(w) = 1.
Consequently, φ is identically 1.
The following lemma gives a sufficient condition for cyclicity of multipliers, which is probably known.
Let be a reproducing kernel Hilbert space on X with kernel k such that k(z,w) ≠ 0
for all z,w ∈ X.
Let φ∈() with ||φ||_()≤ 1.
If φ≠ 1, then 1 - φ is cyclic.
It suffices to show that M_1 - φ^* = 1 - M_φ^* is injective.
Thus, let h ∈( 1 - M_φ^*). Then M_φ^* h = h. Since M_φ is a contraction,
it follows that M_φ h = h and hence that
(φ - 1) h = 0.
Since φ≠ 1, Lemma <ref> implies that φ - 1 has no zeros on X. Consequently,
h = 0, so that 1 - φ is cyclic.
§ PROOF OF THEOREM <REF> AND FIRST CONSEQUENCES
We now present the proof of Theorem <ref>. For the convenience of the reader,
we restate the result.
Let be a complete Pick space on X whose kernel is normalized at x_0 ∈ X and let f ∈ with ||f||_≤ 1.
Then
there are φ,ψ∈() of multiplier norm at most 1 with ψ(x_0) = 0
such that
f = φ/1 - ψ.
In particular, ⊂ N^+().
By Theorem <ref>, there exists
a Hilbert space and a function b: X →(,) with b(x_0) = 0 such that
k(z,w) = 1/(1- b(z) b(w)^*).
Note that b ∈(⊗, ) has multiplier norm at most 1,
as k(z,w) (1 - b(z) b(w)^*) = 1 is positive definite.
Define Φ: X →( ⊕,) by
Φ(z) = (1, f(z) b(z)).
Then
k(z,w) Φ(z) Φ(w)^* = k(z,w) + f(z) f(w) k(z,w) b(z) b(w)^*
= k(z,w) + f(z) f(w) k(z,w) - f(z) f(w)
and consequently
k(z,w) ( Φ(z) Φ(w)^* - f(z) f(w)) = k(z,w) - f(z) f(w).
Since ||f||_≤ 1 this function is positive definite (see <cit.>). In this setting,
a version of Leech's theorem <cit.>, or more precisely the implication
(i) ⇒ (ii) of <cit.>, shows that there exists
a vector valued multiplier Ψ∈(, ⊗ (⊕)) of norm
at most 1 such that
Φ(z) Ψ(z) = f(z) (z ∈ X).
In the statement of <cit.>, it is assumed that k satisfies an irreducibility
condition that implies that separates the points of X, but the proof shows that it suffices to assume that k is normalized.
Write
Ψ(z) =
[ φ(z); Ψ̃(z) ],
where φ∈() and Ψ̃(z) ∈(, ⊗)
both have norm at most one.
Then
φ(z) + f(z) b(z) Ψ̃(z) = f(z),
so that
f(z) ( 1 - b(z) Ψ̃(z)) = φ(z).
Defining ψ(z) = b(z) Ψ̃(z), we obtain the desired representation of f.
Lemma <ref> implies that 1 - ψ is a cyclic multiplier, which shows
that f ∈ N^+() and hence proves the additional assertion.
Corollary <ref> is now an immediate consequence of the preceding theorem.
Let be a normalized complete Pick space.
Then the zero sets for , ()
and N^+() agree. In particular, the union of two zero sets for is a zero set for .
Observe that
() ⊂⊂ N^+(),
where the first inclusion holds since is normalized, so that 1 ∈,
and the second inclusion follows from Theorem <ref>.
It is immediate from the definition of N^+()
that every zero set for N^+() is a zero set for (),
so that the zero sets for all three spaces agree. Since ()
is an algebra, the union of two zero sets is a zero set.
To obtain equality of the zero sets for and for (), it suffices
to assume that is a complete Pick space whose kernel does not vanish anywhere.
Indeed, in this case, the kernel can be normalized at a point (see <cit.>),
which does not affect the zero sets.
It is also not hard to deduce Corollary <ref> from Theorem <ref>.
Let be a normalized complete Pick space and let ρ be a bounded non-zero functional on .
Then the following are equivalent.
* ρ(φ g) = ρ(φ) ρ(g) for all φ∈() and all g ∈.
* ρ(f g) = ρ(f) ρ(g) whenever f,g ∈ such that f g ∈.
The implication (ii) ⇒ (i) is trivial since () ⊂.
Conversely, assume that (i) holds and let f,g ∈
such that f g ∈. By Theorem <ref>, we may write f = φ /ψ, where φ , ψ∈()
and ψ is cyclic.
We claim that ρ(ψ) ≠ 0.
Indeed, if ρ(ψ) = 0, then
ρ(ψ g) = 0 for all g ∈ and hence ρ = 0, as ψ is cyclic.
Therefore, ρ(ψ) ≠ 0.
Moreover,
ρ(ψ) ρ(f g) = ρ(ψ f g) = ρ( φ g) = ρ(φ) ρ(g) = ρ(ψ f) ρ(g)
= ρ(ψ) ρ(f) ρ(g).
Since ρ(ψ) ≠ 0, it follows that ρ(f g) = ρ(f) ρ(g).
§ GLEASON PARTS
Gleason parts were originally introduced in the context of uniform algebras,
but it is possible to generalize this notion to multiplier algebras of
reproducing kernel Hilbert spaces, and indeed to arbitrary unital operator algebras of functions.
This was observed by Rochberg <cit.> for multiplier algebras of
certain weighted Dirichlet spaces on ,
but his arguments readily generalize. For completeness, we provide the proof of the lemma below.
It is an easy adaptation of <cit.>.
Let be a reproducing kernel Hilbert space and let ρ_1,ρ_2 be characters
on (). Then the following are equivalent.
* ||ρ_1 - ρ_2 || < 2.
* || ρ_1 |_(ρ_2)|| < 1.
* Whenever (φ_n)_n is a sequence in the unit ball of () such that
lim_n →∞ |ρ_1(φ_n)| = 1, then lim_n →∞ |ρ_2(φ_n)| = 1.
In particular, the relation ρ_1 ∼ρ_2 if and only if ||ρ_1 - ρ_2|| < 2
is an equivalence relation.
For a ∈, let θ_a denote the conformal automorphism of defined by
θ_a(z) = (a - z)/(1 - ā z).
Thus, θ_a is an involution which interchanges 0 and a.
We will make repeated use of the following consequence of von Neumann's inequality:
if φ∈() with ||φ||_()≤ 1, then
θ_a ∘φ∈()
with ||θ_a ∘φ||_()≤ 1 for all a ∈.
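As a quick numerical sanity check of these properties of θ_a (an illustration only, not part of the argument):

import numpy as np

def theta(a, z):
    # Conformal automorphism of the disc: theta_a(z) = (a - z)/(1 - conj(a) z).
    return (a - z) / (1 - np.conj(a) * z)

a = 0.3 + 0.4j                       # |a| < 1
z = 0.25 - 0.35j                     # an arbitrary point of the disc

print(np.isclose(theta(a, 0), a))            # theta_a interchanges 0 and a
print(np.isclose(theta(a, a), 0))
print(np.isclose(theta(a, theta(a, z)), z))  # theta_a is an involution
print(abs(theta(a, z)) < 1)                  # the disc is mapped to itself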
(i) ⇒ (ii) Suppose that ||ρ_1 |_(ρ_2)|| =1. Then
there exists a sequence (φ_n) in the unit ball of (ρ_2) such that if t_n = ρ_1(φ_n), then
0 < t_n < 1 for all n ∈ and lim_n →∞ t_n = 1. Let a_n = (1 - √(1 - t_n^2))/t_n. It is straightforward
to check that a_n ∈ (0,1) and that θ_a_n(t_n) = - θ_a_n(0) for all n ∈. Thus,
||ρ_1 - ρ_2|| ≥ |ρ_1 (θ_a_n∘φ_n) - ρ_2( θ_a_n∘φ_n)| =
|θ_a_n (t_n) - θ_a_n (0)|
= 2 |a_n| → 2.
Consequently, ||ρ_1 - ρ_2|| =2.
(ii) ⇒ (iii) Suppose that there exists a sequence (φ_n) in the unit ball
of () such that |ρ_1(φ_n)| tends to 1, but a_n = ρ_2(φ_n) is bounded away
from 1 in modulus. Then θ_a_n∘φ_n belongs to the unit ball of (ρ_2)
and it is easy to see that |ρ_1(θ_a_n∘φ_n)| tends to 1. So ||ρ_1 |_(ρ_2)|| = 1.
(iii) ⇒ (ii) Suppose that ||ρ_1 |_(ρ_2)|| = 1. Then there exists a sequence
(φ_n) in the unit ball of (ρ_2) such that lim_n →∞ |ρ_1(φ_n)| = 1.
Therefore, (iii) fails.
(ii) ⇒ (i) Suppose that ||ρ_1 - ρ_2|| = 2. Then there exists a sequence (φ_n) in the unit
ball of () such that a_n = ρ_1(φ_n) and b_n = ρ_2(φ_n) both belong to and |a_n - b_n|
tends to 2. Then θ_b_n∘φ_n belongs to the unit ball of (ρ_2) and
|ρ_1(θ_b_n∘φ_n)| = |θ_b_n (a_n)| ≥ |a_n - b_n|/2 → 1.
Thus, ||ρ_1 |_(ρ_2)|| = 1.
Finally, it follows from (i) that ∼ is reflexive and symmetric, and it follows from (iii) that ∼ is transitive.
Let be a reproducing kernel Hilbert space on X.
Then the assignment φ↦ M_φ identifies ()
with a weak-* closed subalgebra of B(), hence () is a dual space in its own right,
and we endow it with this weak-* topology.
It is straightforward to check that on bounded subsets of (), the weak-* topology
coincides with the topology of pointwise convergence on X.
We are now in the position to prove a more detailed version of Proposition <ref>.
Let be a complete Pick space on a set X whose kernel is normalized at x_0 ∈ X
and let δ_0 denote the character of evaluation at x_0.
For a character ρ on (), the following assertions are equivalent.
* ρ extends to a bounded partially multiplicative functional on .
* ρ is weak-* continuous.
* ρ belongs to the Gleason part of δ_0.
It follows that the following are equivalent.
* X is a maximal domain for .
* Every weak-* continuous character on () is given by evaluation at a point in X.
* The characters of evaluation at points in X form a Gleason part in the maximal
ideal space of ().
(i) ⇒ (ii) If ρ is a bounded partially multiplicative functional
on which extends ρ, then ρ(φ) = ρ( M_φ 1) for φ∈(),
which shows that ρ is WOT continuous and thus weak-* continuous.
(ii) ⇒ (iii) Suppose that ρ is weak-* continuous and assume
toward a contradiction that ||ρ|_(δ_0)|| = 1.
Since δ_0 is weak-* continuous,
the unit ball of (δ_0) is weak-* compact, so there exists a multiplier
ψ of norm 1 such that ψ(x_0) = 0 and ρ(ψ)=1. It follows
from Lemma <ref> that |ψ(x)| < 1 for all x ∈ X, so the sequence
(ψ^n) converges to zero pointwise, and hence in the weak-* topology.
But ρ(ψ^n) = 1 for all n ∈, contradicting the fact that ρ is weak-* continuous.
Therefore, ||ρ|_(δ_0)|| < 1, so (iii) holds by Lemma <ref>.
(iii) ⇒ (i) Suppose that ρ belongs to the Gleason part of δ_0,
so that α = ||ρ|_(δ_0)|| < 1 by Lemma <ref>.
If f ∈ has a representation f = φ_1 / φ_2, where φ_1 and φ_2
are multipliers and ρ(φ_2) ≠ 0, we define
ρ̃(f) = ρ(φ_1)/ρ(φ_2).
By Theorem <ref>, every f ∈ can be written
as f = φ / (1 - ψ) with ||φ||_()≤ ||f||_ and
ψ belonging to the unit ball of (δ_0).
Since |ρ(ψ)| ≤α < 1 by assumption, we see in particular that every f ∈
admits a representation as above.
Since ρ is a character, ρ̃ is well defined, linear and satisfies
ρ̃(φ f)
= ρ(φ) ρ̃(f) for φ∈() and f ∈.
Moreover, writing f = φ/(1 - ψ) as above,
we obtain the estimate
|ρ̃(f)| ≤ |ρ(φ)|/(1 - |ρ(ψ)|) ≤ 1/(1 - α) ||f||_,
so that ρ̃ is a bounded functional that extends ρ, and ρ̃
is partially multiplicative by Corollary <ref>.
Finally, to deduce the equivalence of (i'), (ii') and (iii') from the equivalence of (i), (ii) and (iii),
it only remains to show that a bounded partially multiplicative functional on
is uniquely determined by its values on (). This can be seen as in the proof of Corollary
<ref>, or alternatively, it follows from the fact that
() is dense in .
Suppose that is a normalized complete Pick space on X and assume for convenience
that separates the points of X. If X is not a maximal
domain for , then Proposition <ref> provides three equivalent ways
of enlarging X to a maximal domain for . There is a fourth equivalent way, which we will now
briefly describe.
For a cardinal number d, let _d denote the open unit ball
in a Hilbert space of dimension d. The Drury-Arveson space H^2_d is the reproducing kernel
Hilbert space on _d with kernel
1/(1 - ⟨ z,w ⟩).
Let b: X →_d be the map of Theorem <ref> and let S = b(X). Then
H^2_d |_S →, f ↦ f ∘ b,
is a unitary operator. Let
I = {φ∈(H^2_d): φ|_S = 0}
and let
V = {z ∈_d: φ(z) = 0 for all φ∈ I }.
(V is an analogue of the Zariski closure from algebraic geometry.)
Tautologically, S ⊂ V, and it was observed by Davidson, Ramsey and Shalit <cit.>
that H^2_d |_S can be identified with H^2_d |_V. More precisely,
every function in H^2_d |_V extends uniquely to a function in H^2_d |_S.
Moreover, <cit.>
and its proof show that the partially multiplicative functionals on H^2_d |_S (and hence on )
precisely correspond to points in V.
In particular, this last description of a maximal domain for shows that remains
a complete Pick space after enlarging the domain X to a maximal one.
§ SPACES ON COMPACT SETS AND THE CORONA THEOREM
In this section, we study spaces of continuous functions on compact sets and prove
Proposition <ref> and Theorem <ref>. We begin with a few preliminaries about
corona theorems.
Let be a reproducing kernel Hilbert space on a set X such that () separates the points of X
and let (()) denote the maximal ideal space of ().
Then X can be identified with a subset of (()) via point evaluations.
We say that the corona theorem holds for ()
if X is dense in (()) in the Gelfand topology.
Gelfand theory shows that the following two statements are equivalent.
* The corona theorem holds for ().
* If φ_1,…,φ_n ∈() such that
inf_x ∈ X∑_j=1^n |φ_j(x)| > 0,
then the ideal generated by φ_1,…,φ_n inside () is all of ().
We say that the one-function corona theorem holds for ()
if whenever φ∈() and inf_x ∈ X |φ(x)| > 0, then 1/φ∈().
Thus, the corona theorem for () implies the one-function corona theorem for ().
It is not hard to see that in the setting above, the one-function corona
theorem holds for () if and only if for every φ∈(),
we have
σ_() (φ) = {φ(x): x ∈ X }.
Here, σ_() denotes the spectrum in the unital Banach algebra ().
Similarly, the corona theorem holds for () if and only if for every n ∈,
we have
σ_() (φ_1,…,φ_n) = { (φ_1(x), …, φ_n(x)): x ∈ X }⊂^n.
If is a normalized complete Pick space, then
we may replace σ_() with other notions of joint spectrum. Indeed,
the Toeplitz corona theorem
(see <cit.>) shows that in this case,
σ_() (φ_1,…,φ_n) = σ_r (M_φ_1,…,M_φ_n)
= σ_T(M_φ_1,…,M_φ_n),
where σ_r and σ_T denote the right spectrum and the Taylor spectrum, respectively
(see <cit.> for a definition and discussion of these notions).
For normalized complete Pick spaces, it is easy to tell from the kernel whether the multiplier algebra
separates the points of the underlying set.
Let be a normalized complete Pick space on X with kernel k. Then the following are equivalent.
* () separates the points of X.
* separates the points of X.
* If z ≠ w, then k(·,z) ≠ k(·,w).
* If z ≠ w, then k(·,z) and k(·,w) are linearly independent.
(i) ⇒ (ii) ⇒ (iii) is trivial, and (iii) ⇒ (iv) follows
from the fact that k is normalized at a point.
Finally, if (iv) holds, then the Cauchy-Schwarz inequality and the Pick property, applied to the points z,w,
show that there exists φ∈() with φ(z) = 0 and φ(w) ≠ 0.
Thus, () separates the points of X if and only if the kernel is irreducible in the strong
sense of <cit.>.
Suppose now that is a normalized complete Pick space which separates the points of X.
When investigating whether the corona theorem holds for (), we wish to exclude
constructions such as the restriction of the Hardy space H^2 to 1/2,
which is really a space on in disguise. Thus, we will typically assume that X
is a maximal domain for .
In this section, we are interested in the case when X is a compact topological space
and the functions in are continuous on X.
Such spaces are easy to construct, as the following class of examples shows.
Let d ∈ and let be a complete Pick space on _d
with reproducing kernel of the form
k(z,w) = ∑_n=0^∞ a_n ⟨ z,w ⟩^n,
where a_0 = 1 and a_n > 0 for n ≥ 1. If ∑_n=0^∞ a_n < ∞,
but the power series ∑_n=0^∞ a_n t^n has radius of convergence 1,
then k extends to a continuous function on _d×_d,
and thus becomes a space of continuous functions on _d in a natural way.
It is not hard to see that _d is a maximal domain for such a space
(see, for example, <cit.>). Moreover,
since a_1 > 0, contains the coordinate functions, so that separates the points of _d.
More concretely, for s ∈, let
k(z,w) = ∑_n=0^∞ (n+1)^s (z w̄)^n (z,w ∈)
and let _s be the reproducing kernel Hilbert space on with kernel k.
This scale of spaces contains in particular the Bergman space (s=1), the Hardy space (s=0)
and the Dirichlet space (s=-1). If s ≤ 0, then is a normalized complete Pick space (see,
for example, <cit.>). If s < -1, then k satisfies the conditions
in the preceding paragraph, hence _s becomes a normalized complete Pick
space of continuous functions on the closed disc, which is a maximal domain for it.
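The coefficient sequences a_n = (n+1)^s can be examined numerically; the sketch below (illustrative only, using a truncated series) checks the log-convexity condition a_n^2 ≤ a_{n-1} a_{n+1} appearing in the lemma of Kaluza used later in the paper, together with the nonnegativity of the Taylor coefficients of 1 - 1/k that this condition guarantees.

import numpy as np

def is_log_convex(a, tol=1e-12):
    # Kaluza condition: a_n^2 <= a_{n-1} * a_{n+1} for all interior n.
    a = np.asarray(a, dtype=float)
    return bool(np.all(a[1:-1] ** 2 <= a[:-2] * a[2:] + tol))

def one_minus_reciprocal_coeffs(a, N):
    # Taylor coefficients b_n of 1 - 1/k(t), where k(t) = sum a_n t^n, a_0 = 1.
    b = np.zeros(N)
    for n in range(1, N):
        b[n] = a[n] - sum(b[m] * a[n - m] for m in range(1, n))
    return b

N = 60
for s in (-2.0, -1.0, 0.0):
    a = (np.arange(N) + 1.0) ** s
    b = one_minus_reciprocal_coeffs(a, N)
    print(f"s = {s:4}: log-convex = {is_log_convex(a)}, "
          f"min coefficient of 1 - 1/k = {b[1:].min():.2e}")

For s = 0 this recovers the Szegő kernel, where 1 - 1/k(t) = t.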
Let be a normalized complete Pick space of continuous functions on a compact
set X which separates the points of X such that X is a maximal domain for .
Then the embedding of X into (()) via point evaluations is a homeomorphism
onto its image, hence X can be identified with a compact
subset of (()). Thus,
the corona theorem holds for () if and only if X = (()).
Moreover, a multiplier
φ∈() is bounded below on X if and only if it is non-vanishing.
Consequently, the one-function corona theorem for () holds if and only if
every non-vanishing multiplier on is invertible.
We are now in the position to prove Proposition <ref>.
Let be a
normalized complete Pick space of continuous functions
on a compact set X which separates the points of X such that X is a maximal domain for .
Then the following are equivalent.
* () = as vector spaces.
* The corona theorem holds for ().
* The one-function corona theorem holds for ().
(i) ⇒ (ii) Suppose that () = as vector spaces. Since the multiplier norm
dominates the norm of , an application of the open mapping theorem shows that
these two norms are in fact equivalent. Thus, if ρ is a character on (),
then ρ is a bounded functional on which is partially multiplicative.
Since X is a maximal domain for , the functional ρ equals evaluation
at a point in X.
(ii) ⇒ (iii) follows from Gelfand theory.
(iii) ⇒ (i) Let f ∈. By Theorem <ref>,
there are φ,ψ∈() with ψ non-vanishing such that
f = φ / ψ. The assumption (iii) implies that 1 / ψ∈(),
thus f ∈(). The reverse inclusion always holds, hence
= () as vector spaces.
The spaces _s of Example <ref>, where s < -1, satisfy
condition (i) of the preceding proposition (see Proposition 31 and Example 1 on page 99 in <cit.>).
Hence, ((_s)) equals the closed unit disc (see also Corollary 1 on page 95 in <cit.>).
We now use an example of Salas <cit.>, which answered <cit.>,
and Proposition <ref> to exhibit a complete Pick space
on for which the one-function corona theorem fails, thereby proving Theorem <ref>.
There exists a complete Pick space on with a reproducing kernel
of the form
k(z,w) = ∑_n=0^∞ a_n (z w̄)^n,
where a_0 = 1, a_n > 0 for all n ∈, lim_n →∞ a_n / a_n+1 = 1 and ∑_n a_n <∞
such that is a maximal domain for and such that
the one-function corona theorem for () fails.
In <cit.>, Salas constructs a weighted shift T on ℓ^2
with weight sequence (w_n) which satisfies
* w_n is decreasing and lim_n →∞ w_n = 1.
* ∑_n=0^∞β(n)^-2 < ∞, where β(n) = ∏_j=0^n-1 w_j.
* T is not strictly cyclic.
The original definition of strict cyclicity can be found in <cit.>;
we shall give an equivalent one.
Define a Hilbert space by
= { f(z) = ∑_n=0^∞ f̂(n) z^n : ||f||^2 = ∑_n=0^∞ |f̂(n)|^2 β(n)^2 < ∞}.
Property (2) implies that is in fact
a reproducing kernel Hilbert space on whose reproducing kernel is given by
k(z,w) = ∑_n=0^∞ a_n (z w̄)^n,
where a_n = β(n)^-2, see <cit.>.
Property (3) is equivalent to saying that () ⊊,
see <cit.>.
We have a_0 = 1,
so k is normalized at 0.
Moreover,
a_n/a_n+1 = β(n+1)^2/β(n)^2 = w_n^2,
hence Property (1) implies that a_n / a_n+1 decreases to 1. An application
of a lemma of Kaluza (see Lemma 7.31 and 7.38 in <cit.>) shows that is a
complete Pick space.
Moreover, we see that the radius of convergence of the power series
∑_n=0^∞ a_n t^n is 1, hence is a
space of continuous functions on , which is a maximal domain for ,
and separates the points of
(see Example <ref>).
Since () ⊊,
the implication (iii) ⇒ (i) of Proposition <ref> shows
that the one-function corona theorem fails for ().
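Properties (1) and (2) of such a weight sequence are easy to explore numerically. The sketch below uses the hypothetical choice w_n = 1 + 1/(n+1) (not Salas's actual weights; in particular, this simple choice is not claimed to reproduce property (3), the failure of strict cyclicity).

import numpy as np

n_max = 200000
n = np.arange(n_max)
w = 1.0 + 1.0 / (n + 1.0)        # decreasing weights with w_n -> 1   (property 1)

# beta(n) = prod_{j < n} w_j, computed stably via logarithms.
log_beta = np.concatenate(([0.0], np.cumsum(np.log(w))))
a = np.exp(-2.0 * log_beta)      # kernel coefficients a_n = beta(n)^(-2)

print("partial sum of a_n :", a.sum())              # converges         (property 2)
print("a_n/a_{n+1}, first terms:", a[:4] / a[1:5])  # equals w_n^2, decreasing to 1
print("a_n/a_{n+1}, tail      :", a[-2] / a[-1])

For this choice β(n) = n+1, so a_n = (n+1)^{-2} and the partial sums approach π^2/6.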
We call a space on as in the statement of Theorem <ref> a Salas space.
We can say slightly more about the maximal ideal space of the multiplier algebra of a Salas space .
It is easy to check (see, for example, <cit.>) that there exists a continuous map
π : (()) →, ρ↦ρ(z).
Since lim_n →∞ a_n / a_n+1 = 1,
it follows from <cit.> that if λ∈, then
π^-1(λ) is the singleton containing the character of evaluation at λ. In particular,
every character on () which is not given by evaluation at a point in
is contained in π^-1(∂).
Since there exists at least one such character by Theorem <ref>,
and since is rotationally invariant, we deduce
that π^-1(λ) contains at least one such character for every λ∈∂.
Moreover, since () has no non-trivial idempotent elements, the Šilov idempotent theorem (see <cit.>) shows that (()) is connected.
Let be a unital Banach algebra of continuous functions on such that the polynomials
form a dense subspace of and such that the maximal ideal space of is equal to .
In <cit.>, the following problem is studied: given δ > 0, does there exist
a constant C(δ) > 0 such that for all f ∈ with ||f||_≤ 1 and
inf_z ∈ |f(z)| ≥δ, the estimate
||f^-1||_≤ C(δ)
holds? The authors of <cit.> obtain a positive answer for rotationally
invariant algebras which satisfy some additional assumptions.
We can use a Salas space to obtain a rotationally invariant algebra for which the question above has
a negative answer. To this end, let be a Salas space and let A() denote the norm
closure of the polynomials inside of (). The map π of Remark <ref>
shows that the maximal ideal space of A() is . By Theorem <ref>,
there exists a multiplier φ in the unit ball of ()
such that δ = inf_z ∈ |φ(z)| > 0, but such that φ is not invertible
inside of (). For r ∈ (0,1), define φ_r(z) = φ(r z). Then each φ_r
is analytic in an open neighborhood of , so since σ_A()(z) =, we conclude that
φ_r ∈ A() for all r ∈ (0,1).
Clearly, |φ_r| is bounded below by δ for all r ∈ (0,1), so
each φ_r is invertible inside of A() by Gelfand theory. Moreover,
a routine application of the Poisson kernel, combined with rotational invariance of ,
shows that ||φ_r||_A()≤ ||φ||_()≤ 1 for all r ∈ (0,1).
We claim that ||φ_r^-1||_A() is not bounded as r → 1. Indeed, suppose otherwise.
Then by weak-* compactness of the closed unit ball of (), the net (φ_r^-1)_r < 1
has a weak-* cluster point ψ∈(). In particular, φ_r^-1 converges to ψ
pointwise on , so that ψ = φ^-1, contradicting the fact that φ is not invertible
inside of ().
If is a Salas space, then the polynomials are not norm dense in (), since the corona
theorem fails for (). However, besides this and Remark <ref>, we know very little
about the size of () or of its maximal ideal space.
Let be a Salas space on .
* Is () separable?
* Is (()) metrizable?
* Is the cardinality of (()) equal to that of the continuum?
Observe that a positive answer to any of these questions implies
a positive answer to the questions below it.
Recall that H^2_d denotes the Drury-Arveson space on _d, the
open unit ball in a Hilbert space of dimension d.
If d = ℵ_0, we simply write _∞ and H^2_∞.
In the case d < ∞, Costea, Sawyer and Wick <cit.> showed that the corona theorem
holds for (H^2_d). Fang and Xia <cit.> provide
a more elementary proof of the one-function corona theorem in this case; an even shorter
proof was found by Richter and Sunkes <cit.>. But none of these proofs
extend to infinite d in a straightforward manner.
Let be a Salas space.
It follows from Theorem <ref> of Agler and McCarthy that ()
can be identified with
{φ|_V: φ∈(H^2_∞) }
for a set V ⊂_∞ (indeed, we can choose V= b(), where b
is the map from Theorem <ref>).
We are not aware of an argument which would show that the failure of the corona
theorem for () implies the failure of the corona theorem for (H^2_∞).
We therefore ask:
Does the corona theorem hold for (H^2_∞)? Does the one-function corona theorem hold for (H^2_∞)?
|
http://arxiv.org/abs/1701.08141v1 | 20170127182955 | Hybrid modeling and prediction of dynamical systems | [
"Franz Hamilton",
"Alun Lloyd",
"Kevin Flores"
] | math.DS | [
"math.DS"
] |
Hybrid Modeling and Prediction of Dynamical Systems
Franz Hamilton1,2*,
Alun Lloyd 1,2,3,
Kevin Flores 1,2,4
1 Department of Mathematics, North Carolina State University, Raleigh, NC, USA
2 Center for Quantitative Sciences in Biomedicine, North Carolina State University, Raleigh, NC, USA
3 Biomathematics Graduate Program, North Carolina State University, Raleigh, NC, USA
4 Center for Research in Scientific Computation, North Carolina State University, Raleigh, NC, USA
* fwhamilt@ncsu.edu
§ ABSTRACT
Scientific analysis often relies on the ability to make accurate predictions of a system's dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model's equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data.
§ AUTHOR SUMMARY
The question of how best to predict the evolution of a dynamical system has received substantial interest in the scientific community. While traditional mechanistic modeling approaches have dominated, data-driven approaches which rely on data to build predictive models have gained increasing popularity. The reality is, both approaches have their drawbacks and limitations. In this article we ask the question of whether or not a hybrid approach to prediction, which combines characteristics of both mechanistic modeling and data-driven modeling, can offer improvements over the standalone methodologies. We analyze the performance of these methods in two model systems and then evaluate them on experimentally collected population data.
§ INTRODUCTION
Parametric modeling involves defining an underlying set of mechanistic equations which describe a system's dynamics. These mechanistic models often contain a number of unknown parameters as well as an uncertain state, both of which need to be quantified prior to use of the model for prediction. The success of parametric prediction is tied closely to the ability to construct accurate estimates of the model parameters and state. This can be particularly challenging in high dimensional estimation problems as well as in chaotic systems <cit.>. Additionally, there is often a degree of model error, or a discrepancy between the structure of the model and that of the system, further complicating the estimation process and hindering prediction accuracy.
Despite these potential issues, mechanistic models are frequently utilized in data analysis. The question we aim to address is when is it advantageous to use them? Under suitable conditions where model error is relatively small and parameters can be reliably estimated, parametric predictions can provide a great deal of accuracy. However, as we will see in the subsequent examples, a large uncertainty in the initial parameter values often leads to inaccurate estimates resulting in poor model-based predictions.
An alternative approach to modeling and prediction abandons the use of any mechanistic equations, instead relying on predictive models built from data. These nonparametric methods have received considerable attention, in particular those methods based on Takens' delay-coordinate method for attractor reconstruction <cit.>. The success of nonparametric methods is strongly influenced by the amount of data available as well as the dimension of the dynamical system. If only a sparse amount of training data is available, the result is often inaccurate predictions due to the lack of suitable nearby neighbors in delay-coordinate space. Furthermore, as the dimension and complexity of the dynamical system increases, nonparametric prediction becomes significantly more difficult due to the necessary data requirements <cit.>.
Several recent works have investigated the situation where only a portion of a mechanistic model is known <cit.>. Our motivation here though is to explore how best to use a full mechanistic model when it is available. We consider a hybrid methodology to modeling and prediction that combines the complementary features of both parametric and nonparametric methods. In our proposed hybrid method, a subset of a mechanistic model's equations are replaced by nonparametric evolution. These nonparametrically advanced variables are then incorporated into the remaining mechanistic equations during the data fitting and prediction process. The result of this approach is a more robust estimation of model parameters as well as an improvement in short-term prediction accuracy when initial parameter uncertainty is large.
The utility of this method is demonstrated in several example systems. The assumption throughout is that noisy training data from a system are available as well as a mechanistic model that describes the underlying dynamics. However, several of the model parameters are unknown and the model state is uncertain due to the noisy measurements. The goal is to make accurate predictions of the system state up to some forecast horizon beyond the end of the training data. We compare the prediction accuracy of the standard parametric and nonparametric methodologies with the novel hybrid method presented here.
We begin our analysis by examining prediction in the classical Lorenz-63 system <cit.>, which exhibits chaotic dynamics. Motivated by the success of the hybrid method in the Lorenz-63 system, we consider a more sophisticated example of predicting the spiking dynamics of a neuron in a network of Hindmarsh-Rose <cit.> cells. Finally, we examine the prediction problem in a well-known experimental dataset from beetle population dynamics <cit.>.
§ MATERIALS AND METHODS
The assumption throughout is that a set of noisy data is available over the time interval [t(0),t(T)]. This is referred to as the training data of the system. Using these training data, the question is how best to predict the system dynamics over the interval [t(T+1),t(T+T_F)], known as the prediction interval. Standard parametric and nonparametric methods are presented before our discussion of the novel hybrid method which blends the two approaches.
§.§ Parametric Modeling and Prediction
When a full set of mechanistic equations is used for modeling and prediction, we refer to this as the parametric approach. Assume a general nonlinear system of the form
𝐱(k+1) = 𝐟(t(k),𝐱(k),𝐩)+𝐰(k)
𝐲(k) = 𝐡(t(k),𝐱(k),𝐩)+𝐯(k)
where x= [x_1,x_2,…,x_n]^ T is an n-dimensional vector of model state variables and p = [p_1,p_2,…,p_l]^ T is an l-dimensional vector of model parameters which may be known from first principles, partially known or completely unknown. f represents our system dynamics which describe the evolution of the state 𝐱 over time and h is an observation function which maps x to an m-dimensional vector of model observations, y = [y_1,y_2,…,y_m]^ T. To simplify the description of our analysis, we assume that the training data maps directly to some subset of 𝐱. w(k) and v(k) are assumed to be mean 0 Gaussian noise terms with covariances 𝐐 and 𝐑 respectively. While discrete notation is used in Eq. <ref> for notational convenience, the evolution of x is often described by continuous-time systems. In this situation numerical solvers, such as Runge-Kutta or Adams-Moulton methods, are used to obtain solutions to the continuous-time system at discrete time points.
When the state of a system is uncertain due to noisy or incomplete observations, nonlinear Kalman filtering can be used for state estimation <cit.>. Here we choose the unscented Kalman filter (UKF), which approximates the propagation of the mean and covariance of a random variable through a nonlinear function using a deterministic ensemble selected through the unscented transformation <cit.>. We initialize the filter with state vector 𝐱^+(0) and covariance matrix 𝐏^+(0). At the kth step of the filter there is an estimate of the state 𝐱^+(k-1) and the covariance matrix 𝐏^+(k-1). In the UKF, the singular value decomposition is used to find the square root of the matrix 𝐏^+(k-1), which is used to form an ensemble of 2n+1 state vectors.
The model 𝐟 is applied to the ensemble, advancing it forward one time step, and then observed with 𝐡. The weighted average of the resulting state ensemble gives the prior state estimate 𝐱^-(k) and the weighted average of the observed ensemble is the model-predicted observation 𝐲^-(k). Covariance matrices 𝐏^-(k) and 𝐏^𝐲(k) of the resulting state and observed ensemble, and the cross-covariance matrix 𝐏^𝐱𝐲(k) between the state and observed ensembles, are formed and the equations
𝐊(k) = 𝐏^𝐱𝐲(k)(𝐏^𝐲(k))^-1
𝐏^+(k) = 𝐏^-(k)-𝐏^xy(k)(𝐏^y(k))^-1𝐏^yx(k)
𝐱^+(k) = 𝐱^-(k)+𝐊(k)(𝐲(k)-𝐲^-(k))
are used to update the state and covariance estimates with the observation 𝐲(k).
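To make the filter recursion concrete, the following minimal numpy sketch implements one predict/update cycle. The function name, the κ = 0 sigma-point weighting, and the symmetric SVD square root are illustrative choices on our part, not a prescription of the text.

```python
import numpy as np

def ukf_step(x, P, f, h, y, Q, R):
    """One UKF predict/update cycle: takes the posterior (x, P) from step
    k-1 and the observation y at step k, returns the posterior at step k."""
    n = len(x)
    # Symmetric square root of P via SVD (S @ S = P for symmetric PSD P)
    U, s, Vt = np.linalg.svd(P)
    S = U @ np.diag(np.sqrt(s)) @ Vt
    # 2n+1 sigma points; kappa = 0 unscented weighting (center weight zero)
    sigmas = np.vstack([x, x + np.sqrt(n) * S, x - np.sqrt(n) * S])
    w = np.concatenate([[0.0], np.full(2 * n, 1.0 / (2 * n))])
    # Propagate the ensemble through the model and the observation function
    X = np.array([f(p) for p in sigmas])
    Y = np.array([h(p) for p in X])
    x_prior, y_prior = w @ X, w @ Y
    dX, dY = X - x_prior, Y - y_prior
    Px = dX.T @ (w[:, None] * dX) + Q      # prior state covariance P^-(k)
    Py = dY.T @ (w[:, None] * dY) + R      # observed-ensemble covariance P^y(k)
    Pxy = dX.T @ (w[:, None] * dY)         # cross covariance P^xy(k)
    K = Pxy @ np.linalg.inv(Py)            # Kalman gain K(k)
    return x_prior + K @ (y - y_prior), Px - K @ Py @ K.T
```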
The UKF algorithm described above can be extended to include the joint estimation problem allowing for parameter estimation. In this framework, the parameters 𝐩 are considered as auxiliary state variables with trivial dynamics, namely 𝐩_k+1 = 𝐩_k. An augmented n+l dimensional state vector can then be formed consisting of the original n state variables and l model parameters allowing for simultaneous state and parameter estimation <cit.>.
To implement parametric prediction, the UKF is used to process the training data and obtain an estimate of 𝐩, as well as the state at the end of the training set, 𝐱(T). The parameter values are fixed and Eq. <ref> is forward solved from t(T) to generate predictions of the system dynamics over the prediction interval [t(T+1),t(T+T_F)]. Namely, predictions x(T+1), x(T+2), …, x(T+T_F) are calculated.
§.§ Takens' Method for Nonparametric Prediction
Instead of using the mechanistic model described by Eq. <ref>, the system can be represented nonparametrically. Without loss of generality consider the observed variable x_j. Using Takens' theorem <cit.>, the d+1 dimensional delay coordinate vector x_j^d(T) = [x_j(T), x_j(T-τ), x_j(T-2τ), …, x_j(T-dτ)] is formed which represents the state of the system at time t(T). Here d is the number of delays and τ is the time-delay.
The goal of nonparametric prediction is to utilize the training data in the interval [t(0),t(T) ] to build local models for predicting the dynamics over the interval [t(T+1),t(T+T_F)]. Here, the method of direct prediction is chosen. Prior to implementation of the direct prediction, a library of delay vectors is formed from the training data of x_j.
Direct prediction begins by finding the κ nearest neighbors, as a function of Euclidean distance, to the current delay-coordinate vector x_j^d(T) within the library of delay vectors. Neighboring delay vectors
x_j^d(T') = [x_j(T'), x_j(T'-τ), x_j(T'-2τ), …, x_j(T'-dτ)]
x_j^d(T”) = [x_j(T”), x_j(T”-τ), x_j(T”-2τ), …, x_j(T”-dτ)]
⋮
x_j^d(T^κ) = [x_j(T^κ), x_j(T^κ-τ), x_j(T^κ-2τ), …, x_j(T^κ-dτ)]
are found within the training data and the known x_j(T'+i), x_j(T”+i), …, x_j(T^κ+i) points are used in a local model to predict the unknown value x_j(T+i) where i = 1, 2, …, T_F. In this article, a locally constant model is chosen
x_j(T+i) ≈ w_j'x_j(T'+i) + w_j”x_j(T”+i) + … + w_j^κx_j(T^κ+i)
where w_j', w_j”, …, w_j^κ are the weights for the j^th state that determine the contribution of each neighbor in building the prediction. In its simplest form, Eq. <ref> is an average of the nearest neighbors where w_j' = w_j” = … = w_j^κ = 1/κ. More sophisticated weighting schemes can be chosen, for example assigning the weights based on the Euclidean distance from each neighbor to the current delay vector <cit.>. Selection of values for d, τ and κ is necessary for implementation of the direct prediction algorithm. These values were optimized, within each example, to give the lowest prediction error (results not shown).
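As a concrete illustration of the locally constant predictor with equal weights 1/κ, the sketch below builds the delay-vector library, finds the κ nearest neighbors of the current delay vector, and averages their futures over the forecast horizon; the function name and interface are hypothetical.

```python
import numpy as np

def direct_predict(series, d, tau, kappa, horizon):
    """Locally constant direct prediction with equal weights 1/kappa."""
    series = np.asarray(series)
    T = len(series) - 1                        # index of last training point
    lags = np.arange(d + 1) * tau              # 0, tau, 2*tau, ..., d*tau
    idx = np.arange(d * tau, T - horizon + 1)  # times with full past and future
    library = np.array([series[i - lags] for i in idx])
    query = series[T - lags]                   # current delay vector x_j^d(T)
    dists = np.linalg.norm(library - query, axis=1)
    nbrs = idx[np.argsort(dists)[:kappa]]      # kappa nearest neighbors
    # Average the neighbors' futures, one horizon step at a time
    return np.array([series[nbrs + i].mean() for i in range(1, horizon + 1)])
```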
The accuracy of the predicted x_j(T+i) is subject to several factors. The presence of noise in the training data plays a substantial role in decreasing prediction accuracy. However, recent advancements in nonparametric analysis have addressed the problem of filtering time series without use of a mechanistic model. In <cit.>, a nonparametric filter was developed which merged Kalman filtering theory and Takens' method. The resulting Kalman-Takens filter was demonstrated to be able to reduce significant amounts of noise in data. Application of the method was extended in <cit.> to the case of filtering stochastic variables without a model. In the results presented below, the training data used for nonparametric prediction are filtered first using the method of <cit.>.
§.§ Hybrid Modeling and Prediction: Merging Parametric and Nonparametric Methods
As an alternative to the parametric and nonparametric methods described above, we propose a hybrid approach which blends the two methods together. In this framework, we assume that a full mechanistic model as described by Eq. <ref> is available. However, rather than using the full model, a subset of the mechanistic equations is used and the remaining variables are represented nonparametrically using delay coordinates.
In formulating this method it is convenient to first think of Eq. <ref> without vector notation
x_1(k+1) = f_1(t(k), x_1(k), x_2(k), …, x_n(k), p_1, p_2, …, p_l)
x_2(k+1) = f_2(t(k), x_1(k), x_2(k), …, x_n(k), p_1, p_2, …, p_l)
⋮
x_n(k+1) = f_n(t(k), x_1(k), x_2(k), …, x_n(k), p_1, p_2, …, p_l)
Now assume only the first n-1 equations of Eq. <ref> are used to model state variables x_1,x_2,…,x_n-1, while x_n is described nonparametrically
x_1(k+1) = f_1(t(k), x_1(k), x_2(k), …, x_n-1(k), x_n(k), p_1, p_2, …, p_l)
x_2(k+1) = f_2(t(k), x_1(k), x_2(k), …, x_n-1(k), x_n(k), p_1, p_2, …, p_l)
⋮
x_n-1(k+1) = f_n-1(t(k), x_1(k), x_2(k), …, x_n-1(k), x_n(k), p_1, p_2, …, p_l)
x_n(k+1) ≈ w_n'x̃_n(T'+k+1) + w_n”x̃_n(T”+k+1) + … + w_n^κx̃_n(T^κ+k+1)
We refer to Eq. <ref> as the hybrid model. Note, in Eq. <ref> only x_n is assumed to be advanced nonparametrically. This is done purely for ease of presentation and the hybrid model can instead contain several variables whose equations are replaced by nonparametric advancement.
The hybrid model has several distinguishing features. Notice, in this framework nonparametrically advanced dynamics are incorporated into mechanistic equations, essentially merging the two lines of mathematical thought. Furthermore, equations for state variables within Eq. <ref> can be replaced only if there are observations which map directly to them, otherwise their dynamics can not be nonparametrically advanced. Finally, the process of replacing equations in the hybrid method will generally result in a reduction in the number of unknown model parameters to be estimated.
In this hybrid scheme, obtaining an estimate of the unknown parameters in the n-1 mechanistic equations and an estimate of x(T) requires a combination of the nonparametric analysis developed in <cit.> and traditional parametric methodology. The state variable x_n, which is not defined by a mechanistic equation in Eq. <ref>, is represented by delay coordinates within the UKF. Therefore at step k we have the hybrid state
𝐱^ H(k) = [x_1(k), x_2(k), …, x_n-1(k), x_n(k), x_n(k-τ), x_n(k-2τ), …, x_n(k-dτ)]^ T
The UKF as described above is implemented with this hybrid state 𝐱^ H(k) and the model described by Eq. <ref>. Notice that in the case of the hybrid model when we have to advance the state dynamics and form the prior estimate in the UKF, the advancement is done parametrically for the first n-1 states and nonparametrically for the n^th state. Similarly to before, we can augment 𝐱^ H with the unknown parameters in the n-1 mechanistic equations allowing for simultaneous parameter estimation.
Once the training data are processed and an estimate of 𝐱^ H(T) and the parameters are obtained, the hybrid model in Eq. <ref> is implemented to generate predictions 𝐱^ H(T+1), 𝐱^ H(T+2),…, 𝐱^ H(T+T_F).
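A minimal sketch of one forward step of this scheme is given below, assuming for simplicity a delay of one sample; the function names and the division of labor between the mechanistic map `f_mech` and the nonparametric one-step predictor `np_advance` are our own illustrative conventions.

```python
import numpy as np

def hybrid_step(x_mech, x_n_delays, f_mech, np_advance, params):
    """One forward step of the hybrid model: the first n-1 states advance
    through the mechanistic equations while x_n advances nonparametrically.

    x_mech     : current values of the mechanistic states x_1, ..., x_{n-1}
    x_n_delays : delay vector [x_n(k), x_n(k-tau), ..., x_n(k-d*tau)]
    f_mech     : map (x_mech, x_n, params) -> next x_mech (first n-1 equations)
    np_advance : one-step nonparametric predictor for x_n, e.g. the
                 neighbor-averaging model sketched earlier
    """
    x_n = x_n_delays[0]                          # most recent x_n value
    x_mech_next = f_mech(x_mech, x_n, params)    # parametric advancement
    x_n_next = np_advance(x_n_delays)            # nonparametric advancement
    # Shift the delay coordinates (tau = one sample): newest in, oldest out
    x_n_delays_next = np.concatenate([[x_n_next], x_n_delays[:-1]])
    return x_mech_next, x_n_delays_next
```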
§ RESULTS
We demonstrate the utility of the hybrid methodology, with comparison to standard parametric and nonparametric modeling and prediction, in the following example systems. When conducting this analysis, two types of error are considered. The first, error in the observations, manifests itself as noise in the training data which all three methods will have to confront. The second type, error in the parameters, takes the form of an uncertainty in the initial parameter values used by the UKF for parameter estimation. Only the parametric and hybrid methods will have to deal with this parameter error. Throughout, we will refer to a percentage uncertainty which corresponds to the standard deviation of the distribution from which the initial parameter value is drawn relative to the mean. For example, if the true value for a parameter p_1 is 12 and we have 50% uncertainty in this value, then the initial parameter value used for estimating p_1 will be drawn from the distribution N(12,(0.5*12)^2).
To quantify prediction accuracy, the standardized root-mean-square error, or SRMSE, is calculated for each prediction method as a function of forecast horizon. Normalization is done with respect to the standard deviation of the variable as calculated from the training data. In using the SRMSE metric, the goal is to be more accurate than if the prediction were simply the mean of the training data (corresponding to SRMSE = 1). Thus a prediction is better than a naive prediction when SRMSE < 1, though for chaotic systems prediction accuracy will eventually converge to this error level since only short-term prediction is possible.
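In code, the metric is a two-liner; the function name is ours.

```python
import numpy as np

def srmse(pred, truth, train):
    """Root-mean-square prediction error normalized by the standard
    deviation of the training data; SRMSE = 1 matches a naive forecast
    that always predicts the training mean."""
    err = np.asarray(pred) - np.asarray(truth)
    return np.sqrt(np.mean(err ** 2)) / np.std(train)
```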
§.§ Prediction in the Lorenz-63 System
As a demonstrative example, consider the Lorenz-63 system <cit.>
ẋ = σ(y-x)
ẏ = x(ρ-z)-y
ż = xy-β z
where σ = 10, ρ = 28, β = 8/3. Data are generated from this system using a fourth-order Adams-Moulton method with sample rate h = 0.05. We assume that 500 training data points of the x, y and z variables are available, or 25 units of time. The Lorenz-63 system oscillates approximately once every unit of time, meaning the training set consists of about 25 oscillations. The goal is to accurately predict the dynamics of x, y and z one time unit after the end of the training set. However, the observations of each variable are corrupted by Gaussian observational noise with mean zero and variance equal to 4. Additionally the true value of parameters σ, ρ and β are unknown. Fig. <ref> shows an example simulation of this system.
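A sketch of how such a training set can be generated is shown below; we substitute scipy's general-purpose integrator for the Adams-Moulton scheme used in the text, and the initial condition and random seed are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
h, N = 0.05, 500                                  # sample rate, training length

def lorenz(t, s):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_eval = np.arange(N) * h
sol = solve_ivp(lorenz, (0.0, t_eval[-1]), [1.0, 1.0, 1.0],
                t_eval=t_eval, rtol=1e-9, atol=1e-9)
rng = np.random.default_rng(0)
train = sol.y.T + rng.normal(0.0, 2.0, (N, 3))    # noise variance 4 (std 2)
```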
The parametric method utilizes Eq. <ref> to estimate the model state and parameters, and to predict the x, y and z dynamics. For the nonparametric method, delay coordinates of the variables are formed with d = 9 and τ = 1. The local constant model for prediction is built using κ = 20 nearest neighbors. For the hybrid method, the mechanistic equation governing the dynamics of y are replaced nonparametrically resulting in the reduced Lorenz-63 model
ẋ = σ(y-x)
ż = xy-β z
Note, the hybrid model does not require estimation of the ρ parameter since the mechanistic equation for y is removed.
Fig. <ref> shows a comparison of parametric (black), nonparametric (blue) and hybrid (red) prediction error as a function of forecast horizon; SRMSE results are averaged over 500 system realizations. Various parameter uncertainty levels are shown: 80% uncertainty (solid lines), 50% uncertainty (dashed-dotted lines) and 20% uncertainty (dashed line). The hybrid method with 80% uncertainty offers improved short-term prediction of the Lorenz-63 x (Fig. <ref>a) and z (Fig. <ref>c) variables over standalone nonparametric prediction as well as parametric prediction with 80% uncertainty. Hybrid and nonparametric prediction of y (Fig. <ref>b) are comparable, which is to be expected since the hybrid approach uses nonparametric advancement of y in its formulation. Note that parametric prediction at this uncertainty level does very poorly, and in the cases of y and z its result is not shown due to the scale of the error. As the uncertainty decreases for parametric prediction, its performance improves. However, hybrid prediction with 80% uncertainty still outperforms parametric prediction with 50% uncertainty in the short term. At a small uncertainty level, parametric prediction outperforms both hybrid and nonparametric methods, which is to be expected since it has access to the true model equations and starts out with close to optimal parameter values.
The success of the hybrid method at higher uncertainty levels can be traced to more accurate estimates of the model parameters in the mechanistic equations that it uses. Table <ref> shows the resulting hybrid and parametric estimation of the Lorenz-63 parameters. The hybrid method with 80% uncertainty is able to construct accurate estimates of both σ and β, with a mean close to the true value and a small standard deviation of the estimates. The parametric method with 80% and 50% uncertainty is unable to obtain reliable estimates, exemplified by the large standard deviation of the estimates. Only when the parametric method has a relatively small uncertainty of 20% is it able to accurately estimate the system parameters.
§.§ Predicting Neuronal Network Dynamics
We now consider the difficult high dimensional estimation and prediction problem posed by neuronal network studies. If we are only interested in predicting a portion of the network, then we can use the proposed hybrid method to refine our estimation and prediction while simultaneously reducing estimation complexity. As an example
of this potential network application we consider the prediction of spiking dynamics in a network of M Hindmarsh-Rose neurons <cit.>
ẋ_i = y_i - a_i x_i^3 + b_i x_i^2 - z_i + 1.2 + ∑_m ≠ i^M β_im x_m/(1+9e^-10x_m)
ẏ_i = 1-c_ix_i^2
ż_i = 5× 10^-5[4(x_i-(-8/5))-z_i ]
where i = 1, 2, …, M. x_i corresponds to the spiking potential while y_i and z_i describe the fast and slow-scale dynamics, respectively, of neuron i. Each individual neuron in the network has parameters a_i = 1, b_i = 3 and c_i = 5 which are assumed to be unknown. β_im represents the connectivity coefficient from neuron i to neuron m. For a network of size M, we have M^2-M possible connection parameters since neuron self connections are not allowed (i.e. β_ii = 0). These connection parameters are also assumed to be unknown.
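A vectorized right-hand side for this network might look as follows; the stacking order of the state vector is our own convention.

```python
import numpy as np

def hindmarsh_rose_rhs(t, s, a, b, c, beta):
    """Right-hand side of the M-neuron Hindmarsh-Rose network. The state s
    stacks [x_1..x_M, y_1..y_M, z_1..z_M]; beta is the M x M connectivity
    matrix with zero diagonal (no self-connections)."""
    M = len(a)
    x, y, z = s[:M], s[M:2 * M], s[2 * M:]
    coupling = beta @ (x / (1.0 + 9.0 * np.exp(-10.0 * x)))  # sigmoidal input
    dx = y - a * x ** 3 + b * x ** 2 - z + 1.2 + coupling
    dy = 1.0 - c * x ** 2
    dz = 5e-5 * (4.0 * (x + 8.0 / 5.0) - z)
    return np.concatenate([dx, dy, dz])
```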
For this example we examine networks of size M = 3 with 5 random connections. Data from these networks are generated using a fourth-order Adams-Moulton method with sample rate h = 0.08 ms. We assume that the training data consists of 3000 observations, or 240 ms, of the x_1, x_2,x_3 variables each of which are corrupted by Gaussian noise with mean 0 and variance of 0.2. Under the stated parameter regime, the neurons in the network spike approximately every 6 ms, meaning our training set has on average around 40 spikes per neuron. In this example, we restrict our focus to predicting 8 ms of the x_3 variable (though a similar analysis follows for the prediction of x_1 and x_2). Fig. <ref>a shows a representative realization of this problem. Given our interest in x_3, the hybrid method only assumes a mechanistic equation for neuron 3
ẋ_3 = y_3 - a_3 x_3^3 + b_3 x_3^2 - z_3 + 1.2 + ∑_m ≠ 3^M β_3m x_m/(1+9e^-10x_m)
ẏ_3 = 1-c_3x_3^2
ż_3 = 5× 10^-5[4(x_3-(-8/5))-z_3 ]
and nonparametrically represents neuron 1 and neuron 2.
Fig. <ref>b shows the resulting accuracy in predicting x_3 when using parametric (black), nonparametric (blue) and hybrid (red) methods with 80% (solid line) and 50% (dashed-dotted line) uncertainty in parameter values. The parametric approach uses the full mechanistic model described by Eq. <ref> for modeling and prediction, requiring estimation of the x,y and z state variables and parameters a,b and c for each neuron, as well as the full connectivity matrix. Notice that once again with 80% uncertainty, the scale of error for the parametric method is much larger compared to the other methods. Only with 50% uncertainty is the parametric method able to provide reliable predictions of x_3. Note that unlike in the Lorenz-63 example, we do not consider the parametric method with 20% uncertainty since reasonable parameter estimates and predictions are obtained with 50% uncertainty. The nonparametric method (τ = 1, d = 9) uses κ = 10 neighbors for building the local model for prediction. Again we observe that the hybrid method, even with a large parameter uncertainty of 80%, provides accurate predictions of x_3 compared to the other methods. Table <ref> shows the robustness of the hybrid method in estimating the individual parameters for neuron 3.
§.§ Predicting Flour Beetle Population Dynamics
We now investigate the prediction problem in a well-known data set from an ecological study involving the cannibalistic red flour beetle Tribolium castaneum. In <cit.>, the authors present experimentally collected data and a mechanistic model describing the life cycle dynamics of T. castaneum. Their discrete time model describing the progression of the beetle through the larvae, pupae, and adult stages is given by
L(t+1) = bA(t) e^(-c_el L(t) - c_ea A(t))
P(t+1) = L(t)(1-μ_l)
A(t+1) = P(t) e^-c_paA(t)+A(t)(1-μ_a)
where L, P and A correspond to larvae, pupae and adult populations, respectively. The essential interactions described by this model are (i) flour beetles become reproductive only in the adult stage, (ii) adults produce new larvae, (iii) adults and larvae can both cannibalize larvae, and (iv) adults cannibalize pupae. We note that since Eq. <ref> only approximates the life cycle dynamics of the beetle, there is a degree of model error in the proposed system, unlike the previous examples.
The authors of <cit.> experimentally set the adult mortality rate (μ_a) to 0.96 and the recruitment rate (c_pa) from pupae to adult to seven different values (0, 0.05, 0.10, 0.25, 0.35, 0.50, 1.0). Experiments at each recruitment rate value were replicated three times resulting in 21 different datasets. Each dataset consists of total numbers of larvae, pupae, and adults measured bi-weekly over 82 weeks resulting in 41 measurements for each life stage. These data were fit to Eq. <ref> in <cit.> and parameter estimates b = 6.598, c_el = 1.209 × 10^-2, c_ea = 1.155 × 10^-2 and μ_l = 0.2055 were obtained. We treat these parameter values as ground truth when considering the different parameter uncertainty levels for fitting the data to the model.
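Simulating the fitted model forward is straightforward; in the sketch below the initial populations are illustrative, not the experimental values.

```python
import numpy as np

def lpa_step(L, P, A, b, c_el, c_ea, c_pa, mu_l, mu_a):
    """One bi-weekly step of the larvae-pupae-adult (LPA) map."""
    L_next = b * A * np.exp(-c_el * L - c_ea * A)
    P_next = L * (1.0 - mu_l)
    A_next = P * np.exp(-c_pa * A) + A * (1.0 - mu_a)
    return L_next, P_next, A_next

pars = dict(b=6.598, c_el=1.209e-2, c_ea=1.155e-2, mu_l=0.2055,
            mu_a=0.96, c_pa=0.35)       # c_pa is set per experimental treatment
trajectory = [(250.0, 5.0, 100.0)]      # illustrative initial (L, P, A)
for _ in range(40):                     # 41 bi-weekly measurements in total
    trajectory.append(lpa_step(*trajectory[-1], **pars))
```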
In our analysis of this system, we treat the first 37 measurements (or 74 weeks) within an experiment as training data and use the remaining 4 time points (or 8 weeks) for forecast evaluation. Fig. <ref> shows an example of this setup for a representative dataset. Fig. <ref> shows the results of predicting the larvae (Fig. <ref>a), pupae (Fig. <ref>b) and adult (Fig. <ref>c) populations using parametric (black), nonparametric (blue) and hybrid prediction methods with 80% (solid line) and 50% (dashed-dotted line) parameter uncertainty levels. Error bars correspond to the standard error over the 21 datasets. The parametric method uses the full mechanistic model described in Eq. <ref> to estimate the population state and parameters b, c_el, c_ea and μ_l before prediction. We note in Fig. <ref> that the parametric method with 80% uncertainty is not shown due to the scale of the error, and is significantly outperformed by the nonparametric prediction (τ = 1, d = 2, κ = 5). For the hybrid method, we only consider the mechanistic equations for pupae and adult population dynamics
P(t+1) = L(t)(1-μ_l)
A(t+1) = P(t) e^-c_paA(t)+A(t)(1-μ_a)
and nonparametrically represent larvae. Hybrid prediction with 80% uncertainty outperforms both nonparametric and parametric with 80% uncertainty for pupae and adult population levels, and is comparable to parametric with 50% uncertainty.
§ CONCLUSION
By blending characteristics of parametric and nonparametric methodologies, the proposed hybrid method for modeling and prediction offers several advantages over standalone methods. From the perspective of model fitting and the required parameter estimation that arises in this process, we have shown that the hybrid approach allows for a more robust estimation of model parameters. Particularly for situations where there is a large uncertainty in the true parameter values, the hybrid method is able to construct accurate estimates of model parameters when the standard parametric model fitting fails to do so. At first this may seem counter-intuitive, but in fact it is not that surprising. The replacement of mechanistic equations with their nonparametric representations in effect reduces the dimension of the parameter space that we have to optimize in, resulting in better parameter estimates. As we have demonstrated in the above examples, this refinement in the parameter estimates leads to an improvement in short-term prediction accuracy.
The limitations of the hybrid method are similar to those of parametric and nonparametric methods in that if not enough training data are available then accurate estimation and prediction becomes difficult. However, the demonstrated robustness of the hybrid method to large parameter uncertainty is encouraging, particularly when considering experimental situations where we may not have a good prior estimate of the model parameters. One could consider implementing the hybrid method in an iterative fashion, estimating the parameters of each equation separately, then piecing the model back together for prediction. We can think of this as an iterative hybrid method; it is the subject of future work.
We view this work as complementary to recent publications on forecasting <cit.>. The authors of <cit.> advocate nonparametric methods over parametric methods in general, while a letter <cit.> addressing the work of <cit.> showed that a more sophisticated method for model fitting results in better parameter estimates and therefore model-based predictions which outperform model-free methods. Our results support the view that no one method is uniformly better than the other. As we showed in the above examples, in situations where the model error and uncertainty in initial parameters are relatively small, the parametric approach outperforms other prediction methods. Often in experimental studies though, we are not operating in this ideal situation and instead are working with a model that has substantial error with a large uncertainty in parameters which can lead to inaccurate system inference. In situations such as these, nonparametric methods are particularly useful.
The main appeal of the hybrid method is that we can confront these situations without having to completely abandon the use of the mechanistic equations. This is important since mechanistic models often provide valuable information about the underlying processes governing the system dynamics. While we explored in detail the robustness of the hybrid method to large levels of parameter uncertainty, its usefulness stretches well beyond that. In some instances, we may only have a model for some of the states or portions of the model may have higher error than others. By supplementing these parts with their nonparametric representation, the hybrid method would allow us to only use the parts of the model we are confident in and thus improve our analysis.
References
voss
Voss H, Timmer J, Kurths J.
Nonlinear dynamical system identification from uncertain and indirect
measurements.
Int J Bif Chaos. 2002;14:1905–1924.
baake
Baake E, Baake M, Bock H, Briggs K.
Fitting ordinary differential equations to chaotic data.
Physical Review A. 1992;45:5524–5529.
farmer
Farmer J, Sidorowich J.
Predicting chaotic time series.
Phys Rev Lett. 1987;59:845–848.
casdagli1989nonlinear
Casdagli M.
Nonlinear prediction of chaotic time series.
Physica D: Nonlinear Phenomena. 1989;35(3):335–356.
Sugihara:1990aa
Sugihara G, May RM.
Nonlinear forecasting as a way of distinguishing chaos from
measurement error in time series.
Nature. 1990;344(6268):734–741.
smith1992identification
Smith LA.
Identification and prediction of low dimensional dynamics.
Physica D: Nonlinear Phenomena. 1992;58(1):50–76.
jimenez1992forecasting
Jimenez J, Moreno J, Ruggeri G.
Forecasting on chaotic time series: A local optimal
linear-reconstruction method.
Phys Rev A. 1992;45(6):3553.
sauer94
Sauer T.
Time series prediction by using delay coordinate embedding.
In: Time Series Prediction: Forecasting the Future and Understanding
the Past. Addison Wesley; 1994. p. 175–193.
sugihara1994nonlinear
Sugihara G.
Nonlinear forecasting for the classification of natural time series.
Philosophical Transactions of the Royal Society of London Series A:
Physical and Engineering Sciences. 1994;348(1688):477–495.
schroer1998predicting
Schroer CG, Sauer T, Ott E, Yorke JA.
Predicting chaos most of the time from embeddings with
self-intersections.
Phys Rev Lett. 1998;80(7):1410.
kugiumtzis1998regularized
Kugiumtzis D, Lingjærde O, Christophersen N.
Regularized local linear prediction of chaotic time series.
Physica D: Nonlinear Phenomena. 1998;112(3):344–360.
yuan
Yuan G, Lozier M, Pratt L, Jones C, Helfrich K.
Estimating the predictability of an oceanic time series using linear
and nonlinear methods.
J Geophys Res. 2004;109:C08002.
hsieh2005distinguishing
Hsieh CH, Glaser SM, Lucas AJ, Sugihara G.
Distinguishing random environmental fluctuations from ecological
catastrophes for the North Pacific Ocean.
Nature. 2005;435(7040):336–340.
strelioff2006medium
Strelioff CC, Hübler AW.
Medium-term prediction of chaos.
Phys Rev Lett. 2006;96(4):044101.
regonda
Regonda S, Rajagopalan B, Lall U, Clark M, Moon YI.
Local polynomial method for ensemble forecast of time series.
Nonlin Proc in Geophys. 2005;12:397–406.
schelter2006handbook
Schelter B, Winterhalder M, Timmer J.
Handbook of time series analysis: recent theoretical developments and
applications.
John Wiley and Sons; 2006.
hamilton2016
Hamilton F, Berry T, Sauer T.
Ensemble Kalman filtering without a model.
Physcial Review X. 2016;6:011021.
hamilton2
Hamilton F, Berry T, Sauer T.
Predicting chaotic time series with a partial model.
Physical Review E. 2015;92(1):010902.
berry2016
Berry T, Harlim J.
Semiparametric forecasting and filtering: correcting low-dimensional
model error in parametric models.
Journal of Computational Physics. 2016;308:305–321.
lorenz63
Lorenz E.
Deterministic nonperiodic flow.
J Atmos Sci. 1963;20:130–141.
hindmarsh
Hindmarsh J, Rose R.
A model of neuronal bursting using three coupled first order
differential equations.
Proc Roy Soc. 1984;221:87–102.
constantino
Constantino RF, Desharnais RA, Cushing JM, Dennis B.
Chaotic dynamics in an insect population.
Science. 1997;276:1881–1882.
enkf7
Kalnay E.
Atmospheric modeling, data assimilation, and predictability.
Cambridge Univ. Press; 2003.
evensen
Evensen G.
Data assimilation: The Ensemble Kalman Filter.
Springer: Heidelberg; 2009.
rabier
Rabier F.
Overview of global data assimilation developments in numerical
weather-prediction centres.
Quarterly Journal of the Royal Meteorological Society.
2005;131(613):3215–3233.
cummings
Cummings JA.
Operational multivariate ocean data assimilation.
Quarterly Journal of the Royal Meteorological Society.
2005;131(613):3583–3604.
doi:10.1256/qj.05.105.
yoshida
Yoshida K, Yamaguchi J, Kaneda Y.
Regeneration of Small Eddies by Data Assimilation in Turbulence.
Phys Rev Lett. 2005;94:014501.
stuart
Law K, Stuart A.
Evaluating data assimilation algorithms.
Mon Wea Rev. 2012;140:3757–3782.
schiffbook
Schiff SJ.
Neural control engineering.
MIT Press; 2012.
berry2
Berry T, Sauer T.
Adaptive ensemble Kalman filtering of nonlinear systems.
Tellus A. 2013;65:20331.
hamiltonEPL
Hamilton F, Cressman J, Peixoto N, Sauer T.
Reconstructing neural dynamics using data assimilation with multiple
models.
Europhysics Letters. 2014;107:68005.
hamiltonPRE
Hamilton F, Berry T, Peixoto N, Sauer T.
Real-time tracking of neuronal network structure using data
assimilation.
Physical Review E. 2013;88:052715.
ghanim
Ullah G, Schiff S.
Tracking and control of neuronal Hodgkin-Huxley dynamics.
Phys Rev E. 2009;79:040901.
ghanim2
Ullah G, Schiff S.
Assimilating seizure dynamics.
PLoS Computational Biology. 2010;6:e1000776.
sitz2002
Sitz A, Schwarz U, Kurths J, Voss H.
Estimation of parameters and unobserved components for nonlinear
systems from noisy time series.
Physical Review E. 2002;66:16210.
simon
Simon D.
Optimal State Estimation: Kalman, H_∞, and Nonlinear
Approaches.
John Wiley and Sons; 2006.
julier1
Julier S, Uhlmann J, Durrant-Whyte H.
A new method for the nonlinear transformation of means and
covariances in filters and estimators.
IEEE Trans Automat Control. 2000;45:477–482.
julier2
Julier S, Uhlmann J, Durrant-Whyte H.
Unscented filtering and nonlinear estimation.
Proc IEEE. 2004;92:401–422.
takens
Takens F.
Detecting strange attractors in turbulence.
Lecture Notes in Math Springer-Verlag: Berlin. 1981;898:366–381.
SYC
Sauer T, Yorke JA, Casdagli M.
Embedology.
J Stat Phys. 1991;65:579–616.
perretti
Perretti C, Munch S, Sugihara G.
Model-free forecasting outperforms the correct mechanistic model for
simulated and experimental data.
Proceedings of the National Academy of Sciences. 2013;110:5253–5257.
perretti2
Perretti C, Sugihara G, Munch S.
Nonparametric forecasting outperforms parametric methods for a
simulated multispecies system.
Ecology. 2013;94:794–800.
hamiltonEPJ
Hamilton F, Berry T, Sauer T.
Kalman-Takens filtering in the presence of dynamical noise.
Submitted to European Physical Journal.
hartig
Hartig F, Dormann C.
Does model-free forecasting really outperform the true model?
Proceedings of the National Academy of Sciences. 2013;110:E3975.
Faculty of Applied Physics and Mathematics,
Gdansk University of Technology, 80-952 Gdansk, Poland
National Quantum Information Center of Gdansk, Andersa 27, 81-824 Sopot, Poland
03.67.-a, 03.65.Ud
In this paper we present a concept of quantum entanglement in time in the context of entangled consistent histories. These considerations are supported by a presentation of the necessary tools, closely related to those acting on spaces of spatial multipartite quantum states. We show that, in similarity to the monogamy of quantum entanglement in space, quantum entanglement in time is also endowed with this property for a particular history. Building on these observations, we discuss further bounding of temporal correlations and derive analytically the Tsirelson bound implied by entangled histories for the Leggett-Garg inequalities.
Quantum Entanglement in Time
Marcin Nowakowski[Electronic address: mnowakowski@mif.pg.gda.pl]
December 30, 2023
====================================================================
§ INTRODUCTION
Recent years have seen great interest in the concept of quantum entanglement and its broad applications in quantum communication theory. Spatial quantum correlations, and especially their non-locality, have become a central subject of quantum information theory and its applications to quantum computation, yet potential applications of temporal non-local correlations remain poorly analyzed. The crucial issue relates to the very nature of time; thus, the phenomenon of temporal correlations is the subject of many open questions within the framework of modern quantum and relativistic theories.
The non-local nature of quantum correlations in space has been accepted as a consequence of the violation of local realism, expressed in Bell's theorem <cit.> and analyzed in many experiments <cit.>. As an analogue for temporal correlations, the violation of macro-realism <cit.> and of the Leggett-Garg inequalities <cit.> seems to indicate non-local effects in time, and both are the subject of many experimental considerations <cit.>.
In this paper we discuss a variation of the consistent histories approach <cit.> with the introduced concept of entangled histories <cit.>, built on a tensor product of projective Hilbert spaces, which can be considered a candidate mathematical structure for representing quantum states entangled in time.
In particular, we focus on showing that entangled histories demonstrate monogamous properties reflecting the spatial phenomenon.
However, it is crucial to note that in this context many 'obvious' facts about the structure and behavior of spatial correlations, and about the tensor algebra of spatial quantum states, cannot be easily transferred to the temporal domain: the tensor structure of temporal correlations is richer, owing to the binding evolution between instances of 'time' and to the observation-measurement phenomenon, which is also a subject of this paper.
The outline of this paper is as follows: in the first section, we present the key concepts of consistent histories approach <cit.> and present some new concepts related to entangled histories <cit.> which are substantial for analysis of monogamies and entanglement in time as such.
In the section related to monogamy of quantum entanglement in space, we recall the local realistic assumptions about physical reality and their implications articulated as Bell inequalities. We discuss also the concept of monogamy of spatial quantum entanglement.
In the section focused on quantum entanglement in time, we introduce a partial trace on quantum histories and show that quantum entanglement in time is monogamous for a particular history. This section also considers this property from the perspective of Feynman's path-integral approach.
In the final section, the Tsirelson bound on quantum correlations in time for the Leggett-Garg inequalities is derived from entangled histories.
§ ENTANGLED HISTORIES
The decoherent histories theory (or consistent histories theory) has a long tradition <cit.> and is built on the foundation of the well-known and broadly applied Feynman path-integral theory <cit.> for calculating probability amplitudes of quantum processes, especially in quantum field theory and quantum electrodynamics. It is presented also as a generalization of quantum mechanics applied to closed systems such as the universe as a whole, and discussed as a necessary element of a future quantum gravity theory <cit.>.
For readers interested in deepening this matter, it might be useful to refer to the literature <cit.>. In this section we focus on an introduction to the concept of a consistent history and its recent modification, the entangled history <cit.>. We also present a proposal for a temporal partial trace operator <cit.> acting on the 𝒞^*-algebra of history operators, a tool necessary to obtain reduced histories, in analogy to the partial trace operator acting on a multipartite quantum state.
To introduce the concept of a consistent history, it is substantial to note that for an evolving system (e.g. a non-relativistic particle in an initial state |ψ_0⟩ whose evolution is governed by the Hamiltonian H), we can ask questions about the states of the system at different
times t_1<t_2<...<t_n. This can be done through a repeated measurement process in which a question at time t_x is naturally represented by a projector P_x. The alternatives at a given time t_x form an exhaustive orthogonal set of projectors {P_x^α_x} where:
∑_α_xP_x^α_x=𝕀
P_x^α_xP_x^α̃_x=δ_α_xα̃_x P_x^α_x
Therefore, the alternative histories can be represented by the sets of alternative operators {P_1^α_1}, {P_2^α_2},…, {P_n^α_n} at different times t_1<t_2<...<t_n. A particular history is then represented as a tensor product Proj(ℋ)∋ |H)=P_n^α_n⊙ P_n-1^α_n-1⊙…⊙ P_1^α_1. This can be interpreted as the system having had the property P_i^α_i at time t_i <cit.>.
During this process, the global state of the system is projected onto the n-fold tensor product ⊙_i=1^n P_i^α_i, yielding a consistent wave function which can be used to deduce the probabilities of events <cit.> in accordance with the Born rule.
The fundamental tool introduced in the consistent history framework which connects different times is the bridging operator <cit.> ℬ(t_2,t_1). It is a counterpart of a unitary evolution operator and has the following properties:
ℬ(t_2,t_1)^† = ℬ(t_1,t_2)
ℬ(t_3,t_2)ℬ(t_2,t_1) = ℬ(t_3,t_1)
and can be represented for a unitary quantum evolution as ℬ(t_2,t_1)=exp(-iH(t_2-t_1)) (with the evolution governed be a Hamiltonian H).
Since we assumed for a given time that ∑_α_xP_x^α_x=𝕀, for the sample space of consistent histories |H^α)=P_n^α_n⊙ P_n-1^α_n-1⊙…⊙ P_1^α_1⊙ P_0^α_0 (α=(α_n, α_n-1,…, α_0)) there holds ∑_α|H^α)=𝕀.
Further, the consistent histories formalism introduces the chain operator K(|H^α)) which can be directly associated with a time propagator of a given quantum process:
K(|H^α))=P_n^α_nℬ(t_n,t_n-1) P_n-1^α_n-1…ℬ(t_2,t_1)P_1^α_1ℬ(t_1,t_0)P_0^α_0
Equipped with this operator, one can associate a history |H^α) with its weight:
W(|H^α))=TrK(|H^α))^†K(|H^α))
which, by the Born rule, is a counterpart of a relative probability and can be interpreted as the probability of a history's realization.
As an example, suppose that the system is in a state |ψ_0⟩∈ℋ at time t_0 and evolves under the bridging operator ℬ(t_1,t_0); then, applying the Born rule, one can determine the probability that the system at time t_1 has a property P_t_1:
Pr(P_t_1,t_1) = P_t_1ℬ(t_1,t_0)|ψ_0⟩^2
= ⟨ψ_0 |ℬ^†(t_1,t_0) P_t_1ℬ(t_1,t_0)|ψ_0⟩
= Tr(ℬ^†(t_1,t_0) P_t_1ℬ(t_1,t_0)[ψ_0])
where [ψ_0]=|ψ_0⟩⟨ψ_0|; this notation is used in what follows.
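To make the chain-operator bookkeeping explicit, the following small numpy sketch for a single-qubit, two-time history (the rotation angle and the projectors are arbitrary choices) verifies that the weight Tr K^†K reproduces the Born-rule probability above.

```python
import numpy as np

psi0 = np.array([1.0, 0.0])                      # |psi_0> = |0>
P_t0 = np.outer(psi0, psi0)                      # [psi_0]
theta = np.pi / 3                                # arbitrary rotation angle
B = np.array([[np.cos(theta), -np.sin(theta)],   # bridging operator B(t_1,t_0)
              [np.sin(theta),  np.cos(theta)]])
P_t1 = np.diag([1.0, 0.0])                       # property P_t1 = |0><0|

K = P_t1 @ B @ P_t0                              # chain operator K(|H))
weight = np.trace(K.conj().T @ K).real           # W(|H)) = Tr K^dagger K
born = np.linalg.norm(P_t1 @ B @ psi0) ** 2      # ||P_t1 B(t_1,t_0)|psi_0>||^2
assert np.isclose(weight, born)                  # both equal cos^2(theta)
```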
The set of histories is coarse-grained as the alternatives are defined for chosen times, yet not for every possible time <cit.>. It means that the set of potential histories is partitioned into the set of mutually exclusive classes called coarse-grained histories, those which are observable during the process of measurements. Coarse graining of measurements is a natural feature of "standard" quantum mechanics. The consistent histories theory describes also fine-grained histories and relations between the sets of coarse-grained and fine-grained histories, however, this is not a subject of this presentation and it does not change generality of the following conclusions.
Recent years have also seen an extensive discussion of the so-called consistency, or decoherence, of allowed histories <cit.>, which is directly related to the degree of interference between pairs of histories in the set of histories. The consistent histories framework assumes that the family [The family of consistent histories is such a set of histories ℱ={|H^α)}_α=(α_n, α_n-1,…, α_0) that ∑_α|H^α)=𝕀 and any pair of histories from the set meets the consistency condition.]
of histories is consistent, i.e. one can associate with a union of histories a weight equal to the sum of the weights
associated with the particular histories included in the union <cit.>. This implies the following consistency condition:
(H^α|H^β) ≡ TrK(|H^α))^†K(|H^β)) = 0 for α≠β
(H^α|H^β) = 0 or 1
∑_α c_α|H^α) = I for c_α∈ℂ
Different conditions on the so-called decoherence functional TrK(|H^α))^†K(|H^β)) have been discussed, including the weaker condition TrK(|H^α))^†K(|H^β))≈δ_αβ P(α) (medium decoherence, with P(α) standing for the probability of a history |H^α)) and the linear positivity condition of Goldstein and Page <cit.>; however, as observed by F. Wilczek <cit.>, it is unclear at this moment whether the variants are significant.
It is helpful to assume normalization of histories with non-zero weight which enables normalization of probability distributions for history events, i.e.: |H)=|H)/√((H|H)) <cit.>.
If the observed system starts its potential history in a pure state
P_t_0=|Ψ_0⟩⟨Ψ_0|, then a consistent set of its histories creates a tree-like structure (Fig. 1). Further, the consistency condition implies that the tree branches are mutually orthogonal.
The consistent history framework does not consider non-locality in space or time as such <cit.>; however, since the space of histories spans a complex vector space, we can consider
complex combinations of history vectors, i.e. any history can be represented as <cit.>:
|Ψ)=∑_iα_i|H^i)
where α_i∈ℂ and the histories |H^i)∈ℱ form a consistent family; this is effectively a complex extension of the consistent histories framework.
With the above definitions, the history space can also be equipped with a positive semi-definite inner product <cit.> between any two histories |Ψ) and |Φ):
(Ψ|Φ)=Tr [K(|Ψ))^†K(|Φ))].
It is fundamental to note that a history |H^α) can be consistent or inconsistent (physically not realizable) depending on the associated evolution ℬ of the system <cit.>, since its consistency is verified by means of the aforementioned inner product involving bridging operators. Thus, a temporal history is always associated with an evolution, and for completeness one should consider the pair {ℱ, T} consisting of a family of histories and the bridging operators. Whenever we analyze features of a spatial pure quantum state, it is assumed that all necessary knowledge is contained in the vector |ψ⟩, so in fact we analyze only one-element history objects [ψ]=|ψ⟩⟨ψ| from the perspective of a temporal local frame.
§ MONOGAMY OF QUANTUM ENTANGLEMENT IN SPACE
Quantum entanglement is a phenomenon with no counterpart in the classical world and, as such, is a manifestation of the so-called non-locality of quantum correlations. The roots of studies in this matter reach back to 1935, when the famous paper by Einstein, Podolsky and Rosen <cit.> was published on what are nowadays called EPR pairs, which are in the maximally entangled state |Ψ^-⟩=1/√(2)(|01⟩-|10⟩) shared between two particles. In such a case neither of the subsystems can be assigned a pure state.
In particular, many entangled states violate local realism and, as a consequence, Bell inequalities <cit.>. Local realism has its roots in the classical world-view, in which the measured physical quantities characterizing a physical object are believed to have values set a priori, independent of the observers (realism), and, for a bipartite setup, the measurement at one site does not influence the results of the other site's measurements (locality):
Realism. The physical quantities being a subject of the measurements have definite real values which exist independent of the observation act.
Locality. The results of measurements performed by Alice do no influence the results of measurements performed by Bob.
It is worth mentioning that the experiment is arranged in such a way that for two parties Alice and Bob, their experiments are causally disconnected. Thus, the measurement performed by Alice cannot influence the measurements done by Bob due to the light speed limit imposed by the special relativity theory.
To analyze correlations between the results achieved in the experiment performed by Alice and Bob, imagine that they share a bipartite physical system consisting of two spatially separated sub-systems that could have interacted in the past and which will be subjected to local measurements in distant laboratories belonging to Alice and Bob, respectively (i.e. the distant-lab paradigm). Now, we can assign conditional probabilities P(a,b|x,y) to the measurement results, where x and y stand for the measurement settings chosen locally by Alice and Bob respectively, and a and b for the measurement outcomes. Note that the measurement outcomes can naturally be inter-dependent, i.e. P(a,b|x,y)≠ P(a|x)P(b|y) - the dependency can be created by a local hidden variable λ∈Λ that the experimenters are not aware of.
The hidden variables are a building block behind Bell inequalities and as such represent a hidden knowledge that cannot be possessed during the measurement process but can influence the measurement results and correlate them. The hidden variable is obviously also pre-defined in accordance with the local realism.
Since, in local hidden variable (LHV) models, the local measurement results depend only on the x-settings and the λ-variable for Alice, and respectively on the y-settings and the λ-variable for Bob, and moreover we assume locality, then:
P(a,b|x,y,λ)=P(a|x,λ)P(b|yλ)
For a discrete distribution of λ on the space Λ, after many measurement series we obtain (this reflects the random character of λ over many measurements repeated on the system):
P(a,b|x,y)=∑_λ∈Λp(λ) P(a|x,λ)P(b|y,λ)
For a continuous distribution of λ on the space Λ, we get the local hidden variable model:
P(a,b|x,y)=∫_Λ p(λ) P(a|x,λ)P(b|y,λ) dλ
As a consequence of the local realism, every linear combination over such probabilities, meeting local realism conditions, builds the famous Bell inequalities for bipartite setup 𝐁(A,B) of an experiment performed by Alice and Bob. The Bell inequalities can be represented as a linear combination of conditional probabilities P(a,b|x,y) (R is a local realistic bound - a real number):
𝐁(A,B)≡∑_xy∑_abα(a,b,x,y)P(a,b|x,y)≤ R
where the parameters α(a,b,x,y)≥ 0 characterize the specific Bell inequality (since any Bell operator is a linear operator over conditional probabilities, one can always re-scale some initially negative α(a,b,x,y) parameters so that in the re-scaled inequality α(a,b,x,y)≥ 0).
These inequalities have to be met by all classical correlations with the aforementioned probability distributions P(a,b|x,y) built for LHV models.
As observed, many entangled states violate the Bell inequalities, and e.g. the state |Ψ_-⟩=1/√(2)(|01⟩-|10⟩) - a maximally entangled state on ℂ^2⊗ℂ^2 - violates the famous CHSH <cit.> inequality maximally, saturating the Tsirelson bound <cit.>. In general, there exists a Bell inequality for any non-separable state which is violated by this state - this is an implication of the Hahn-Banach theorem for convex sets.
One of the fundamental questions related to quantum entanglement, as a resource shared between two parties Alice and Bob, is whether the correlations can be shared among more parties. The question is fundamental not only due to applications in quantum computation or quantum cryptography but also due to the very nature of processing information between physical systems at different levels of complexity. It turns out that the shareability of quantum correlations is bounded, and this has its roots in the monogamy of quantum entanglement.
One can refer to a broadly used explanation <cit.> for spatial monogamy of entanglement
between parties ABC. It states that A cannot be simultaneously fully entangled with B and C, since then AB would be entangled with C and would have a mixed density matrix, which contradicts the purity of the singlet state shared between A and B. It is expressed in the Coffman-Kundu-Wootters (CKW) <cit.> monogamy inequality for a three-qubit system in a state ρ_ABC:
C^2(ρ_A|BC) ≤ C^2(ρ_AB) + C^2(ρ_AC)
where C(·) stands for the concurrence between the parties (e.g. C^2(ρ_A|BC) between A and BC subsystems). C(ρ_AB) is an entanglement monotone, and is defined as the averaged concurrence of an ensemble of pure states {p_i, |Ψ^AB_i⟩} corresponding to ρ_AB minimized over all pure decompositions of ρ_AB=∑_ip_i|Ψ^AB_i⟩⟨Ψ^AB_i| <cit.>:
C(ρ_AB)=inf∑_ip_iC(|Ψ^AB_i⟩)
and respectively for all other states. Concurrence of a pure state is C(|Ψ^AB⟩)=√(2[1-Tr(ρ^2_A)]) and ρ_A=Tr_B|Ψ^AB⟩⟨Ψ^AB|.
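For pure states the above definition is directly computable; a short sketch (the reshaping convention assumes |ψ⟩ is given in the product basis) recovers C = 1 for the singlet.

```python
import numpy as np

def concurrence_pure(psi, dA, dB):
    """C(|psi>) = sqrt(2(1 - Tr rho_A^2)) for a bipartite pure state given
    as a vector in the product basis of C^dA x C^dB."""
    m = psi.reshape(dA, dB)
    rhoA = m @ m.conj().T                        # rho_A = Tr_B |psi><psi|
    purity = np.trace(rhoA @ rhoA).real
    return np.sqrt(max(0.0, 2.0 * (1.0 - purity)))

singlet = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
print(concurrence_pure(singlet, 2, 2))           # 1.0: maximal entanglement
```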
If a bipartite system is in the state ρ_AB=|Ψ^-⟩⟨Ψ^-|, then clearly the only possible tripartite extensions are of the form ρ_ABE=ρ_AB⊗ρ, i.e. no symmetric extension of |Ψ^-⟩ exists. This is also an immediate implication of the Schmidt decomposition: any purification of ρ_ABE to a state Ψ_ABEE' has to decompose into a factorized state Ψ_ABEE'=|Ψ^-⟩⊗|Φ_EE'⟩ if for its reduction AB one wants to get ρ_AB=|Ψ^-⟩⟨Ψ^-|. Thus, we get at least two proofs of the spatial monogamy of entanglement, one based on entanglement measures and one based on purely geometrical considerations.
§ QUANTUM ENTANGLEMENT IN TIME
We consider in this section a concept of entanglement in time basing on the entangled consistent histories framework.
Since the algebra of histories with the ⊙ operation is a form of tensor algebra, it inherits the properties of a standard tensor algebra with the ⊗ operation, and all mathematical questions valid for vectors representing spatial correlations are equally valid for temporal correlations, although they do not necessarily have a similar physical interpretation <cit.>.
Quantum entanglement for spatial correlations shared between two parties A and B, say Alice and Bob, denotes that the state ρ_AB∈ℬ(ℋ_A⊗ℋ_B) of their bipartite system cannot be represented as a convex combination ρ_AB=∑_ip_iρ_A^i⊗ρ_B^i (which represents a separable state). The maximally entangled state of a bipartite system shared between Alice and Bob in space is represented as |Ψ_AB⟩=1/√(d)∑_i |ii⟩. For the sake of spatial quantum entanglement, it is crucial to define the reductions of multipartite states and their extensions <cit.>. To find a reduced state ρ_A of a local state possessed e.g. by Alice, it is necessary to trace out Bob's system from the bipartite state ρ_AB which is performed by the partial trace operation:
ρ_A=Tr_Bρ_AB=∑_i⟨ i_B|ρ_AB|i_B ⟩
where the operation can be performed in a computational basis |i_B⟩ of B-subsystem.
We will conduct further a similar reasoning for reductions of entangled histories, defining an operation of a partial trace over chosen times of a particular history state.
Now, it is substantial to note that any history |Y)=F_n⊙…⊙ F_0 can be extended to I⊙ |Y), since the identity I represents a property that is always true and does not introduce additional
knowledge about the system.
Conversely, if one considers the reduction of a history to a smaller number of time frames, then information about the past and future of the reduced history is lost. Let us consider the potential history of the
physical system |Y_t_n… t_0)=F_n⊙ F_n-1⊙…⊙ F_2⊙ F_1⊙ F_0 on times {t_n… t_0}, then at time t_1 the reduced history is |Y_t_1)=F_1. That was the trivial case of factorizable history, in analogy to factorizable quantum states in space, e.g. for |ϕ_ABE⟩=|ϕ_A⟩⊗|ϕ_B⟩⊗|ϕ_E⟩, the reduction over E results in |ϕ_AB⟩=|ϕ_A⟩⊗|ϕ_B⟩. To find reductions over complex superpositions of histories, it is necessary to define a partial trace operator over a history.
In analogy to partial trace for spatial quantum states, we introduced in <cit.> a partial trace operation on a history state in accordance with general rules of calculating partial traces on tensor algebras:
For a history |Y_t_n… t_0) acting on a space ℋ=ℋ_t_n⊗…⊗ℋ_t_0, a partial trace over times {t_j… t_i+1 t_i} (j≥ i) is:
Tr_t_j… t_i+1t_i |Y_t_n… t_0)(Y_t_n… t_0|=∑_k=1^ℱ (e_k|Y_t_n… t_0)(Y_t_n… t_0|e_k)
where ℱ={|e_k)} creates an orthonormal consistent family of histories on times {t_j… t_i+1 t_i} and the strong consistency condition for partial histories holds for base histories, i.e. (e_i|e_j)=Tr[K(|e_i))^†K(|e_j))]=δ_ij.
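Numerically, this temporal partial trace is the same tensor contraction as its spatial counterpart; the sketch below (the slot ordering and the requirement that `keep` be sorted are our conventions) also illustrates the reduction of an entangled two-time history to a mixed single-time object.

```python
import numpy as np

def history_partial_trace(rho, dims, keep):
    """Trace a history operator rho over all time slots not listed in
    `keep` (keep must be sorted); dims gives the dimension of each slot."""
    n = len(dims)
    rho = rho.reshape(dims + dims)
    for k, ax in enumerate(sorted(i for i in range(n) if i not in keep)):
        a = ax - k                               # axes shift after each trace
        rho = np.trace(rho, axis1=a, axis2=a + n - k)
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)  # (|e_0)⊙|e_0)+|e_1)⊙|e_1))/√2
rho_red = history_partial_trace(np.outer(psi, psi), [2, 2], keep=[0])
print(rho_red)                                   # I/2: the reduction is mixed
```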
We propose further a general form of a maximally entangled history, in similarity to the maximally entangled state of a bipartite system in space, |Ψ_+⟩=1/√(N)∑_i=1^N|i⟩⊗|i⟩, 2≤ N<∞:
A history state 'maximally entangled' in time is represented by:
|Ψ)=1/√(N)∑_i=1^N|e_i)⊙|e_i), 2≤ N<∞
with a trivial bridging operator I and {|e_i)} creating an orthonormal consistent histories family.
It is important to note that one can always employ a bridging operator for which |Ψ) becomes intrinsically inconsistent, meaning it would be dynamically impossible <cit.>; thus, an identity bridging operator is associated with the above state.
It is worth mentioning that recently <cit.> the concept of Bell-like tests has been proposed for the experimental analysis of entangled histories.
We further consider the Mach-Zehnder interferometer (Fig. 2, with H = 1/√(2) [ 1 1; 1 -1 ]) to discuss the matter of monogamy of quantum entanglement in time <cit.>.
In the following let us consider an intrinsically consistent history on times {t_3, t_2, t_1, t_0}:
|Λ)=α([ϕ_3,1]⊙ I_t_2⊙ [ϕ_1,1]+[ϕ_3,2]⊙ I_t_2⊙ [ϕ_1,2])⊙ [ϕ_0]
where α stands for the normalization factor, [ϕ_i,j]=|ϕ_i,j⟩⟨ϕ_i,j| and potentiality of the history means that one can construct a history observable Λ=|Λ)(Λ|.
Now, after tracing out the time t_2, one gets the reduced history on times t_1 and t_3:
|Λ_1)=α̃([ϕ_3,1]⊙ [ϕ_1,1]+[ϕ_3,2]⊙ [ϕ_1,2])
which displays entanglement in time apparently.
Notably, we have to show that, to be in agreement with the partial trace definition and the Feynman propagator formalism <cit.>, the history |Λ_1) cannot be extracted from
the following |τ GHZ)-like state |Ψ), i.e. |Λ_1)(Λ_1|≠ Tr_t_2|Ψ)(Ψ| <cit.>.
We stress that the history state |Ψ) is also allowed in the setup of the aforementioned interferometer (Fig. 2) as a potential history:
|Ψ)=γ([ϕ_3,1]⊙ [ϕ_2,1]⊙ [ϕ_1,1]+[ϕ_3,2]⊙ [ϕ_2,2]⊙ [ϕ_1,2])
We observe that the reduced history [ϕ_3,1]⊙ [ϕ_1,1] is correlated with [ϕ_2,1] and not with [ϕ_2,2]. Thus, we cannot simply add the histories [ϕ_3,1]⊙ [ϕ_1,1]+[ϕ_3,2]⊙ [ϕ_1,2] as a reduction of |Ψ) over time t_2.
It would imply decorrelation from the next instance of the history in such a case, i.e. it could always be expanded to a history e.g. [ϕ_t_x]⊙([ϕ_3,1]⊙ [ϕ_1,1]+[ϕ_3,2]⊙ [ϕ_1,2]). This result is
in agreement with Feynman's addition rule for probability amplitudes, since that scenario would mean, e.g., the existence of detectors at the intermediate step performing measurements of the light states.
Imagine that for a maximally entangled history (<ref>) ρ_t_1t_2=|Ψ)(Ψ| on times {t_1, t_2} there exists a purification to a history state |H_t_1t_2t_3t_4). Then, in accordance with the partial trace definition (<ref>), the maximally entangled history would have to be a reduction of |H_t_1t_2t_3t_4)(H_t_1t_2t_3t_4|, i.e. |Ψ)(Ψ|=∑_i(e_i|⊙ I_t_1t_2 |H_t_1t_2t_3t_4)(H_t_1t_2t_3t_4| I_t_1t_2⊙ |e_i) for some consistent history family ℱ={|e_i)} on times {t_3,t_4}. Due to the consistency condition, one gets |Ψ) only if |H_t_1t_2t_3t_4)=|H_t_3t_4)⊙ |Ψ) (for some history |H_t_3t_4)=∑_iγ_i|e_i), where the γ_i are complex numbers); otherwise the reduction would be a mixture of some consistent histories from the family. One can also observe immediately that |Ψ)(Ψ|=∑_i(e_i|⊙ I_t_1t_2 |H_t_1t_2t_3t_4)(H_t_1t_2t_3t_4| I_t_1t_2⊙ |e_i) implies that for any family base vector |e_i), |Ψ)=γ_i (e_i|⊙ I_t_1t_2 |H_t_1t_2t_3t_4) (for some complex amplitude γ_i) and (e_i|⊙(Ψ|Ψ')⊙ |e_i)=0 for any orthogonal |Ψ) and |Ψ'). Thus, |H_t_1t_2t_3t_4)=∑_i γ_i|e_i) ⊙ |Ψ).
It is important to note that these considerations are related to the observable |Ψ)(Ψ| and the particular history |Ψ); yet other histories in the Mach-Zehnder interferometer are also accessible. This shows clearly the physical sense of quantum entanglement in time and, further, the concept of its monogamy for a particular entangled history.
Therefore, based on the above observations, we find a temporal monogamy phenomenon for a particular entangled history of similar nature to the spatial monogamy of quantum states <cit.>. On the grounds of the consistent histories approach, it implies that we cannot build a tripartite (i.e. defined on three different times) history state ρ_t_3t_2t_1 where
ρ_t_3t_2=ρ_t_2t_1=|Ψ )( Ψ| and Tr_t_1ρ_t_3t_2t_1=ρ_t_3t_2.
Besides the aforementioned reasoning derived from Feynman's quantum paths, one can refer to a broadly used explanation <cit.> for spatial monogamy of entanglement
between parties ABC (or times {t_3, t_2, t_1} for temporal correlations). As mentioned in the previous section, it states that A cannot be simultaneously fully entangled with B and C, since then AB would be entangled with C and would have a mixed density
matrix, contradicting the purity of the maximally entangled state shared between A and B.
For history spaces one can naturally build a 𝒞^*-algebra of history operators equipped with the partial trace operation (<ref>) and follow the same reasoning for entangled histories.
We can summarize these considerations with the following corollary about the monogamy of temporal entangled histories <cit.>:
Corollary 1.
There does not exist a history |H)∈ Proj(ℋ^⊗ n) such that for three chosen times {t_3,t_2,t_1} one can find reduced histories |Ψ_t_3t_2)=1/√(2)(|e_0)⊙|e_0)+|e_1)⊙|e_1)) and |Ψ_t_2t_1)=1/√(2)(|e_0)⊙|e_0)+|e_1)⊙|e_1)).
This corollary holds for any finite dimension n and also for general entangled states of the form (<ref>).
§ BOUNDING TEMPORAL CORRELATIONS
The violation of local realism (LR) <cit.> and of macrorealism (MR) <cit.> in relation to quantum theories has been studied for many years in experimental setups where measurement outputs are tested against the violation of Bell inequalities for LR and of Leggett-Garg inequalities (LGI) <cit.> for MR. For quantum theories, the former arises as a consequence of non-classical correlations in space, while the latter arises as a consequence of the non-classicality of dynamical
evolution. Macrorealism consists of the following assumptions about reality:
Macrorealism. A physical object is at any given time in a definite quantum state.
Noninvasive measurability. It is possible to determine the state of the object without any effect on the state and the subsequent evolution.
Induction. The properties of an ensemble of quantum states are determined by the initial conditions exclusively (and not by the final conditions.)
In this section we recall the result <cit.> that the entangled histories approach gives the same well-known Tsirelson bound <cit.> on quantum correlations for the LGI as quantum entangled states give for the CHSH inequalities in the case of bipartite spatial correlations, a bound which is saturated by quantum mechanical probability distributions.
We take a temporal version of the CHSH inequality, which is a modification of the Leggett-Garg inequalities. Alice performs a measurement at time t_1, choosing between two dichotomic observables {A_1^(1), A_2^(1)}. Bob performs a measurement at time t_2, choosing between {B_1^(2), B_2^(2)}.
Then, for such a scenario the Leggett-Garg inequality can be represented in the following form <cit.>:
S_LGI≡ c_12+c_21+c_11-c_22≤ 2
where c_ij=⟨ A_i^(1), B_j^(2)⟩ stands for the expectation value of consecutive measurements performed at times t_1 and t_2.
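For orientation, the quantum value of this combination is easy to evaluate in the simplest qubit scenario: for sequential projective spin-1/2 measurements along coplanar directions, the two-time correlator reduces to c_ij = cos(a_i - b_j) independently of the initial state. A short Python check with the standard optimal angles (our illustrative choice) recovers the bound discussed below:

```python
import numpy as np

def c(a, b):
    # Two-time correlator of sequential projective qubit measurements
    # along in-plane directions a, b: E = cos(a - b), state-independent.
    return np.cos(a - b)

a1, a2 = 0.0, np.pi / 2           # Alice's settings at time t1
b1, b2 = np.pi / 4, -np.pi / 4    # Bob's settings at time t2

S_LGI = c(a1, b2) + c(a2, b1) + c(a1, b1) - c(a2, b2)
print(S_LGI, 2 * np.sqrt(2))      # both print 2.8284..., above the bound 2
```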
Since history operators build a 𝒞^*-algebra for normalized histories from projective Hilbert spaces equipped with a well-defined inner product, one can reason about bounding the LGI purely on the space of entangled histories, and achieve the quantum bound 2√(2) of the CHSH inequality specific to spatial correlations.
The importance of this analytical result is due to the fact that previously it was derived using convex optimization methods by means of semi-definite programming <cit.> and by means of correlator spaces <cit.> (related to conditional probability distributions of consecutive events).
We now recall the theorem by B. S. Cirel'son about bounds on Bell's inequalities, which is broadly used for finding quantum bounds on spatial Bell inequalities:
<cit.>
1. There exists 𝒞^*-Algebra 𝒜 with identity, Hermitian operators A_1,…, A_m, B_1,…, B_n∈𝒜 and a state f on 𝒜 so that for every k,l:
A_kB_l=B_lA_k; -𝕀≤ A_k≤𝕀; -𝕀≤ B_l≤𝕀; f(A_kB_l)=c_kl.
2. There exists a density matrix W such that for every k,l:
Tr(A_kB_lW)=c_kl and A_k^2=𝕀; B_l^2=𝕀.
3. There are unit vectors x_1, …, x_m, y_1,…, y_n in a (m+n)-dimensional Euclidean space such that:
⟨ x_k,y_l⟩ = c_kl.
For a temporal setup one considers measurements 𝔸=I ⊙𝔸^(1) (measurement 𝔸 occurring at time t_1) and 𝔹=𝔹^(2)⊙ I, which act in exact analogy to the proof of the above theorem for a spatial setup <cit.>.
The history with 'injected' measurements can be represented as |H̃)=α𝔸𝔹|H)𝔸^†𝔹^†, where α stands for a normalization factor.
The history observables are Hermitian history state operators whose eigenvectors can generate a consistent history family <cit.>.
For an exemplary observable A=∑_ia_i|H_i)(H_i|, its measurement on a history |H) generates an expectation value ⟨ A⟩=Tr(A|H)(H|) (i.e. the result a_i is achieved with probability |(H|H_i)|^2) in analogy to the spatial case.
Thus, one obtains the history |H̃) as the realized history with measurements, together with the expectation value of the history observable ⟨ A ⟩.
It is noticeable that |H) and |H̃) are compatible histories, i.e. related by a linear transformation. Based on these results, we can state the following lemma:
For any history density matrix W and Hermitian dichotomic history observables A_i=I⊙ A_i^(1) and B_j= B_j^(2)⊙ I, where i,j ∈{1,2}, the following bound holds:
S_LGI = c_11+c_12+c_21-c_22
= Tr((A_1B_1+A_1B_2+A_2B_1-A_2B_2)W)
≤ 2√(2)
The proof of this observation can be performed in analogy to the spatial version of the CHSH-Bell inequality, under the assumption that the states are represented by entangled history states and that there are two possible measurements {A_1^(1), A_2^(1)} at time t_1 and two measurements {B_1^(2), B_2^(2)} at time t_2. These operators can be of dimension 2×2, meeting the condition A_i^2=B_j^2=I; therefore, they can be interpreted as spin components along two different directions. In consequence, it is well known that the above inequality is saturated at 2√(2) for a linear combination of tensor spin correlators, which holds also for temporal correlations.
Additionally, one can also apply to this temporal inequality the reasoning based on the following observation <cit.>, which holds for the temporal scenario as well due to the structure of the 𝒞^*-algebra of history operators with the ⊙-tensor operation:
A_1B_1+A_1B_2+A_2B_1-A_2B_2 ≤
1/√(2)(A_1^2+A_2^2+B_1^2+B_2^2) ≤
2√(2)I.
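This operator inequality can be verified numerically. The sketch below assumes, for illustration only, that the dichotomic history observables can be represented by ordinary matrix tensor products; the largest eigenvalue of the CHSH/LG combination then equals 2√(2):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)    # Pauli matrices act as
sz = np.array([[1, 0], [0, -1]], dtype=complex)   # dichotomic observables

A1, A2 = sz, sx
B1 = (sz + sx) / np.sqrt(2)    # rotated spin components, B1^2 = B2^2 = I
B2 = (sz - sx) / np.sqrt(2)

K = (np.kron(A1, B1) + np.kron(A1, B2)
     + np.kron(A2, B1) - np.kron(A2, B2))

print(np.max(np.linalg.eigvalsh(K)))   # 2.8284... = 2*sqrt(2)
```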
§ CONCLUSIONS
In this paper we presented a concept of quantum entanglement in time on the grounds of the consistent histories framework, in its extended version including entangled histories.
We introduced a necessary partial trace operator over histories, which simplifies the analysis of reduced histories. Moreover, we discussed the monogamous properties of a particular quantum entangled history, proving that quantum entanglement in time has properties similar to quantum entanglement in space.
It has also been pointed out that a Tsirelson-like bound can be calculated analytically for the Leggett-Garg inequalities by applying entangled histories, which is a new result in comparison with the limits calculated numerically by means of semi-definite programming.
However, there are still many open problems and questions related to this field. The entangled histories approach is a substantial modification of the original consistent histories approach, especially in relation to entanglement in time, which introduces non-locality in time into the theory.
Future research can focus on the analysis of non-locality in time and on finding more appropriate mathematical structures that will enable easier analysis of measurements in different reference frames.
Monogamy of entanglement in time and non-locality in time can most likely be applied to quantum cryptography, and should give new insights into non-sequential quantum algorithms and information processing. Finally, this matter is fundamental for understanding relativistic quantum information theory and brings new prospects for quantum gravity theory.
§ ACKNOWLEDGMENTS
We thank Pawel Horodecki for critical comments and discussions related to this paper. This work was supported partially by the ERC
grant QOLAPS. Part of this work was performed at the National Quantum Information Center in Gdansk.
Aspect
A. Aspect, J. Dalibard, G. Roger, Phys. Rev. Lett. 49, 1804 (1982).
Bell
J. S. Bell, Physics 1 (3), 195 (1964).
CHSHClauser
J.F. Clauser, M.A. Horne, A. Shimony, and R.A. Holt,
Phys. Rev. Lett. 23, 880 (1969).
Wootters
Valerie Coffman, Joydip Kundu, William K. Wootters, Phys. Rev. A 61, 052306 (2000).
EPR
A. Einstein, B. Podolsky, N. Rosen, Phys. Rev. 47, 777 (1935).
Freedman
S. J. Freedman, J. F. Clauser, Phys. Rev. Lett. 28, 938 (1972).
EX1
C. Robens et al., Phys. Rev. X 5, 011003 (2015).
EX2
A. Asadian, C. Brukner and P. Rabl, Phys. Rev. Lett. 112, 190402 (2014).
EX3
H. Katiyar et al., Phys. Rev. A 87, 052102 (2013).
EX4
A. M. Souza, I. S. Oliveira and R. S. Sarthour, New J. Phys. 13, 053023 (2011).
LGI2
A. J. Leggett, J. Phys.: Condens. Matter 14, R415 (2002).
MNPH4
M. L. Nowakowski, J. Phys. A: Math. Theor. 49, 385301 (2016).
MNPH5
M. L. Nowakowski, Monogamy of quantum entanglement in time, Preprint quant-ph/1604.03976 (2016).
MNPH3
M. Nowakowski, P. Horodecki, in preparation.
Vaidman
Y. Aharonov, L. Vaidman, The two-state vector formalism of quantum mechanics, in Time in Quantum Mechanics, Springer, 369 (2002).
RG1
R. Griffiths, Journal of Statistical Physics 36.1-2, 219-72 (1984).
RG2
R. Griffiths, Phys. Rev. Lett 70, 2201-204 (1993).
RG3
R. Griffiths, Consistent Quantum Theory, Cambridge: Cambridge UP, 2002.
RG4
R. Griffiths, Consistent Quantum Measurements, Preprint quant-ph/1501.04813 (2015).
RG5
R. Griffiths, Phys. Rev. A 54, 2759 (1996).
RG6
R. Griffiths, R. Omnès, Physics Today 52, 26-31 (1999).
RG7
R. Griffiths, Private communication.
Goldstein
S. Goldstein, D. Page, Phys. Rev. Lett. 74, 3715 (1995).
Hartle1
J. B. Hartle, Generalizing Quantum Mechanics for Quantum Spacetime, The Quantum Structure of Space and Time: ed. by D. Gross, M. Henneaux, and A. Sevrin, World Scientific, Singapore, (2007).
Hartle2
J. B. Hartle, Phys. Rev. A, 70, 02210 (2004).
Hartle3
J. B. Hartle, Phys.Rev. A, 69, 042111 (2004).
Hartle4
M. Gell-Mann, J. B. Hartle, Phys. Rev. A 89, 052125 (2014).
CJ1
C. J. Isham, Journal of Math. Phys. 35, 2157 (1994).
CJ2
C. J. Isham and N. Linden, Journal of Math.
Phys. 35, 5452 (1994).
WC1
J. Cotler, W. Wilczek, Entangled Histories, Preprint quant-ph/1502.02480 (2015).
WC2
J. Cotler, W. Wilczek, Bell Tests for Histories, Preprint quant-ph/1503.06458 (2015).
Feynman
R. P. Feynman, Space-time approach to non-relativistic quantum mechanics, Rev. Mod. Phys.
20, 367 (1948).
Tsirelson
B. S. Cirel'son, Lett. Math. Phys. 4, 93-100 (1980).
LGI
A. J. Leggett and A. Garg, Phys. Rev. Lett. 54, 857 (1985).
MRealism
A. J. Leggett, J. Phys.: Cond. Mat. 14, R415 (2002).
Vedral
C. Brukner, S. Taylor, S. Cheung, V. Vedral, Quantum Entanglement in Time, Preprint quant-ph/0402127, (2004).
Fritz
T. Fritz, New J. Phys. 12, 083055 (2010).
Budroni
C. Budroni, T. Moroder, M. Kleinmann, O. Gühne, Phys. Rev. Lett. 111, 020403 (2013).
|
http://arxiv.org/abs/1701.07916v1 | 20170127005410 | Condensation of neutral vector bosons with magnetic moment | [
"Gretel Quintero Angulo",
"Aurora Perez Martinez",
"Hugo Perez Rojas"
] | hep-ph | [
"hep-ph"
] |
Condensation of neutral vector bosons with magnetic moment
Gretel Quintero Angulo
Facultad de Física, Universidad de la Habana,
San Lázaro y L, Vedado, La Habana 10400, Cuba
gquintero@fisica.uh.cu
Aurora Pérez Martínez
Departamento de Física Teórica, Instituto de Cibernética Matemática y Física,
Calle E esq 15 No. 309, Vedado, La Habana 10400, Cuba
Instituto de Ciencias Nucleares, Universidad Nacional Autónoma de México,
A. P.70-543, 04510 C. México, México
aurora@icimaf.cu
Hugo Pérez Rojas
Departamento de Física Teórica, Instituto de Cibernética Matemática y Física,
Calle E esq 15 No. 309, Vedado, La Habana 10400, Cuba
hugo@icimaf.cu
December 30, 2023
================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
We study the equation of motion of neutral vector bosons bearing a magnetic moment (MM). The effective rest mass of vector bosons is a decreasing function of the magnetic field intensity. Consequently a diffuse condensation of the bosons appears below a critical value of the field. For typical values of densities and magnetic fields the magnetization is positive and the neutral boson system can maintain a magnetic field self-consistently. A discussion of the relevance in astrophysics is presented.
§ INTRODUCTION
There is a diversity of structures associated with a wide range of magnetic fields (10^-9-10^15 G) cohabiting in our Universe. Salient examples of magnetized structures are galaxies (radius 1.5 × 10^18 km) and compact objects (radius 20 km). Although some theories have been proposed to explain the origin of such magnetic fields, this issue is far from settled and is still under great debate.
As it is well known, the magnetic field modifies mainly the behavior of matter at the microscopic scale, but there are microscopic effects that lead to a sensitive variation in the macroscopic features.
The present rate of expansion of our Universe and the light-element abundances depend on the magnetic field, which places limits on its value in the early Universe <cit.>. On the other hand, the size and shape of compact objects depend on the magnetic field too <cit.>. There are also some phenomena on astrophysical scales that lack an explanation, such as the kicks and jets of pulsars, and magnetic fields could contribute to explaining them <cit.>. Furthermore, a magnetic field breaks the rotational symmetry, and this is reflected not only in the particle spectra but also in the energy-momentum tensor of the system. The latter becomes anisotropic, which implies a splitting of the pressure into components parallel and perpendicular to the magnetic field.
The aim of this work is to study the thermodynamical properties of a gas of neutral spin-one bosons bearing a magnetic moment in the presence of a constant magnetic field. Magnetization and Bose-Einstein condensation are discussed. Neutral vector bosons can be mesons or other paired fermions with total integer spin. These particles could be relevant components of, or participants in, astrophysical objects and phenomena. In particular, we are pursuing some insight into the nature of jets, typically streams of matter ejected from a compact object along its axis of rotation.
We will concern ourselves with the phenomenology of a gas of neutral spin-one vector bosons bearing a MM in a magnetized medium, disregarding the realistic conditions in which it may be realized. In particular, we consider a positronium gas, characterized by a mass of approximately 2m_e (m_e is the electron mass) and twice the electron magnetic moment (κ = 2μ_B, μ_B being the Bohr magneton).
The properties of a gas of charged vector bosons were discussed in Ref. <cit.> with the aim of building a model of jets. As we will see below, the ground state of neutral bosons has a form similar to that of the charged bosons discussed in <cit.>. However, the ground-state thermodynamical quantities differ in the neutral case from the charged one. In the latter case, the momentum component perpendicular to the magnetic field, p_⊥, is quantized, and the density of states becomes proportional to eB (2∫ d^3p →∑_n (2-δ_n0) eB/(2π)^2∫ dp_3). In the neutral case no component of the momentum is quantized, and the density of states is not proportional to the magnetic field.
Our paper is organized as follows. In Section <ref>, we present the equation of motion and spectrum of a neutral vector boson with MM. Section <ref> contains a derivation of the thermodynamical potential and other properties of the system, such as Bose-Einstein condensation and self-magnetization. The last section is devoted to conclusions.
§ EQUATION OF MOTION OF A NEUTRAL VECTOR BOSON BEARING AN MM
The Lagrangian density of a neutral spin-one particle with MM that moves in a magnetic field is
L = 1/2 U_μν^† (∂_μ U_ν-∂_ν U_μ) + 1/2 (∂_μ U_ν^†-∂_ν U_μ^†) U_μν - 1/2 U_μν^† U_μν + m^2 U_μ^† U_μ + i m κ(U_μ^† U_ν-U_ν^† U_μ) F_μν,
Eq. (<ref>) is an extension of the original Proca Lagrangian for spin-one particles that includes the interaction of the bosons with an external field <cit.>. The indices μ and ν run from 1 to 4, F^μν is the electromagnetic tensor, and U_μν, U_μ are independent field variables with equations <cit.>
∂_μ U_μν-m^2 U_ν+ 2i κ m U_μ F_μν=0,
U_μν = ∂_μ U_ν - ∂_ν U_μ.
In the momentum space the field equation is
((p_μ^2 + m^2)δ_μν -p_μ p_ν - 2 i κ m F_μν)U_μ = 0.
Then, the inverse boson propagator reads
D_μν^-1=((p_μ^2 + m^2)δ_μν-p_μ p_ν - 2 i κ m F_μν).
Starting from Eqs. (<ref>) and following the same procedure as Ref. <cit.>, we can obtain the generalized Sakata-Taketani equation for a six-component wave function
and the Hamiltonian <cit.> of the system
H = σ_3 m + (σ_3 + i σ_2) p^2/2 m -
i σ_2 (p·S)^2/m
-(σ_3 - i σ_2) κS·B,
where p=(p_⊥,p_3), p_⊥=√(p_1^2 + p_2^2), σ_i are the 2×2 Pauli matrices
σ_1= ( [ 0 1; 1 0 ]), iσ_2= ( [ 0 1; -1 0 ]), σ_3= ( [ 1 0; 0 -1 ]),
and S = {S_1,S_2,S_3}, where S_i are the 3×3 spin-1 matrices in a representation in which S_3 is diagonal,
S_1=1/√(2)( [ 0 1 0; 1 0 1; 0 1 0 ]), S_2=i/√(2)( [ 0 -1 0; 1 0 -1; 0 1 0 ]), S_3= ( [ 1 0 0; 0 0 0; 0 0 -1 ]).
The magnetic field is considered uniform and constant, pointing in the p_3 direction, B=Be_3.
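As a quick numerical cross-check of the Hamiltonian (<ref>), one can diagonalize the corresponding 6×6 matrix directly. The Python sketch below (with arbitrary test values and restricted to p_⊥=0) reproduces eigenvalues ±E for the three spin projections, in agreement with the dispersion relation given below:

```python
import numpy as np

m, kappa, B, p3 = 1.0, 0.45, 1.0, 0.7       # arbitrary test values
s3 = np.diag([1.0, -1.0])                    # sigma_3
is2 = np.array([[0.0, 1.0], [-1.0, 0.0]])    # i*sigma_2
S3 = np.diag([1.0, 0.0, -1.0])
I3 = np.eye(3)

pS2 = (p3 * S3) @ (p3 * S3)                  # (p.S)^2 for p_perp = 0
H = (np.kron(s3, I3) * m
     + np.kron(s3 + is2, I3) * p3**2 / (2 * m)
     - np.kron(is2, pS2) / m
     - np.kron(s3 - is2, kappa * B * S3))    # -(s3 - i*s2) kappa S.B

E_num = np.sort(np.linalg.eigvals(H).real)   # pairs +/-E for s = -1, 0, +1
E_ana = sorted(np.sqrt(p3**2 + m**2 - 2 * kappa * s * B * m) for s in (-1, 0, 1))
print(E_num)
print(E_ana)    # matches the positive branch of E_num
```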
The equations of motion for the momentum p⃗ and the position r⃗ can be obtained from:
∂p⃗/∂ t = i [H,p⃗]
∂r⃗/∂ t = i [H,r⃗]
and they read:
∂p⃗/∂ t=0⃗,
m ∂r⃗/∂ t= (σ_3 - i σ_2) p⃗ + i σ_2 [S⃗, p⃗. S⃗].
Here [a,b] = ab-ba is the commutator of a and b.
From Eq. (<ref>) it follows that the neutral bosons move freely in the direction parallel to the field as well as in the perpendicular one. This is a difference with respect to the charged vector boson case, in which the perpendicular component of the momentum is quantized <cit.>.
The eigenvalues of Eq.(<ref>) are
E(p_⊥,p_3, B)=√(p_3^2+p_⊥^2+m^2-2κ s B√(p_⊥^2+m^2)),
where s are the spin eigenvalues, s=0, ± 1. The effective rest mass of the neutral spin-one boson (s=1 and p_3=p_⊥=0) has the form
E(0, B)=√(m^2-2κ B m).
Eq. (<ref>) shows a decreasing behavior as the magnetic field increases, and a critical magnetic field is reached at B_c=m/2κ. Since m=2m_e and κ=2μ_B, B_c=m_e^2/e=4.41 × 10^13 G, which is the Schwinger critical field. We can define the effective magnetic moment as
d=-∂ E/∂ B=κ m/√(m^2-2mκ B),
As we can see from (<ref>), the system has a paramagnetic behavior, because d>0 and d increases with increasing magnetic field. It has a singularity at B=B_c.
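A small numerical illustration of Eqs. (<ref>) and (<ref>) for the positronium-like parameters used here is straightforward; writing b = B/B_c, one has E(0,B) = m√(1-b) and d = κ/√(1-b):

```python
import numpy as np

B_c = 4.41e13   # critical field m_e^2/e in Gauss, as in the text

def E0_over_m(b):
    # Effective rest mass in units of m: E(0,B)/m = sqrt(1 - B/B_c)
    return np.sqrt(1.0 - b)

def d_over_kappa(b):
    # Effective magnetic moment in units of kappa: d = kappa/sqrt(1 - B/B_c)
    return 1.0 / np.sqrt(1.0 - b)

for b in (0.0, 0.5, 0.9, 0.99):
    print(f"B = {b:4.2f} B_c ({b*B_c:.2e} G): "
          f"E0 = {E0_over_m(b):.3f} m, d = {d_over_kappa(b):5.2f} kappa")
# E0 -> 0 and d -> infinity as B -> B_c: the gap closes at the critical field
```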
§ THERMODYNAMICAL PROPERTIES OF A BOSON GAS
The general expression for the thermodynamical potential of a neutral vector boson gas has the form
Ω(B,μ,T)= 1/β[∑_p_4∫_-∞^∞p_⊥dp_⊥dp_3/(2π)^2ln D^-1(p^*)].
Here D^-1(p^*) is the neutral boson propagator given by Eq. (<ref>), β = 1/T denotes the inverse temperature, μ the boson chemical potential and p^*=(ip^4-μ,0,p_⊥,p_3).
After performing the Matsubara sum, expression (<ref>) becomes
Ω(B,μ,T)= 1/β[∫_0^∞p_⊥dp_⊥dp_3/(2π)^2[ ln f^-f^+ +β E] ].
The logarithmic terms are called the statistical contribution of bosons/antibosons; they depend on B, T and μ. The term β E yields the vacuum contribution, which is only B-dependent. For magnetic fields below B_c the leading term of the thermodynamical potential is the statistical one, so we neglect the vacuum term in what follows. The functions f^± are
f^±=1-e^-(E∓μ)β.
The density of bosons can be calculated as
N=-∂Ω/∂μ =∫_-∞^∞p_⊥dp_⊥dp_3/(2π)^2(n^+-n^-),
where n^± are the Bose-Einstein distribution of bosons and antibosons as
n^±=1/e^(E∓μ)β-1.
We focus on astrophysical scenarios, so in our approach we consider the "cold" boson gas with μ>T, which corresponds to the degenerate limit, and antibosons are neglected. Besides, we will concentrate on the study of the ground-state phase determined by Eq. (<ref>). We also consider that our system is one-dimensional (p_⊥=0); therefore the boson distribution is only a function of p_3, and the integral for the particle density takes the form
N=∫_-∞^∞dp_3/2π n^+(p_3).
For T→ 0, μ→ E(0,B), the particle distribution Eq. (<ref>) concentrates around p_3=0. In this limit the particle density Eq. (<ref>) can be approximated as
N ≃1/πβ∫_0^p_0dp_3/√(p_3^2+m^2-2 κ B m)-μ≃1/πβ∫_0^p_02 E(0,B) dp_3/p_3^2+ E(0,B)^2-μ^2,
N ≃1/2 β√(2 E(0,B)/-μ^').
Here we have defined μ^' = μ - E(0,B), and p_0 is some characteristic momentum such that p_0 ≫√(-2 E(0,B) μ^'). We expect that μ^'→ 0 when T → 0, because this is equivalent to the condensation condition μ→ E(0,B). An expression for μ^'(T) can be obtained from Eq. (<ref>):
μ^' = -E(0,B) T^2/2 N^2
From Eq.(<ref>) one can see that μ^' is a decreasing function of T, as occurs in the usual Bose-Einstein condensation.
By the use of Eq.(<ref>) we can approximate n^+(p_3) in a vicinity of p_3 =0 as
n^+(p_3) ≃2 E(0,B) T/p^2 +E(0,B)^2 - μ^2≃2 E(0,B) T/p^2 -2 E(0,B) μ^'
n^+(p_3) ≃ 2 N γ/p_3^2 + γ^2
where γ = √(- 2 E(0,B) μ^'). Eq. (<ref>) means that for p_3 ≈ 0, n^+(p_3) can be approximated by a Cauchy distribution centered at p_3=0. Now the equivalent of the T → 0 limit is γ→ 0,
lim_γ→ 0 n^+(p_3) = 2 π N δ(p_3),
and finally we have the following expression for the particle density N in the vicinity of p_3=0:
N≃1/2πlim_γ→ 0∫_-∞^∞ 2 N γ/p_3^2 + γ^2 dp_3 = N ∫_-∞^∞δ(p_3)dp_3.
In the limit T → 0, Eq. (<ref>) shows that all the bosons fall into the condensate, since the total density of bosons is carried by the δ(p_3) term; however, there is no critical temperature at which condensation starts, and it is reached essentially in the ground state. This condensation is called diffuse <cit.>.
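The approach of n^+(p_3) to a δ-function can be illustrated numerically; the sketch below (arbitrary units, with placeholder values of N and E(0,B)) shows the Lorentzian width γ = E(0,B)T/N shrinking with temperature while the recovered density stays fixed:

```python
import numpy as np

E0, N = 1.0, 1.0          # placeholder effective mass and density

for T in (1.0, 0.1, 0.01):
    mu_p = -E0 * T**2 / (2 * N**2)          # mu'(T) from the text
    gamma = np.sqrt(-2.0 * E0 * mu_p)       # Lorentzian width
    p = np.linspace(-200.0, 200.0, 400001)
    n = 2.0 * N * gamma / (p**2 + gamma**2)
    dp = p[1] - p[0]
    # N = (1/2pi) * integral of n+(p3) dp3, recovered to good accuracy
    print(f"T = {T}: gamma = {gamma:.3f}, N = {np.sum(n) * dp / (2*np.pi):.4f}")
```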
Fig. 1 shows the δ-like behavior around p_3∼ 0 of the boson density, Eq. (<ref>).
The left panel of Fig. 1 illustrates the density of bosons as a function of p_3 for a fixed temperature T=10^7 K and several values of the magnetic field; the curves move to the left as the magnetic field grows.
The right panel of Fig. 1 shows the density of bosons as a function of p_3 for a fixed magnetic field B=10^13 G and different values of the temperature; the curves shift to the left as the temperature decreases.
Eq. (<ref>) also allows us to compute the thermodynamical potential and the magnetization as
Ω = 1/β√(E(0,B)^2-μ^2),
M=-∂Ω/∂ B= κ m/E(0,B) N = N d.
In the left panel of Fig. 2, the magnetization (Eq. (<ref>)) is plotted as a function of the magnetic field for several values of the particle density.
We are interested in whether the system reaches the self-magnetization condition, considering H = B-4 π M with H=0 and solving the self-consistent equation B=4π M. This is a cubic equation due to the non-linear dependence of the magnetization on the field. In the right panel of Fig. 2 its three solutions have been plotted, but only one of them is physically meaningful. For one of the roots, the magnetic field is negative (see dotted line), while for another, it decreases with increasing density, reaching B_c when N goes to zero (dashed line); this solution implies that the magnetization also decreases with N, contrary to Eq. (<ref>). Both roots are nonphysical, and hence they must be discarded. Therefore, the only admissible solution of the self-magnetization equation is given by the solid line. The solution becomes complex for densities higher than N_c = 7.14 × 10^34 cm^-3, which corresponds to a magnetic field of 2/3× B_c. Since the magnetization is a positive quantity, the points of the solid line are the values of the self-maintained magnetic field, showing the ferromagnetic behavior of the boson gas.
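In dimensionless form the self-magnetization condition is convenient for numerical work: with b = B/B_c and s ≡ 4π N κ/B_c, the condition B = 4π M becomes b√(1-b) = s, i.e. the cubic b^2(1-b) = s^2. A minimal sketch (our own parametrization, for illustration):

```python
import numpy as np

def self_field(s):
    # Roots of -b^3 + b^2 - s^2 = 0, keeping real roots in [0, 1)
    roots = np.roots([-1.0, 1.0, 0.0, -s**2])
    real = roots[np.abs(roots.imag) < 1e-9].real
    return np.sort(real[(real >= 0.0) & (real < 1.0)])

for s in (0.05, 0.2, 0.38):
    print(f"s = {s}: b = B/B_c roots = {self_field(s)}")
# The smaller root grows with density and is the physical branch; real
# solutions disappear beyond s_max = 2/(3*sqrt(3)) ~ 0.385, where b = 2/3,
# matching the critical field (2/3)*B_c quoted above.
```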
§ CONCLUSIONS
We studied the equation of motion of a neutral vector boson with MM in the presence of a magnetic field. The spectrum is calculated starting from the generalized Sakata-Taketani Hamiltonian <cit.>.
The thermodynamical potential of the boson gas in the ground state is studied. The gas exhibits a "diffuse" Bose-Einstein condensation, characterized by the absence of a critical temperature for the phase transition and by the growing population of the ground state as the magnetic field increases or the temperature decreases.
The magnetization of the system is also calculated for the ground state; it is a positive quantity and grows with the field up to B_c, where it diverges. For particle densities under a critical value the system can maintain the magnetic field, because the self-magnetization equation has a positive solution. This phenomenon appears for values of densities and magnetic fields that are typical of compact objects, and it could be relevant for modelling jets as well as for a mechanism to sustain the strong magnetic field in compact objects. These models deserve a separate work, which is in progress.
Grasso199673
D. Grasso and H. R. Rubinstein, Physics Letters B 379, 73 (1996).
0954-3899-36-7-075202
R. G. Felipe and A. P. Martínez, Journal of Physics G: Nuclear and
Particle Physics 36, p. 075202 (2009).
MNL2:MNL2848
J. Charbonneau, K. Hoffman and J. Heyl, Monthly Notices of the Royal
Astronomical Society: Letters 404, L119 (2010).
Charbonneau:2009ax
J. Charbonneau and A. Zhitnitsky, JCAP 1008, p. 010 (2010).
Elizabeth
H. Perez Rojas, E. Rodriguez Querts, A. Perez Martinez, Conference Series (Quantum Relativistic electron gas expanding in one dimension), (2017).
PhysRev.131.2326
J. A. Young and S. A. Bludman, Phys. Rev. 131, 2326 (Sep 1963).
PhysRevD.89.121701
A. J. Silenko, Phys. Rev. D 89, p. 121701 (Jun 2014).
ROJAS1996148
H. Rojas, Physics Letters B 379, 148 (1996).
PEREZROJAS2000
H. Perez Rojas and L. Villegas-Lelovski, Brazilian Journal of Physics
30, 410 (06 2000).
Khalilov2001
V. R. Khalilov, Theoretical and Mathematical Physics 129, 1357
(2001).
|
http://arxiv.org/abs/1701.08062v1 | 20170127143113 | Binding energies and pairing gaps in semi-magic nuclei obtained using new regularized higher-order EDF generators | [
"K. Bennaceur",
"J. Dobaczewski",
"Y. Gao"
] | nucl-th | [
"nucl-th"
] |
^1Univ Lyon, Université Lyon 1, CNRS/IN2P3, IPNL, F-69622 Villeurbanne, France
^2Department of Physics, PO Box 35 (YFL), FI-40014 University of Jyväskylä, Finland
^3Helsinki Institute of Physics, P.O. Box 64, FI-00014 University of Helsinki, Finland
^4Department of Physics, University of York, Heslington, York YO10 5DD, United Kingdom
^5Institute of Theoretical Physics, Faculty of Physics, University of Warsaw, ul. Pasteura 5, PL-02-093 Warsaw, Poland
^6State Key Laboratory of Nuclear Physics and Technology, School of Physics, Peking University, Beijing 100871, China
We present results of the Hartree-Fock-Bogolyubov calculations
performed using nuclear energy density functionals based on
regularized functional generators at next-to-leading and
next-to-next-to-leading order. We discuss properties of binding
energies and pairing gaps determined in semi-magic spherical nuclei.
The results are compared with benchmark calculations performed for
the functional generator SLyMR0 and functional UNEDF0.
§ INTRODUCTION
A quest for energy density functionals (EDFs) that would precisely
and accurately describe a multitude of low-energy nuclear properties is
at present one of the most important research avenues in nuclear
physics. The problem has been addressed in quite a number of recent
studies,<cit.>
where various extensions of the standard EDFs, used for the last 60-odd
years, were proposed.
In this conference communication, we present results of calculations
performed for semi-magic nuclei across the mass chart, using the newly
developed EDFs based on the regularized higher-order generators with
pairing.<cit.> As discussed in Ref., the proposed
new parametrizations at next-to-leading order (NLO) REG2c.161026 and
next-to-next-to-leading order (N^2LO) REG4c.161026 correspond to a
fairly low effective mass and overestimate pairing strength.
Therefore, they are not yet good enough to warrant massive mass-table calculations. However, inexpensive calculations performed
for spherical semi-magic nuclei, which are reported on in this paper,
can constitute a useful illustration of the overall bulk properties
corresponding to the newly developed EDFs.
§ RESULTS
We performed the Hartree-Fock-Bogolyubov (HFB) calculations for all
bound semi-magic nuclei across the mass chart, using the NLO
REG2c.161026 and N^2LO REG4c.161026 EDFs,<cit.> SLyMR0,<cit.> and UNEDF0 <cit.>.
For UNEDF0, the Lipkin-Nogami (LN) method was used to account for approximate particle-number restoration, as in Ref.
For the finite-range NLO and N^2LO generators, we solved the
non-local self-consistent equations using the newly developed code
finres_4 (Finite-Range Self-consistent Spherical
Space-coordinate Solver),<cit.> which is based on the method
proposed by Hooverman.<cit.> For the zero-range generator
SLyMR0, we obtained the solutions using the spherical
solver lenteur,<cit.> and for the
quasi-local functional UNEDF0, using the code
hosphe,<cit.> similarly as in
Ref.
For NLO, N^2LO, and UNEDF0, where the fit covariance matrices are
known, we determined statistical uncertainties of all calculated
observables according to the methodology presented in
Ref. For SLyMR0, only the values of the observables
were determined.
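For reference, the propagated statistical error of an observable O amounts to σ_O^2 = J^T C J, where C is the covariance matrix of the fitted parameters and J is the vector of sensitivities ∂O/∂p_i. A minimal Python sketch of this linear propagation (the toy observable and covariance below are hypothetical placeholders, not actual EDF ingredients):

```python
import numpy as np

def observable_uncertainty(O, p0, C, eps=1e-4):
    # sigma_O = sqrt(J^T C J) with J computed by central differences
    J = np.empty_like(p0)
    for i in range(len(p0)):
        dp = np.zeros_like(p0)
        dp[i] = eps * max(abs(p0[i]), 1.0)
        J[i] = (O(p0 + dp) - O(p0 - dp)) / (2.0 * dp[i])
    return np.sqrt(J @ C @ J)

O = lambda p: -1600.0 + 3.0 * p[0] - 0.5 * p[1] ** 2   # toy "binding energy"
p0 = np.array([1.0, 2.0])                               # toy parameter set
C = np.array([[0.04, 0.01], [0.01, 0.09]])              # hypothetical covariance
print(observable_uncertainty(O, p0, C))                 # ~0.775
```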
In Figs. <ref> and <ref>, for nuclei where the
binding energies are known from experiment or
systematics,<cit.> we show the binding-energy residuals
determined for the four studied EDFs. We note that for SLyMR0,
the obtained differences between theory and experiment are
significantly larger than those obtained for the other three EDFs, and
therefore, in the figures they were divided by a factor of five.
We also note that NLO, N^2LO, and SLyMR0 EDFs are
built as exact averages of two-body or many-body generators, both in
the particle-hole and pairing channels, and therefore, they are suitable
without ambiguity for beyond-mean-field and symmetry-restoration calculations.
On the other hand, EDF UNEDF0 is built as an average of density-dependent
two-body Skyrme-type generators that are different in particle-hole
and pairing channels and with some omitted terms in the particle-hole channel.
Although the pattern of comparison with data is fairly different
among all four studied EDFs, we clearly see that the new NLO and
N^2LO EDFs describe data better than SLyMR0 (recall the scaling
factor of five used for SLyMR0). However, the results obtained at NLO and N^2LO are still noticeably worse than those obtained for the standard Skyrme-like EDF UNEDF0. In particular, for the NLO and N^2LO EDFs, we see conspicuous “arches” of residuals between the doubly magic nuclei. Usually this feature of calculated ground-state energies is attributed to a low effective mass; however, we note here that for SLyMR0, which has a low effective mass too, no such effect is seen. We also note that the statistical uncertainties obtained for
UNEDF0 significantly increase when going towards neutron-rich
nuclei,<cit.> whereas those for the NLO and N^2LO EDFs
depend on the neutron excess much less.
In Figs. <ref> and <ref>, we show neutron and
proton pairing gaps calculated as pairing fields averaged with
density matrices,<cit.> and for the LN method (UNEDF0),
corrected by adding the corresponding λ_2 LN
parameters.<cit.> As discussed in Ref., for
the NLO and N^2LO EDFs, pairing correlations were adjusted to
values largely overestimating experimental data. This feature is
clearly visible in the figures, and should certainly be improved upon
in a future planned adjustment of the parameters. We also see that the HFB
results shown for the NLO, N^2LO, and SLyMR0 EDFs exhibit
unphysical breaking of pairing at doubly magic gaps, which is a
feature related to the lack of particle-number restoration, and thus
is absent in the HFB+LN results shown for UNEDF0.
§ CONCLUSIONS
In this article we presented the binding-energy residuals and average
pairing gaps obtained for semi-magic nuclei using two recently
adjusted finite-range pseudopotentials at NLO and
N^2LO, as well as the EDF UNEDF0 and
zero-range pseudopotential SLyMR0. For all cases but SLyMR0, the
propagated statistical errors of observables were calculated.
For the set of nuclei considered here, and for finite-range pseudopotentials, the average deviations between
experimental and calculated binding energies
are larger than those obtained for EDF UNEDF0, but they are significantly smaller than the ones
obtained for the zero-range pseudopotential SLyMR0. For the NLO and N^2LO EDFs, the typical
arches that appear in the binding-energy residuals between major
shells might be
due to their very low effective masses (close to 0.4). However, the fact that
similar arches do not appear for SLyMR0, which has an effective mass
of 0.47, questions this conjecture.
Based on the results obtained for binding-energy residuals and
average pairing gaps, there was no clear improvement when going from
the finite-range pseudopotential at NLO to the one at N^2LO. This
does not necessarily mean that the additional degrees of freedom
introduced at N^2LO are not relevant, but most likely reflects the
fact that the penalty function based on spherical doubly magic nuclei
did not allow for properly constraining them.
The next step in the development of this family of pseudopotentials
will consist in increasing the effective mass. The obvious way to do
this is to introduce three-body terms in the pseudopotential. The
present computational resources restrict this extension to the
introduction of zero-range three-body terms in the spirit of the work
of Onishi and Negele <cit.> and will require the use of
a cut-off to prevent the divergence of the energy. The work along
these lines is in progress.
§ ACKNOWLEDGMENTS
This work was supported by the Academy of Finland, the University of
Jyväskylä within the FIDIPRO program, by the CNRS/IN2P3
through PICS No. 6949.
(Car08)
B. G. Carlsson, J. Dobaczewski and M. Kortelainen, Local nuclear energy density
functional at next-to-next-to-next-to-leading order, Phys. Rev. C 78, p. 044326 (2008).
(Zal08)
M. Zalewski, J. Dobaczewski, W. Satuła and T. R. Werner, Spin-orbit and
tensor mean-field effects on spin-orbit splitting including self-consistent
core polarizations, Phys. Rev. C 77, p. 024316 (Feb 2008).
(Rai11)
F. Raimondi, B. G. Carlsson and J. Dobaczewski, Effective pseudopotential for
energy density functionals with higher-order derivatives, Phys. Rev. C
83, p. 054311 (2011).
(Dob12)
J. Dobaczewski, K. Bennaceur and F. Raimondi, Effective theory for low-energy
nuclear energy density functionals, J. Phys. G 39, p. 125103
(2012).
(Rai14)
F. Raimondi, K. Bennaceur and J. Dobaczewski, Nonlocal energy density
functionals for low-energy nuclear structure, J. Phys. G: Nucl. Part.
Phys. 41, p. 055112 (2014).
(Ben14a)
K. Bennaceur, J. Dobaczewski and F. Raimondi, Extended skyrme pseudopotential
deduced from infinite nuclear matter properties, EPJ Web of Conf. 66, p. 02031 (2014).
(Sad13)
J. Sadoudi, M. Bender, K. Bennaceur, D. Davesne, R. Jodon and T. Duguet, Skyrme
pseudo-potential-based EDF parametrization for spuriosity-free MR EDF
calculations, Phys. Scr. T154, p. 014013 (2013).
(Sad13b)
J. Sadoudi, T. Duguet, J. Meyer and M. Bender, Skyrme functional from a
three-body pseudopotential of second order in gradients: Formalism for
central terms, Phys. Rev. C 88, p. 064326 (2013).
(Dav15)
D. Davesne, J. Navarro, P. Becker, R. Jodon, J. Meyer and A. Pastore, Extended
skyrme pseudopotential deduced from infinite nuclear matter properties, Phys. Rev. C 91, p. 064303 (Jun 2015).
(Ben17)
K. Bennaceur, A. Idini, J. Dobaczewski, P. Dobaczewski, M. Kortelainen and
F. Raimondi, arXiv:1611.09311.
(Kor10b)
M. Kortelainen, T. Lesinski, J. Moré, W. Nazarewicz, J. Sarich, N. Schunck,
M. V. Stoitsov and S. Wild, Nuclear energy density optimization, Phys.
Rev. C 82, p. 024313 (Aug 2010).
(Sto03)
M. V. Stoitsov, J. Dobaczewski, W. Nazarewicz, S. Pittel and D. J. Dean,
Systematic study of deformed nuclei at the drip lines and beyond, Phys. Rev. C 68, p. 054312 (2003).
[Ben17a]
Bennaceur K. et al. 2017, to be submitted to Computer Physics
Communications.
(Hoo72)
R. H. Hooverman, A technique for numerical solution of the Schroedinger
equation with non-local potentials, Nuclear Physics A 189, 155
(1972).
(lenteur)
K. Bennaceur, lenteur HFB code unpublished.
(Car10b)
B. Carlsson, J. Dobaczewski, J. Toivanen and P. Veselý, Solution of
self-consistent equations for the N3LO nuclear energy density functional in
spherical symmetry. The program hosphe (v1.02), Comput. Phys. Comm.
181, p. 1641 (2010).
[Car13b]
B. G. Carlsson, J. Toivanen, P. Veselý, and Y. Gao, to be published.
(Gao13)
Y. Gao, J. Dobaczewski, M. Kortelainen, J. Toivanen and D. Tarpanov,
Propagation of uncertainties in the skyrme energy-density-functional model,
Phys. Rev. C 87, p. 034324 (2013).
(Wan12)
M. Wang, G. Audi, A. H. Wapstra, F. G. Kondev, M. MacCormick, X. Xu and
B. Pfeiffer, The AME2012 atomic mass evaluation (ii). tables, graphs and
references, Chin. Phys. C 36, 1603 (2012).
(Dob14)
J. Dobaczewski, W. Nazarewicz and P.-G. Reinhard, Error Estimates of
Theoretical Models: a Guide, J. Phys. G: Nucl. Part. Phys. 41, p. 074001 (2014).
(Sat98)
W. Satuła, J. Dobaczewski and W. Nazarewicz, Odd-even staggering of nuclear
masses: Pairing or shape effect?, Phys. Rev. Lett. 81, 3599
(1998).
ONISHI1978336
N. Onishi and J. Negele, Two-body and three-body effective interactions in
nuclei, Nuclear Physics A 301, 336 (1978).
|
http://arxiv.org/abs/1701.07887v1 | 20170126215220 | A Comparison of Two Methods for Estimating Black Hole Spin in Active Galactic Nuclei | [
"Daniel M. Capellupo",
"Gaylor Wafflard-Fernandez",
"Daryl Haggard"
] | astro-ph.GA | [
"astro-ph.GA",
"astro-ph.HE"
] |
Department of Physics, McGill University, Montreal, QC, H3A 2T8, Canada
McGill Space Institute, McGill University, Montreal, QC, H3A 2A7, Canada
Department of Physics, Université Paris-Sud, Orsay, France
Department of Physics, McGill University, Montreal, QC, H3A 2T8, Canada
McGill Space Institute, McGill University, Montreal, QC, H3A 2A7, Canada
danielc@physics.mcgill.ca
Angular momentum, or spin, is a fundamental property of black holes (BHs), yet
it is much more difficult to estimate than mass or accretion rate (for actively
accreting systems). In recent years, high-quality X-ray observations have
allowed for detailed measurements of the Fe Kα emission line, where
relativistic line broadening allows constraints on the spin parameter (the
X-ray reflection method). Another technique uses accretion disk models to fit
the AGN continuum emission (the continuum-fitting, or CF, method). Although
each technique has model-dependent uncertainties, these are the best empirical
tools currently available and should be vetted in systems where both techniques
can be applied. A detailed comparison of the two methods is also useful because
neither method can be applied to all AGN. The X-ray reflection technique
targets mostly local (z ≲ 0.1) systems, while the CF method can be
applied at higher redshift, up to and beyond the peak of AGN activity and
growth. Here, we apply the CF method to two AGN with X-ray reflection
measurements. For both the high-mass AGN, H1821+643, and the Seyfert 1,
NGC 3783, we find a range in spin parameter consistent with the X-ray
reflection measurements. However, the near-maximal spin favored by the
reflection method for NGC 3783 is more probable if we add a disk wind to the
model. Refinement of these techniques, together with improved X-ray
measurements and tighter BH mass constraints, will permit this comparison in a
larger sample of AGN and increase our confidence in these spin estimation
techniques.
§ INTRODUCTION
Actively accreting black holes have three fundamental properties – mass
(M_BH), accretion rate (Ṁ), and angular momentum. Measuring
M_BH for active galactic nuclei at all redshifts has become possible due to
reverberation mapping of low-redshift AGN and the extrapolation of those
results to high redshifts, via relations between M_BH and the widths of
broad emission lines and the AGN continuum luminosity. Accretion rate estimates
have also been achieved for many AGN, usually via the Eddington ratio,
L/L_Edd.
The angular momentum, or spin (a_*), of active BHs is more elusive, as it
requires probing the region near the inner edge of the accretion disk (AD). Yet
measurements of spin and spin evolution would provide valuable clues to the
accretion history of active BHs and perhaps the evolution of the AGN and host
galaxies themselves.
At present, there are two primary methods for constraining the spin parameters
of actively accreting BHs: (1) measuring the Fe Kα emission line and/or
a soft X-ray excess that some attribute to relativistic reflection
<cit.>, and (2) fitting the AGN continuum
emission (CF) <cit.>. There are significant
advantages and drawbacks to each method.
The Fe Kα method is based on relativistic X-ray reflection. It does not
require prior knowledge of M_BH, the distance to the source, or the
inclination of the disk, whereas these are all necessary ingredients for the CF
method. The main drawback, however, is that a very high-quality X-ray spectrum
is required to properly model the continuum emission and the Fe-Kα
emission line, severely limiting the number of sources for which current
technology allows a spin measurement. As a result, most AGN with reflection
measurements are at a redshift less than 0.1. Furthermore, the Fe Kα
emission line is present in just ∼40% of bright, nearby type I AGN
<cit.>, so some spin estimates are based on modeling just a
soft X-ray excess <cit.>.
The CF method, on the other hand, can be applied to any AGN where the continuum
emission can be measured. This vastly increases the number of AGN for which a
spin measurement can be made and has already been applied out to a redshift of
∼1.5 <cit.>. The primary drawback is that wide
wavelength coverage, sometimes exceeding the capabilities of a single
observatory, is required to properly measure the shape of the SED. Furthermore,
this method cannot be applied effectively if the peak of the AD spectrum occurs
in a wavelength regime inaccessible to current observatories, e.g., the
extreme UV (where many AGN spectra do indeed peak). This method generally
assumes a thin AD model, based on <cit.>.
Recent work has directly cast doubt on the X-ray reflection method.
<cit.> find that the soft X-ray excess that some attribute to
relativistic reflection is more likely due to warm Comptonization. Similarly,
<cit.> is able to fit the Fe Kα emission line for one of the
AGN with an X-ray reflection spin measurement without invoking relativistic
reflection. For the CF method, while the standard thin AD model has been
successful in fitting the UV-optical SEDs of many AGN
<cit.>, other work has found that the AGN
SED can be fit with the combination of a thermal disk component and a warm
Comptonization component <cit.>, indicating the possibility of
greater complexity in the continuum emission.
With these two methods now available and actively in use for the estimation
of a_* in AGN, it is time to investigate whether these two methods give
consistent results when applied to the same AGN. This is especially important
given the uncertainties in both techniques and because neither method can probe
the full AGN population.
In this work, we compare the X-ray reflection and CF methods for two nearby AGN
– H1821+643 and NGC 3783. Ours is among the first attempts to make this
comparison <cit.>. Both targets have a published spin
estimate from the reflection method. We perform the CF analysis and compare the
results in detail. In 2, we describe how we selected sources for this study
and our search for appropriate archival data. In 3, we describe the models
and CF procedure (based on ). 4 and 5
describe our application of the CF method to the two AGN, and we conclude in
6 with a discussion of our results and how the reflection and CF method
compare for these two case studies. We assume a ΛCDM model with
Ω_Λ=0.7, Ω_m=0.3, and
H_0=70 km s^-1 Mpc^-1.
§ SAMPLE SELECTION AND DATA SOURCES
According to <cit.>, there are currently 25 AGN with spin
estimates from the X-ray reflection method. We use this list as a starting
point to search for archival data to which the CF method can be applied.
The CF method is most effective when the “turnover” in the AD spectrum is
probed. This turnover occurs at shorter wavelengths for smaller black hole
masses. We therefore look first for existing high-quality UV spectroscopic
observations of these AGN. Via the MAST web portal[<https://archive.stsci.edu/>],
we identify four AGN with high-level data products for Hubble Space Telescope
(HST) Faint Object Spectrograph (FOS) observations <cit.>: Fairall 9,
NGC 3783, NGC 4151, and H1821+643.
While the FOS spectrum is sufficient for applying the CF method for H1821+643,
data at even shorter wavelengths is required for the lower–M_BH Seyfert
galaxies. We seek quasi-simultaneous data, and, for NGC 3783, we identify
observations from ROSAT – taken on 1992 July 23, just four days prior to the
FOS observation on 1992 July 27 – that probe the appropriate wavelengths
<cit.>. Hence we proceed with two objects, H1821+643 and NGC 3783,
for our detailed spin comparison.
The FOS spectra for H1821+643 and NGC 3783 are focused on the nucleus of the
galaxy. For H1821+643, we verify that the FOS spectrum (shown in
Fig. <ref>) is dominated by AGN emission based on the broad-band star
formation SED fit in <cit.>. Similarly for NGC 3783, the spectrum is
at short enough wavelengths that the host galaxy contribution should be
negligible <cit.>. Therefore, we do not correct the FOS
spectra for stellar emission.
The only correction we make to the HST data is to divide out the Galactic
extinction, using the <cit.> extinction law and the
<cit.> recalibration of the <cit.> maps.
The ROSAT data have been analyzed <cit.>, and we
make no further corrections in this work.
§ ACCRETION DISK MODELS AND BAYESIAN ROUTINE
To apply the CF method, a model is required that can make specific predictions
for the emitted radiation at each wavelength. Standard thin AD theory
<cit.> has been used for several decades to describe AGN continuum
emission. Newer models use this framework, but incorporate general relativistic
corrections, comptonization in the disk atmosphere, and even disk winds
<cit.>. Here we adopt the numerical code
described in <cit.>, assuming a viscosity parameter (α) of 0.1.
The shape and luminosity of the thin AD spectrum is mainly set by M_BH,
Ṁ, a_*, and the inclination of the disk to our line-of-sight.
If we want to constrain a_*, prior knowledge of the other parameters is
necessary as the observed SED is not enough to break the parameter degeneracy
of the models, where different combinations of these parameters can yield
similar SED shapes. Additionally, any intrinsic reddening in the AGN host
galaxy will affect the observed SED shape.
We therefore adopt a Bayesian approach that takes a large grid of models –
with varying values of M_BH, Ṁ, a_*, inclination, and reddening
– and maximizes the probability that any given model is a good representation
of the data, while penalizing those models that are not consistent with the
priors, which we establish for M_BH and Ṁ
<cit.>. This routine calculates a χ^2 value for
each model, using continuum windows along the observed SED.
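Schematically, the scoring step can be summarized in a few lines of Python. The structure below is a minimal sketch of the Bayesian grid search, not the production code; the prior values quoted are those adopted for H1821+643 (Table <ref>), while the grid, the SED points, and the fake spin dependence of the model flux are placeholders:

```python
import numpy as np

def log_posterior(model_flux, obs_flux, obs_err, logM, logMdot,
                  logM_obs=9.4, sig_M=0.3, logMdot_obs=0.48, sig_Mdot=0.2):
    # chi^2 over the continuum windows ...
    chi2 = np.sum(((model_flux - obs_flux) / obs_err) ** 2)
    # ... plus Gaussian penalties from the priors on log M_BH and log Mdot
    prior = ((logM - logM_obs) / sig_M) ** 2 \
          + ((logMdot - logMdot_obs) / sig_Mdot) ** 2
    return -0.5 * (chi2 + prior)

spins = np.linspace(-1.0, 0.998, 11)
logMs, logMdots = np.linspace(9.0, 9.8, 5), np.linspace(0.0, 1.0, 5)
obs = np.array([1.0, 0.9, 0.7]); err = 0.05 * obs       # synthetic SED points
logP = np.array([[[log_posterior(obs * (1 + 0.05 * a), obs, err, m, md)
                   for md in logMdots] for m in logMs] for a in spins])
P = np.exp(logP - logP.max())
P_spin = P.sum(axis=(1, 2)); P_spin /= P_spin.sum()     # marginal P(a_*)
```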
For the prior on M_BH, the reverberation mapping technique has been used to
obtain M_BH for nearby AGN <cit.>, and these results
have been extended to other AGN, using the width of the broad emission lines
and the continuum luminosity
<cit.>. A prior on
Ṁ can be estimated using M_BH and a measurement of the continuum
luminosity at longer (i.e., optical or near-infrared) wavelengths, assuming
the canonical power law, L_ν∝ν^1/3
<cit.>.
For the disk inclination, the only constraint we have is that our sample
contains type-1 AGN, so we can consider only inclinations where
cos θ > 0.5. For intrinsic reddening, to limit the number of free
parameters, we use a simple power-law curve, where
A(λ)=A_oλ^-1 mag. We consider values of A_V ranging from
0.0 to 0.50 mag.
§ H1821+643
H1821+643 is a brightest cluster galaxy (BCG) hosting a luminous AGN at
z∼0.297. There are no direct reverberation mapping measurements for
H1821+643, but there have been several attempts to obtain M_BH via other
methods. These estimates range from ∼1.2 to 6 × 10^9
(; R14), and there are theoretical arguments that
the mass could be as high as 3 × 10^10 <cit.>. We
adopt the most recent `single-epoch' measurement using the Hβ emission
line, M_BH = 2.5 × 10^9, from <cit.>, and we use
their measurement of
log λ L_λ(5100Å) = 46.1 ergs s^-1 for
calculating Ṁ. We adopt errors of 0.3 and 0.2 dex, respectively, for
M_BH and Ṁ <cit.>.
Table: Model Parameters and Results

Object       | log M_BH^obs (M_⊙) | log Ṁ^obs (M_⊙ yr^-1) | L/L_Edd             | cos θ            | A_V (mag)        | a_*^CF        | a_*^ref
H1821+643    | 9.4                | 0.48                   | 0.14^+1.8_-0.11     | 0.85^+0.15_-0.09 | 0.12^+0.15_-0.12 | 0.5^+0.5_-0.4 | ≥0.40 (a)
NGC 3783     | 7.47               | -1.9                   | 0.020^+0.096_-0.014 | 0.89^+0.11_-0.09 | 0.17^+0.11_-0.09 | 0.2^+0.7_-0.9 | ≥0.88 (b)
NGC 3783 (c) | 7.47               | -1.9                   | 0.032^+0.15_-0.018  | 0.90^+0.10_-0.09 | 0.09^+0.09_-0.06 | 0.5^+0.5_-0.4 |

(a) R14; (b) B11; (c) CF with disc wind.
The best-fit (i.e., the most probable) model is presented in
Fig. <ref>, and the full results are shown as probability contours in
Fig. <ref>. From Fig. <ref>, it is clear that there is a
strong preference for a large, positive spin parameter.
In their analysis of the X-ray spectrum of H1821+643, R14 obtain both a
constraint on the spin parameter and a constraint on L/L_Edd and the
inclination. Applying these constraints to our CF routine, we obtain a similar
probability distribution along the spin parameter axis as we did originally
without these constraints.
§ NGC 3783
NGC 3783 is a well-studied Seyfert 1, SBa galaxy at z∼0.009. The
reverberation mapping technique has been applied to NGC 3783, giving
M_BH = 2.98 ± 0.54 × 10^7 M_⊙, with a corresponding continuum
luminosity,
log λ L_λ(5100Å) = 43.26 ± 0.04 ergs s^-1,
which we use to estimate Ṁ.
Because NGC 3783 is in a lower M_BH regime than H1821+643, the peak of the
AD emission is in the extreme UV, a regime where we generally lack
observations. We can discriminate between different spin parameters only in the
soft X-ray, where models with the highest spin parameters peak for lower-mass
BHs.
NGC 3783 has a complex X-ray spectrum, with warm absorbers and a soft excess
that appears and disappears <cit.>. We use ROSAT X-ray data (see
<ref>), in addition to the FOS data, to apply the CF method to
NGC 3783. We use the 1992 July 23 ROSAT observation, in particular, because it
is nearly contemporaneous with the FOS observation, and it extends to slightly
lower energy (down to 0.1 keV) than more recent X-ray observations with Chandra
or XMM Newton. T93 fits the ROSAT data with several different power-laws based
on different absorption models, ranging from a simple power-law model with
Γ = 2.22 to a warm absorber model with Γ = 2.77^+0.45_-0.31
(which is similar to the value of Γ = 2.7 ± 0.7 found by <cit.>). T93 also present a model with Γ∼ 4.7, which
is much higher than other values in the literature, so we do not include it in
our analysis.
§.§ Applying the CF Method with an X-ray Upper-limit
A difficulty with using the X-ray spectrum of an AGN for the CF method is that
there is a known power-law component at X-ray wavelengths of unknown origin, in
addition to possible emission from the AD. Hence, the X-ray data provide only
an upper limit on the AD emission.
We therefore first alter our CF method to search through our model parameter
space for the models with the highest spin parameter that give both a
satisfactory fit to the FOS spectrum (χ^2 ≤ 3) and do not exceed the
X-ray flux at 0.1 keV from the power-law fits to the ROSAT data. To be
conservative in our upper limit, we adopt the warm absorber model power law
(Γ = 2.77^+0.45_-0.31) from T93.
We find models spanning the full range in spin parameter, including maximum
spin, that can fit within the upper limit from the T93 warm absorber model
power-law for the ROSAT data, as long as M_BH is at least as high as the
<cit.> M_BH estimate (see, for example, the purple curve in
Fig. <ref>).
§.§ Applying the CF Method with a Modified X-ray Flux
Even for a maximally spinning black hole, the thin AD emission does not
directly contribute to the hard X-ray band (i.e., above 2 keV; see
Fig. <ref>). Hard X-ray observations of NGC 3783 give a less steep
power law than in the soft X-ray band. For example, T93 find
Γ = 2.14^+0.24_-0.26 when applying their warm absorber model to data
from EXOSAT. If we assume that the excess emission indicated by a steeper
powerlaw in the soft X-ray band is due to AD emission, we can subtract the hard
X-ray powerlaw from the soft powerlaw at 0.1 keV to determine a continuum point
for our regular Bayesian CF procedure.
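A minimal sketch of this subtraction (normalizations are hypothetical: for illustration both power laws are pinned to a common flux at a 2 keV pivot, whereas in practice they come from the fits to the ROSAT and EXOSAT data):

```python
import numpy as np

gamma_soft, gamma_hard = 2.77, 2.14   # photon indices from T93
E_pivot, E_eval = 2.0, 0.1            # keV

def energy_flux(E, gamma, f_pivot=1.0):
    # Energy flux density scales as E^(1 - gamma) for photon index gamma
    return f_pivot * (E / E_pivot) ** (1.0 - gamma)

f_soft = energy_flux(E_eval, gamma_soft)
f_hard = energy_flux(E_eval, gamma_hard)
excess = f_soft - f_hard   # candidate AD continuum point at 0.1 keV
print(f_soft, f_hard, excess)   # ~201, ~30, ~170 in pivot-flux units
```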
We use a value of 0.1 dex for the error on M_BH from <cit.>.
The results of the CF routine are presented in the left panel of
Fig. <ref>, and we find a median a_* ≃ 0.2^+0.7_-0.9.
Using the X-ray reflection method, <cit.>
determine a spin parameter a_* ≥ 0.98 at 90% confidence and a_* ≥ 0.88
at 99% confidence (indicated by horizontal dotted and solid lines in
Fig. <ref>).
§.§ Applying the CF Method with an AD Wind
NGC 3783 is known to have a warm absorber in its X-ray spectrum (T93), i.e. an
outflow often presumed to originate from the AD of the AGN
<cit.>. If this is the case for NGC 3783, then the thin AD
model must be modified, as the accretion rate would be reduced throughout the
disk as material is ejected.
The <cit.> thin AD code provides the option of adding a disk wind to
the model. We therefore rerun the CF routine using a model with a self-similar
disk wind, where the mass outflow rate per decade of radius is constant. The
mass outflow rate for NGC 3783 has been estimated to be ≳160 times the
accretion rate; however, much of this outflowing gas may come from beyond the
accretion disk <cit.>. In the absence of an empirical
estimate of the mass outflow rate from the disk itself, we illustrate the
affect of a massive disk wind by choosing a mass accretion rate at the outer
part of the disc equal to three times the accretion rate at the innermost
stable circular orbit (ISCO). The results are presented in the right panel of
Fig. <ref>.
The main difference between these results and the results without the disk wind
is that lower spin parameters (a_* < 0) are much less probable in the disk
wind scenario. This arises because the disk wind reduces the accretion rate in
the inner part of the disk and thus suppresses the luminosity at short
wavelengths. Furthermore, while there is a high probability of a_* ≥ 0.88
both with and without a disk wind, there is clearly a lower probability of
having a_* ≥ 0.98 if there is no disk wind (there is a factor of ∼1.6
difference in radiative efficiency between these two spin parameters). There is
also a positive correlation between the amount of intrinsic reddening and
a_*, with a_* ≥ 0.88 ruled out if there is close to zero reddening.
§ DISCUSSION
Our aim in this work is to compare the derived spin parameters for the X-ray
reflection and CF techniques for two “case study” AGN. Table <ref>
summarizes the results of the two methods, including values for L/L_Edd,
the disk inclination (θ), and instrinsic reddening, as derived from the
CF method.
For H1821+643, a bright AGN with M_BH∼ 2.5 × 10^9, R14 found
a_* ≳ 0.4 using the reflection method. For the CF analysis, the HST FOS
spectrum alone is sufficient, and while we do not obtain a very precise
constraint on a_*, we find a strong probability of a spin parameter that
exceeds the lower limit from R14, giving consistent results between the
reflection and the CF method. We emphasize here that R14 do not clearly detect
an Fe line, but instead fit excess continuum emission in the soft X-ray. For
some AGN, physical processes other than relativistic reflection are the more
likely cause of this soft excess <cit.>,
making this reflection spin measurement a tentative one (see also 1). For the
CF method, from the posterior probability distribution, it is clear that if
M_BH is higher, then a_* would be constrained to the highest allowed
values. Whereas, if M_BH is any lower, we would be unable to obtain a
meaningful constraint on a_*.
For NGC 3783, the FOS spectrum lies along the power-law portion of the thin AD
model spectrum, and only in the soft X-ray regime can models with different
a_* be distinguished. Fortunately, there is nearly contemporaneous FOS and
ROSAT data for NGC 3783. However, the X-ray data includes the known X-ray
power-law emission that likely originates from above the AD (often called the
“corona”). Using the X-ray flux as an upper-limit, we find that as long as
M_BH is at least as high as the observed M_BH, any spin parameter could
fit the data.
Since there are other components besides the AD emission in the X-ray, if we
assume that just the excess emission indicated by the steeper powerlaw slope in
the soft X-ray, compared to in the hard X-ray, is due to the AD itself,
applying the CF method to NGC 3783 gives a high probability for a high spin
parameter, consistent with the 99% confidence lower limit from relativistic
reflection (B11). However, there is a low probability of a_* exceeding the
90% confidence lower limit from B11, unless we include a disk wind in the AD
model.
The results of the CF method are, in general, consistent with the results of
the reflection method for the two AGN studied here. In particular, the
agreement is improved for NGC 3783 if we assume a disc wind, which we include
based on the existence of a warm absorber in the X-ray spectrum. The disk wind
analysis, however, is tentative because it is unknown how much, if any, of the
outflow originates from the inner part of the disk
<cit.>. If the outflow originates further out and
therefore does not suppress the short wavelength thin AD emission, there is a
slight tension between the two methods, as the reflection method suggests a
slightly higher spin parameter than the CF method without a disk wind. We also
find that, without a disk wind, the highest spins are most probable for A_V
between 0.2 and 0.3 mag. While these reddening values are generally consistent
with the constraints from broad emission line measurements for NGC 3783
<cit.>, if the reddening is actually closer
to 0.1 mag, then there is even greater tension between the reflection and
CF results for a_*. <cit.> and <cit.> similarly find that the
CF method suggests lower spin parameters for narrow-line Seyfert 1s than the
nearly maximal spin typically found for this AGN subclass via X-ray reflection.
Our study highlights one particular strength of the X-ray reflection method for
nearby Seyfert galaxies. For NGC 3783, with a BH mass of ∼10^7 M_⊙,
the inability to probe the extreme UV prevents us from obtaining a very precise
estimate of a_*. However, we point out that recent work by <cit.>
casts doubt on whether the Fe Kα line gives any information on a_* for
one of the AGN in the reflection sample (see also 1).
Nearly half (12) of the 25 AGN with spin measurements from the reflection
method have a_* > 0.9 <cit.>. Given that the CF method suggests
lower spin for two cases with near-maximal reflection spin estimates
(1H 0707-495, considered previously, and NGC 3783, presented here), these high-spin
cases would be good candidates for further comparisons between the reflection
and CF methods, especially those with even lower M_BH than NGC 3783, whose
AD SEDs would peak further into the soft X-ray. There is also a new method
proposed by <cit.>, based on microlensing, that could be included in
future comparisons of spin estimation techniques.
As more and better X-ray measurements allow the reflection sample to grow and
as better constraints on M_BH <cit.> allow the
CF method to more precisely determine a_* for larger samples of AGN, there
will be a larger population where both methods can be properly applied and
compared. If such comparisons yield good agreement, then each method can be more
confidently applied to the samples they are best suited for – nearby Seyferts
for the X-ray reflection method and higher redshift quasars for the CF method.
If instead, these comparisons bring further tensions to light, then the
assumptions underlying these methods may need to be revisited.
We thank the referee for helpful feedback.
We thank Paulina Lira, Julie Hlavacek-Larrondo, and Helen Russell for useful
discussion.
DMC and DH acknowledge support from a Natural Sciences and Engineering
Research Council of Canada Discovery Grant and a Fonds de recherche du
Québec — Nature et Technologies Nouveaux Chercheurs Grant.
GWF acknowledges support from Université Paris-Saclay's IDEX program and
l'Office Franco-Québécois pour la Jeunesse.
[Alloin et al.(1995)Alloin, Santos-Lleo, Peterson,
Wamsteker, Altieri, Brinkmann, Clavel, Crenshaw, George, Glass,
Johnson, Kriss, Malkan, Polidan, Reichert, Rodriguez-Pascual,
Romanishin, Starr, Stirpe, Taylor, Turner, Vega, Winge, &
Wood]Alloin95
Alloin, D., Santos-Lleo, M., Peterson, B. M., et al. 1995, , 293
[Bentz et al.(2009)Bentz, Walsh, Barth, & et
al.]Bentz09
Bentz, M. C., Walsh, J. L., Barth, A. J., & et al. 2009, , 705,
199
[Boissay et al.(2016)]Boissay16 Boissay, R., Ricci, C., & Paltani,
S. 2016, , 588, A70
[Brenneman(2013)]Brenneman13 Brenneman, L. 2013, Measuring the Angular Momentum of Supermassive Black Holes, SpringerBriefs in Astronomy (Springer), ISBN 978-1-4614-7770-9
[Brenneman et al.(2011)Brenneman, Reynolds, Nowak, Reis,
Trippe, Fabian, Iwasawa, Lee, Miller, Mushotzky, Nandra, &
Volonteri]Brenneman11
Brenneman, L. W., Reynolds, C. S., Nowak, M. A., et al. 2011, ,
736, 103
[Capellupo et al.(2015)Capellupo, Netzer, Lira,
Trakhtenbrot, & Mejía-Restrepo]Capellupo15
Capellupo, D. M., Netzer, H., Lira, P., Trakhtenbrot, B., &
Mejía-Restrepo, J. 2015, , 446, 3427
[Capellupo et al.(2016)Capellupo, Netzer, Lira,
Trakhtenbrot, & Mejía-Restrepo]Capellupo16
—. 2016, , 460, 212
[Cardelli et al.(1989)Cardelli, Clayton, &
Mathis]Cardelli89
Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, , 345, 245
[Chartas et al.(2016)]Chartas16 Chartas, G., Krawczynski,
H., Zalesky, L., et al. 2016, arXiv:1609.09490
[Collin et al.(2002)Collin, Boisson, Mouchet, Dumont,
Coupé, Porquet, & Rokaki]Collin02
Collin, S., Boisson, C., Mouchet, M., et al. 2002, , 388, 771
[Crenshaw & Kraemer(2012)]Crenshaw12 Crenshaw, D. M., & Kraemer,
S. B. 2012, , 753, 75
[Dasyra et al.(2011)Dasyra, Ho, Netzer, Combes,
Trakhtenbrot, Sturm, Armus, & Elbaz]Dasyra11
Dasyra, K. M., Ho, L. C., Netzer, H., et al. 2011, , 740, 94
[Davis & Laor(2011)]Davis11
Davis, S. W., & Laor, A. 2011, , 728, 98
[de La Calle Pérez et al.(2010)de La Calle Pérez,
Longinotti, Guainazzi, Bianchi, Dovčiak, Cappi, Matt,
Miniutti, Petrucci, Piconcelli, Ponti, Porquet, &
Santos-Lleó]deLaCallePerez10
de La Calle Pérez, I., Longinotti, A. L., Guainazzi, M., et al.
2010, , 524, A50
[Decarli et al.(2008)Decarli, Labita, Treves, &
Falomo]Decarli08
Decarli, R., Labita, M., Treves, A., & Falomo, R. 2008, , 387,
1237
[Done & Jin(2016)]Done16 Done, C., & Jin, C. 2016, , 460,
1716
[Done et al.(2013)]Done13 Done, C., Jin, C., Middleton,
M., & Ward, M. 2013, , 434, 1955
[Evans & Koratkar(2004)]Evans04
Evans, I. N., & Koratkar, A. P. 2004, , 150, 73
[Farrah et al.(2002)Farrah, Serjeant, Efstathiou,
Rowan-Robinson, & Verma]Farrah02
Farrah, D., Serjeant, S., Efstathiou, A., Rowan-Robinson, M., &
Verma, A. 2002, , 335, 1163
[Hubeny et al.(2001)Hubeny, Blaes, Krolik, &
Agol]Hubeny01
Hubeny, I., Blaes, O., Krolik, J. H., & Agol, E. 2001, , 559, 680
[Mehdipour et al.(2011)]Mehdipour11 Mehdipour, M.,
Branduardi-Raymont, G., Kaastra, J. S., et al. 2011, , 534, A39
[Mejía-Restrepo et al.(2016)Mejía-Restrepo,
Trakhtenbrot, Lira, Netzer, & Capellupo]Mejia16
Mejía-Restrepo, J. E., Trakhtenbrot, B., Lira, P., Netzer, H.,
& Capellupo, D. M. 2016, , 460, 187
[Netzer & Trakhtenbrot(2014)]Netzer14
Netzer, H., & Trakhtenbrot, B. 2014, , 438, 672
[Netzer et al.(2003)]Netzer03 Netzer, H., Kaspi, S., Behar, E.,
et al. 2003, , 599, 933
[Peterson et al.(2004)Peterson, Ferrarese, Gilbert, Kaspi,
Malkan, Maoz, Merritt, Netzer, Onken, Pogge, Vestergaard, &
Wandel]Peterson04
Peterson, B. M., Ferrarese, L., Gilbert, K. M., et al. 2004, , 613,
682
[Reichert et al.(1994)Reichert, Rodriguez-Pascual, Alloin,
Clavel, Crenshaw, Kriss, Krolik, Malkan, Netzer, Peterson,
Wamsteker, Altamore, Altieri, Anderson, Blackwell, Boisson,
Brosch, Carone, Dietrich, England, Evans, Filippenko, Gaskell,
Goad, Gondhalekar, Horne, Kazanas, Kollatschny, Koratkar,
Korista, MacAlpine, Maoz, Mazeh, McCollum, Miller, Mendes de
Oliveira, O'Brien, Pastoriza, Pelat, Perez, Perola, Pogge,
Ptak, Recondo-Gonzalez, Rodriguez-Espinosa, Rosenblatt, Sadun,
Santos-Lleo, Shields, Shrader, Shull, Simkin, Sitko, Snijders,
Sparke, Stirpe, Stoner, Storchi-Bergmann, Sun, Wang, Welsh,
White, Winge, & Zheng]Reichert94
Reichert, G. A., Rodriguez-Pascual, P. M., Alloin, D., et al. 1994,
, 425, 582
[Reis et al.(2014)Reis, Reynolds, Miller, &
Walton]Reis14
Reis, R. C., Reynolds, M. T., Miller, J. M., & Walton, D. J. 2014,
, 507, 207
[Reynolds(2014)]Reynolds14b Reynolds, C. S. 2014, , 183, 277
[Reynolds et al.(2014)Reynolds, Lohfink, Babul, Fabian,
Hlavacek-Larrondo, Russell, & Walker]Reynolds14a
Reynolds, C. S., Lohfink, A. M., Babul, A., et al. 2014, , 792,
L41
[Schartel et al.(1997)Schartel, Schmidt, Fink, Hasinger,
& Truemper]Schartel97
Schartel, N., Schmidt, M., Fink, H. H., Hasinger, G., & Truemper, J.
1997, , 320, 696
[Schlafly & Finkbeiner(2011)]Schlafly11
Schlafly, E. F., & Finkbeiner, D. P. 2011, , 737, 103
[Schlegel et al.(1998)Schlegel, Finkbeiner, &
Davis]Schlegel98
Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, , 500, 525
[Schnorr-Müller et al.(2016)]SchnorrM16 Schnorr-Müller, A.,
Davies, R. I., Korista, K. T., et al. 2016, , 462, 3570
[Shakura & Sunyaev(1973)]Shakura73
Shakura, N. I., & Sunyaev, R. A. 1973, , 24, 337
[Shen et al.(2015)]Shen15 Shen, Y., Brandt, W. N., Dawson, K. S.,
et al. 2015, , 216, 4
[Slone & Netzer(2012)]Slone12
Slone, O., & Netzer, H. 2012, , 426, 656
[Tombesi et al.(2013)Tombesi, Cappi, Reeves, Nemmen,
Braito, Gaspari, & Reynolds]Tombesi13
Tombesi, F., Cappi, M., Reeves, J. N., et al. 2013, , 430, 1102
[Turner et al.(1993)Turner, Nandra, George, Fabian, &
Pounds]Turner93
Turner, T. J., Nandra, K., George, I. M., Fabian, A. C., & Pounds,
K. A. 1993, , 419, 127
[Vasudevan et al.(2016)Vasudevan, Fabian, Reynolds, Aird,
Dauser, & Gallo]Vasudevan16
Vasudevan, R. V., Fabian, A. C., Reynolds, C. S., et al. 2016, ,
458, 2012
[Walker et al.(2014)Walker, Fabian, Russell, &
Sanders]Walker14
Walker, S. A., Fabian, A. C., Russell, H. R., & Sanders, J. S. 2014,
, 442, 2809
[Yaqoob et al.(2016)]Yaqoob16 Yaqoob, T., Turner, T. J.,
Tatum, M. M., Trevor, M., & Scholtes, A. 2016, , 462, 4038
|
http://arxiv.org/abs/1701.07943v3 | 20170127044310 | Low-energy excitation spectra in the excitonic phase of cobalt oxides | [
"Tomoki Yamaguchi",
"Koudai Sugimoto",
"Yukinori Ohta"
] | cond-mat.str-el | [
"cond-mat.str-el"
] |
Low-energy excitation spectra in the excitonic phase of cobalt oxides
Tomoki Yamaguchi, Koudai Sugimoto, and Yukinori Ohta
Received: date / Accepted: date
=====================================================================
The Bose–Einstein condensation of fermion pairs is one of the most intriguing phenomena
in condensed matter physics. The excitonic phase (EP) is representative of such a pair
condensation,<cit.>
where holes in valence bands and electrons in conduction bands spontaneously form pairs
owing to attractive Coulomb interaction. After Mott's prediction of the EP half a century
ago,<cit.> a number of candidate materials for this phase have come to
our attention. Among them are the transition-metal chalcogenides
1T-TiSe_2<cit.> and
Ta_2NiSe_5,<cit.> where the electrons and
holes on different atoms are considered to form spin-singlet pairs to condense into the EP, which is
accompanied by lattice distortion.<cit.>
Another class of materials includes the perovskite cobalt oxides,<cit.>
where the valence-band holes and conduction-band electrons form spin-triplet pairs in different
orbitals on the same atoms. Pr_0.5Ca_0.5CoO_3 (PCCO) is an example in which the
“metal-insulator” phase transition is observed at T_c≃ 80 K, which is associated with
a sharp peak in the temperature dependence of the specific heat and a drop in the magnetic
susceptibility below T_c,<cit.> together with a valence transition of Pr
ions.<cit.>
Some experimental results indicate that the resistivity is in fact small and nearly temperature
independent below T_c,<cit.> suggesting that the bands may not be fully
gapped. Note that no local magnetic moments are observed, but the exchange splitting of the
Pr^4+ Kramers doublet occurs,<cit.> so that the ordering may be
termed a hidden order; note also that no clear signatures of the spin-state transition are
observed in the X-ray absorption spectra.<cit.>
Kuneš and Augustinský argued that the anomalies of PCCO can be attributed to the EP
transition,<cit.> whereby they applied the dynamical-mean-field-theory calculation
to the two-orbital Hubbard model defined on a two-dimensional square lattice and claimed that
the anomalous behaviors of the specific heat, dc conductivity, and spin susceptibility can
be explained. They also performed the LDA+U band-structure calculation and showed that
the magnetic multipole ordering occurs in PCCO as a result of the excitonic condensation.
LaCoO_3 under a high magnetic field is another example of the possible realization of the
EP,<cit.> which was substantiated by the theoretical calculations based on the
two-orbital Hubbard and related models in two-dimension.<cit.>
In these materials, cobalt ions are basically in the Co^3+ valence state with a 3d^6
configuration, where the three t_2g orbitals are mostly filled with electrons and the two e_g
orbitals are nearly empty. The low-spin state is thus favorable for the condensation of excitons.
In this work, motivated by the above development in the field, we will study the EP of PCCO
using a realistic Hubbard model, taking into account all five 3d orbitals of Co ions
arranged in the three-dimensional cubic lattice of the perovskite structure.
The noninteracting tight-binding bands are determined from first principles and the electron-electron
interactions in the 3d orbitals are fully taken into account in each Co ion.
We will then study the excitonic fluctuations in the normal state via the calculation of the
excitonic susceptibility in the random phase approximation (RPA) and show that the instability
toward the EP actually occurs in this model. The ground state of this model is then calculated
in the mean-field approximation, whereby we find that the EP with a magnetic multipole order
actually occurs. We will also calculate the dynamical susceptibility of both spin-transverse
and spin-longitudinal modes in the EP to clarify the presence of the gapless Goldstone and gapful
Higgs modes in the excitation spectra. The experimental relevance of our results will be discussed.
The crystal structure of PCCO belongs to the P_nma space group, where the CoO_6
octahedra are rotated and the cubic perovskite structure is distorted with two independent
Co ions in the unit cell,<cit.> giving rise to complexity in the analysis
of the EP in PCCO. We instead make use of the crystal structure of PrCoO_3, which is
a perfect cubic perovskite with the lattice constant a=3.82 Å<cit.>
(hereafter taken as the unit of length). The electronic structure is then calculated from
first principles using the WIEN2k code <cit.>.
The obtained band dispersions are illustrated in Fig. <ref>, where we find that the
bands near the Fermi energy come from the Pr 4f orbitals (giving narrow dispersions)
and Co 3d orbitals (giving wide dispersions). For the latter, we find that the valence
bands come from the t_2g manifold of Co 3d orbitals and the conduction bands
come from the e_g manifold, as indicated in the weight plot. We, moreover, find that
the t_2g bands and e_g bands are orthogonal to each other without hybridization,
providing us with an ideal situation for excitonic condensation.
We also performed the band-structure calculation of PCCO, arranging the Pr and Ca ions
regularly, and confirmed that the bands remain qualitatively unchanged, supporting
the validity of the rigid-band approximation.
Note that PCCO contains both the Co^3+ and Co^4+ ions,<cit.>
while PrCoO_3 contains only the Co^3+ ions and shows no signatures of the phase
transition.<cit.> The change in the valence state of Co ions, which may
lead to a better nesting feature of the Fermi surfaces, seems to play an important role in
the EP transition.
Hereafter, we focus on the 3d bands of the Co ions, and assuming that the 4f bands
of Pr ions act as a bath of electrons and, together with the presence of Ca^2+ ions,
the valence state of Co ions is kept to be in the 3d^6 configuration for
simplicity<cit.> unless otherwise stated.
Let us now set up the Hamiltonian H=H_0+H_ int for the modeling of the 3d electrons
of PCCO. The kinetic energy term H_0 is defined in the tight-binding approximation as
H_0 = ∑_i, μ, σϵ_μ c^†_i,μ,σ c_i,μ,σ
+ ∑_i, j∑_μ,ν∑_σ t_ij,μν c^†_i,μ,σ c_j,ν,σ,
where c^†_i,μ,σ is the creation operator of a spin-σ (=↑,↓)
electron on the orbital μ at site i, ϵ_μ is the on-site energy of orbital
μ, and t_ij, μν is the hopping integral between the orbital ν at site j
and the orbital μ at site i. The orbitals μ and ν are labeled as 1 (d_xy),
2 (d_yz), 3 (d_zx), 4 (d_x^2-y^2), and 5 (d_3z^2 - r^2).
The 12 molecular orbitals for the 3d and 4f bands are obtained as the maximally localized
Wannier functions,<cit.> thereby retaining only the 3d bands
to determine the on-site energies and hopping integrals (up to 6th neighbors).
The tight-binding band dispersions thus calculated reproduce the first-principles band structure
well, as shown in Fig. <ref>.
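For readers who want to reproduce this kind of construction, the following minimal sketch (with purely illustrative on-site energies and hoppings, not the Wannier fit used here) shows how a Bloch Hamiltonian H(k) is assembled from the real-space hopping integrals of the kinetic term above and then diagonalized:

    # Minimal sketch (hypothetical parameters, not the paper's Wannier fit).
    import numpy as np

    n_orb = 5                                  # d_xy, d_yz, d_zx, d_x2-y2, d_3z2-r2
    eps = np.diag([0.0, 0.0, 0.0, 2.0, 2.0])   # toy t2g / eg on-site energies (eV)
    hoppings = {(1, 0, 0): -0.2 * np.eye(n_orb),   # toy nearest-neighbor hoppings
                (0, 1, 0): -0.2 * np.eye(n_orb),
                (0, 0, 1): -0.2 * np.eye(n_orb)}

    def h_bloch(k):
        hk = eps.astype(complex)
        for R, t in hoppings.items():
            phase = np.exp(1j * np.dot(k, R))  # lattice constant set to 1
            hk = hk + t * phase + t.T.conj() * np.conj(phase)  # add R and -R
        return hk

    print(np.linalg.eigvalsh(h_bloch(np.array([np.pi, np.pi, np.pi]))))  # R point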
The on-site interaction term is defined as
H_int = U/2 ∑_i,μ,σ c^†_i,μ,σ c_i,μ,σ c^†_i,μ,-σ c_i,μ,-σ
+ U'/2 ∑_i,σ,σ' ∑_μ≠ν c^†_i,μ,σ c_i,μ,σ c^†_i,ν,σ' c_i,ν,σ'
- J/2 ∑_i,σ,σ' ∑_μ≠ν c^†_i,μ,σ c_i,μ,σ' c^†_i,ν,σ' c_i,ν,σ
+ J'/2 ∑_i,σ ∑_μ≠ν c^†_i,μ,σ c_i,ν,-σ c^†_i,μ,-σ c_i,ν,σ,
where U, U', J, and J' are the intraorbital Coulomb interaction, interorbital Coulomb interaction,
Hund's rule coupling, and pair-hopping interaction, respectively.
We assume the atomic-limit relations U' = U - 2J and J' = J for the interaction strengths,
and we fix the ratio J/U at 0.1 in the present calculations.
We apply the mean-field approximation to the interaction terms. We assume the spin-triplet
excitonic order in the presence of Hund's rule coupling <cit.> and write
the order parameters as
Δ_μ,ν = ∑_k, σ σ ⟨ c^†_k+Q, μ, σ c_k, ν, σ ⟩ ,
where c_k, μ, σ is the Fourier component of c_i,μ,σ with the wave
vector k, and Q is an ordering vector.
Note that when μ (ν) is one of the e_g orbitals, ν (μ) is one of the t_2g orbitals.
All the terms irrelevant to this excitonic ordering are neglected for simplicity.
We thus obtain the diagonalized mean-field Hamiltonian,
H^ MF
= ∑_k_0, ϵ, σ E_k_0, ϵ, σγ^†_k_0, ϵ, σγ_k_0, ϵ, σ ,
where γ_k_0, ϵ, σ is the canonical transformation of c_k, μ, σ
satisfying
c_k, μ, σ = ∑_ϵψ_μ, m; ϵ (k_0, σ) γ_k_0, ϵ, σ
and ϵ is the band index.
Since the excitonic order enlarges the unit cell, we write the wave vector as k = k_0 + m Q,
where k_0 is the wave vector in the reduced Brillouin zone and m is an integer.
We carry out the summation with respect to k_0 using the 50 × 50 × 50 meshes
in the reduced Brillouin zone.
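To illustrate the self-consistency behind the order parameter defined above, here is a toy zero-temperature gap-equation iteration for a single electron-hole band pair with ordering vector Q = (π,π,π); the band parameters and coupling are hypothetical and chosen only to make the loop converge quickly (with this dispersion the nesting is perfect, so the pair energy is k-independent):

    # Toy sketch: one eg-like electron band, one t2g-like hole band.
    import numpy as np

    t_hop, gap0, U = 0.3, 1.0, 1.5
    nk = 16
    grid = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    ks = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T
    Q = np.array([np.pi, np.pi, np.pi])

    eps_c = lambda k: +gap0/2 - 2*t_hop*np.cos(k).sum(axis=-1)  # conduction (eg)
    eps_v = lambda k: -gap0/2 + 2*t_hop*np.cos(k).sum(axis=-1)  # valence (t2g)

    delta = 0.1
    for _ in range(500):
        xi = (eps_c(ks + Q) - eps_v(ks)) / 2     # half the pair energy at nesting Q
        e = np.sqrt(xi**2 + delta**2)
        new = U * np.mean(delta / (2*e))         # T = 0 gap equation: Delta = U <c+ c>
        if abs(new - delta) < 1e-12:
            break
        delta = new
    print("Delta =", delta)                      # nonzero: the excitonic phase is stabilized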
We define the dynamical susceptibility as
χ^ss'_λμ,κν (q, q', ω) = i/N ∑_k, k' ∫^∞_0 d t e^i ω t ⟨ [ c_k, κ, σ_1^†(t) c_k + q, λ, σ_2 (t), c_k' + q', μ, σ'_1^† c_k', ν, σ'_2 ] ⟩,
where N is the number of k points used,
c_k, μ, σ(t) is the Heisenberg representation of c_k, μ, σ,
and s denotes a spin pair (σ_1, σ_2), taking the values
↑, ↓, +, and - for (↑, ↑), (↓, ↓),
(↑, ↓), and (↓, ↑), respectively.
We write Eq. (<ref>) as χ(q,ω) when q=q'.
The bare susceptibility is given by
χ_0^ss'_λμ
κν (q, q+lQ, ω)
= -1/N∑_p_0, m, n, ϵ, ϵ' f(E_p_0+q, ϵ, σ_1) - f(E_p_0, ϵ', σ_2)/ E_p_0+q, ϵ, σ_1 - E_p_0, ϵ', σ_2 - ( ω + i η)
×ψ_λ, m ; ϵ (p_0 + q, σ_1)
ψ_μ,m + n + l ; ϵ^∗ (p_0 + q, σ_2')
×ψ_κ,m ; ϵ'^∗ (p_0, σ_2)
ψ_ν, m+n ; ϵ' (p_0, σ_1') δ_σ_1, σ_2'δ_σ_1', σ_2,
where the summation with respect to p_0 runs over the reduced Brillouin zone.
We set η = 0.01 eV.
We calculate the dynamical susceptibilities in the multiorbital RPA, given by
χ^+- = χ^+-_0 + χ^+-_0 V^-+ χ^+-,
χ^↑↑ = χ^↑↑_0 + χ^↑↑_0 V^↑↑ χ^↑↑ + χ^↑↑_0 V^↑↓ χ^↓↑,
χ^↓↑ = χ^↓↓_0 V^↓↑ χ^↑↑ + χ^↓↓_0 V^↓↓ χ^↓↑,
where the matrix product in the orbital basis is given as
[ χ_0 V χ ]_λμ,κν (q,ω) = ∑_κ', λ', μ', ν', m (χ_0)_λμ',κν' (q, q+mQ, ω) V_μ'λ',ν'κ' χ_λ'μ,κ'ν (q+mQ, q, ω)
with the interaction matrix V listed in Table <ref>.
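Numerically, this RPA resummation is a linear problem at each (q, ω) once the orbital pairs are flattened into a single index: χ = (1 - χ_0 V)^{-1} χ_0. A schematic sketch (with stand-in matrices of illustrative size, not the actual bubble and interaction) is:

    import numpy as np

    n = 25                                   # e.g. 5x5 orbital pairs, flattened
    rng = np.random.default_rng(0)
    chi0 = rng.normal(size=(n, n)) * 0.02
    chi0 = chi0 + chi0.T                     # stand-in for the bare bubble
    V = 1.0 * np.eye(n)                      # stand-in interaction matrix

    chi = np.linalg.solve(np.eye(n) - chi0 @ V, chi0)
    # a diverging susceptibility corresponds to the smallest eigenvalue of
    # (1 - chi0 V) crossing zero as the interaction grows:
    print(np.linalg.eigvals(np.eye(n) - chi0 @ V).real.min())

The instability criterion in this picture, the smallest eigenvalue of (1 - χ_0 V) reaching zero as U grows, is one way to locate the critical coupling U_cr discussed below.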
First, let us discuss the spin-triplet excitonic fluctuations in the normal phase.
Figure <ref> shows the q dependence of the static susceptibility of the excitonic spin-transverse mode χ^+-_μμ,νν (q, ω = 0) calculated in the normal phase, where μ (ν) is one of the e_g (t_2g) orbitals.
We find that, at q = (π,π,π), diverging fluctuations are observed as U increases toward 1.15 eV for all the orbital components except (μ, ν) = (5,1).
This instability toward the EP is caused by the Fermi-surface nesting between the electron
pockets of the e_g bands located around the Γ point of the Brillouin zone and the
hole pockets of the t_2g bands located around the R(π,π,π) point of the
Brillouin zone (see Fig. <ref>). Thus, the EP transition with the ordering vector
Q = (π,π,π) occurs at the critical value U_cr=1.15 eV.
Next, let us solve the mean-field equations to calculate the excitonic order parameter with
Q = (π, π, π). The obtained orbital components Δ_μ,ν are shown in
Fig. <ref>(a), where we find that all the components Δ_μ,ν (except Δ_5,1)
become finite above U_cr=1.15 eV. As U increases, U' and J
also increase, which enhances Δ_μ,ν. Because the excitons are formed
in a single atom, the excitonic spin polarization leads to the magnetic multipole order
in real space,<cit.> as shown in Fig. <ref>(b).
The orbital components of the magnetic multipoles formed between the μ and ν
orbitals (indicated as μ⊗ν) are shown in Fig. <ref>(c).
Reflecting the symmetry of the orbitals, the components of the order parameter satisfy the
relations Δ_4,2=Δ_4,3=Δ_4,1/√(3), Δ_5,2=-Δ_5,3=Δ_4,1/2,
and Δ_5,1 = 0, where different combinations of the signs are also possible.
The last relation indicates that the electrons on the d_3z^2 - r^2 orbital and holes
on the d_xy orbital do not form pairs. This is consistent with the result for the
calculated excitonic susceptibility in the normal phase [see Fig. <ref>(b)],
where no diverging behavior is observed.
We note that the bands in the EP are not fully gapped at U<1.8 eV, keeping the system
metallic with small Fermi surfaces, which may be consistent with the results of
experiment.<cit.> A full gap opens for larger values of U>1.8 eV.
We also note that the excitonic order remains finite against the change in the filling of
electrons, e.g., between 5.4 and 6.1 per site at U=1.2 eV, which is also consistent with
experimental results.<cit.>
Next, let us discuss the excitation spectra in the EP. The calculated excitonic spin-transverse dynamical susceptibilities Im χ^+-_μμ,νν are
shown in Figs. <ref>(a)–<ref>(d), where we find the gapless Goldstone
mode at q = (π,π,π) for all the components except for (μ, ν) = (5,1)
[see Fig. <ref>(c)], reflecting the presence/absence of Δ_μ,ν.
The velocities of the collective excitations near q = (π,π,π)
are the same for all the components.
Unlike in the well-known collective mode of the Heisenberg antiferromagnets,
the excitonic collective mode does not extend to reach the point q = 0 and ω = 0.
The gapless collective-mode behavior obtained is consistent with the results of the two-orbital
models.<cit.>
Note that the excitations around q = (π/2,π/2,π/2) appearing in all the components
originate from the particle-hole excitations.
The calculated excitonic spin-longitudinal dynamical susceptibilities Im χ^zz_μμ,νν, defined as χ^zz_λμ,κν = ∑_σ,σ' σσ' χ^σσ'_λμ,κν, are also shown in Figs. <ref>(f)–<ref>(i), where we find the gapful
are also shown in Figs. <ref>(f)–<ref>(i), where we find the gapful
Higgs mode. The broad excitations that have a gap at q = (π,π,π) are clearly seen,
except for (μ, ν) = (5,1) [see Fig. <ref>(h)], where only the particle-hole
excitations are present. The spectra around q = (π/2,π/2,π/2) appearing in all
the components are again particle-hole excitations.
The orbital-diagonal part of the dynamical susceptibilities in the transverse mode, ∑_μ,ν Im χ^+-_μν,μν, and in the longitudinal mode, ∑_μ,ν Im χ^zz_μν,μν, are also shown for comparison in Figs. <ref>(e) and <ref>(j),
respectively, the results of which reflect the metallic nature of the system.
Finally, let us discuss the possible experimental relevance of our results.
Inelastic neutron scattering may be a possible experimental technique for observing the excitations in the spin degrees of freedom, but it may enable one to detect only the orbital-diagonal components of the dynamical susceptibility, whose intensity is proportional to ∑_μ,ν Im χ^ss'_μν,μν.
Our corresponding results, which come from the particle-hole transitions, are shown
in Figs. <ref>(e) and <ref>(j).
Because the dynamical susceptibility related to the orbital-off-diagonal excitonic ordering
should contain the vertex-nonconserved terms, defined as the terms with κ≠λ
and μ≠ν in Eq. (<ref>), we need to seek other quantum-beam sources
that have the ability to change the orbitals (or orbital angular momentum) in the inelastic
scattering processes. To this end, future experimental developments are desired.
In summary, we derived the effective five-orbital Hubbard model defined on the
three-dimensional cubic lattice from first principles to describe the electronic states
of Pr_0.5Ca_0.5CoO_3 with the cubic perovskite structure.
Then, we calculated the static susceptibility of the excitonic spin-transverse mode
in the normal phase using the RPA and found that the diverging excitonic fluctuations
occur at Q = (π,π,π). We calculated the excitonic ground state in the
mean-field approximation and found that the magnetic multipole order occurs.
We also calculated the dynamical susceptibility in the EP to study the excitation spectra
and found that there appear gapless collective excitations in the excitonic spin-transverse
mode and gapful collective excitations in the excitonic spin-longitudinal mode.
We thank T. Kaneko and S. Miyakoshi for fruitful discussions.
This work was supported in part by Grants-in-Aid for Scientific Research from
the Japan Society for the Promotion of Science (Nos. 26400349 and 15H06093).
The numerical calculations were carried out on computers at Yukawa Institute for
Theoretical Physics, Kyoto University, Japan, and Research Center for Computational
Science, Okazaki, Japan.
9
Jerome1967PR D. Jérome, T. M. Rice, and W. Kohn, Phys. Rev. 158, 462 (1967).
Halperin1968RMP B. I. Halperin and T. M. Rice, Rev. Mod. Phys. 40, 755 (1968).
Littlewood2004JPCM P. B. Littlewood, P. R. Eastham, J. M. J. Keeling, F. M. Marchetti, B. D. Simons, and M. H. Szymanska, J. Phys.: Condens. Matter 16, S3597 (2004).
Kunes2015JPCM J. Kuneš, J. Phys.: Condens. Matter 27, 333201 (2015).
Mott1961PM N. F. Mott, Philos. Mag. 6, 287 (1961).
DiSalvo1976PRB F. J. Di Salvo, D. E. Moncton, and J. V. Waszczak, Phys. Rev. B 14, 4321 (1976).
Cercellier2007PRL H. Cercellier, C. Monney, F. Clerc, C. Battaglia, L. Despont, M. G. Garnier, H. Beck, P. Aebi, L. Patthey, H. Berger, and L. Forró, Phys. Rev. Lett. 99, 146403 (2007).
Monney2009PRL C. Monney, H. Cercellier, F. Clerc, C. Battaglia, E. F. Schwier, C. Didiot, M. G. Garnier, H. Beck, P. Aebi, H. Berger, L. Forró, and L. Patthey, Phys. Rev. B 79, 45116 (2009).
Wakisaka2009PRL Y. Wakisaka, T. Sudayama, K. Takubo, T. Mizokawa, M. Arita, H. Namatame, M. Taniguchi, N. Katayama, M. Nohara, and H. Takagi, Phys. Rev. Lett. 103, 26402 (2009).
Kaneko2013PRB T. Kaneko, T. Toriyama, T. Konishi, and Y. Ohta, Phys. Rev. B 87, 35121 (2013).
Seki2014PRB K. Seki, Y. Wakisaka, T. Kaneko, T. Toriyama, T. Konishi, T. Sudayama, N. L. Saini, M. Arita, H. Namatame, M. Taniguchi, N. Katayama, M. Nohara, H. Takagi, T. Mizokawa, and Y. Ohta, Phys. Rev. B 90, 155116 (2014).
Kaneko2015PRB T. Kaneko, B. Zenker, H. Fehske, and Y. Ohta, Phys. Rev. B 92, 115106 (2015).
Kunes2014PRB J. Kuneš and P. Augustinský, Phys. Rev. B 90, 235112 (2014).
Nasu2016PRB J. Nasu, T. Watanabe, M. Naka, and S. Ishihara, Phys. Rev. B 93, 205136 (2016).
Afonso2016 J. F. Afonso and J. Kuneš, arXiv:1612.07576.
Tsubouchi2002PRB S. Tsubouchi, T. Kyômen, M. Itoh, P. Ganguly, M. Oguni, Y. Shimojo, Y. Morii, and Y. Ishii, Phys. Rev. B 66, 52418 (2002).
Hejtmanek2010PRB J. Hejtmánek, E. Šantavá, K. Knížek, M. Maryško, Z. Jirák, T. Naito, H. Sakaki, and H. Fujishiro, Phys. Rev. B 82, 165107 (2010).
Garcia-Munoz2011PRB J. L. García-Muñoz, C. Frontera, A. J. Barón-González, S. Valencia, J. Blasco, R. Feyerherm, E. Dudzik, R. Abrudan, and F. Radu, Phys. Rev. B 84, 045104 (2011).
Hejtmanek2013EPJB J. Hejtmánek, Z. Jirák, O. Kaman, K. Knížek, E. Šantavá, K. Nitta, T. Naito, and H. Fujishiro, Eur. Phys. J. B 86, 305 (2013).
Herrero-Martin2012PRB J. Herrero-Martín, J. L. García-Muñoz, K. Kvashnina, E. Gallo, G. Subías, J. A. Alonso, and A. J. Barón-González, Phys. Rev. B 86, 125106 (2012).
Ikeda2016PRB A. Ikeda, T. Nomura, S. Takeyama, Y. H. Matsuda, A. Matsuo, K. Kindo, and K. Sato, Phys. Rev. B 93, 220401(R) (2016).
Sotnikov2016SR A. Sotnikov and J. Kuneš, Sci. Rep. 6, 30510 (2016).
Tatsuno2016JPSJ T. Tatsuno, E. Mizoguchi, J. Nasu, M. Naka, and S. Ishihara, J. Phys. Soc. Jpn. 85, 083706 (2016).
Wei-RanCPL2005 W. Wei-Ran, X. Da-Peng, S. Wen-Hui, D. Zhan-Hui, X. Yan-Feng, and S. Geng-Xin, Chin. Phys. Lett. 22, 2400 (2005).
WIEN2k P. Blaha, K. Schwarz, G. K. H. Madsen, D. Kvasnicka, and J. Luitz, WIEN2k (Technische Universität Wien, Austria, 2002).
Pandey2008PRB S. K. Pandey, S. Patil, V. R. R. Medicherla, R. S. Singh, and K. Maiti, Phys. Rev. B 77, 115137 (2008).
Kunes2010CPC J. Kuneš, R. Arita, P. Wissgott, A. Toschi, H. Ikeda, and K. Held, Comput. Phys. Commun. 181, 1888 (2010).
Mostofi2014CPC A. A. Mostofi, J. R. Yates, G. Pizzi, Y. S. Lee, I. Souza, D. Vanderbilt, and N. Marzari, Comput. Phys. Commun. 185, 2309 (2014).
Kaneko2014PRB T. Kaneko and Y. Ohta, Phys. Rev. B 90, 245144 (2014).
Kaneko2016PRB T. Kaneko and Y. Ohta, Phys. Rev. B 94, 125127 (2016).
Brydon2009PRB P. M. R. Brydon and C. Timm, Phys. Rev. B 80, 174401 (2009).
|
http://arxiv.org/abs/1701.07757v1 | 20170126161506 | Extrapolated Quantum States, Void States, and a Huge Novel Class of Distillable Entangled States | [
"Michel Boyer",
"Aharon Brodutch",
"Tal Mor"
] | quant-ph | [
"quant-ph",
"math-ph",
"math.MP"
] |
Extrapolated Quantum States, Void States, and a Huge Novel Class of Distillable Entangled States

Michel Boyer (DIRO, Université de Montréal, Canada; boyer@iro.umontreal.ca)
Aharon Brodutch (Department of Physics & Astronomy and Institute for Quantum Computing, University of Waterloo, Canada; aharon.brodutch@uwaterloo.ca)
Tal Mor (Computer Science Department, Technion, Israel; talmo@cs.technion.ac.il)

December 30, 2023
================================================================================================
A nice and interesting property of any pure tensor-product state is that each such state has distillable entangled states at an arbitrarily small distance ϵ in its neighbourhood. We say that such nearby states are ϵ-entangled, and in that case we call the tensor product state a “boundary separable state”, as there is entanglement at any distance from this “boundary”. Here we find a huge class of separable states that share this property: they all have ϵ-entangled states at any small distance in their neighbourhood. Furthermore, the entanglement of those nearby states is proven to be distillable. We then extend this result to the discordant/classical cut and show that all classical states (correlated and uncorrelated) have discordant states at distance ϵ, and we provide a constructive method for finding ϵ-discordant states.
§ INTRODUCTION
Studying the structure of the set of quantum states has been a central topic in quantum information research <cit.>. Particular emphasis is on the tensor product structure of this space in terms of entanglement <cit.> and general correlations <cit.>. The study has lead to the identification of important families of states such as Werner states <cit.>,
bound entangled states <cit.>, and the
W-states <cit.>, as well as interesting sets of bases such as unextendable product bases (UPB) <cit.> and locally indistinguishable bases <cit.>.
One method which has been particularly powerful in the study of state space is interpolation, i.e. studying the states that lie between two known states with different properties. Interpolation has been used in the study of robustness to various types of noise <cit.> and in learning about the ball of separable states <cit.>. The complementary method, extrapolation, has also been used in some cases, for example in the study of non-signaling theories <cit.>, where trace-one Hermitian operators with negative eigenvalues were required for larger-than-quantum violations of Bell inequalities.
Here we use both extrapolation and interpolation[A preliminary version of this work (without discordant states) appeared in TPNC-2014 <cit.>.]
to study the boundaries between various subsets of quantum state space, with particular emphasis on boundary separable states: separable states that are arbitrarily close to an entangled state (cf. Definition <ref>).
Any pure tensor product state has entangled states
near it, at any distance (i.e. arbitrarily close), making it boundary separable (cf. corollary <ref>) . Another simple example of a boundary separable state is a Werner-state
λ/3 [ρ_ψ_++ρ_ϕ_++ρ_ϕ_-]
+ (1-λ) [ρ_ψ_-] (built from the four Bell states)
with λ = 1/2 which has
entangled states near it, at any distance.
Is the property of being separable yet having entangled states nearby at any
distance common? Or is it rare?
Furthermore, what can we learn about the type of entanglement that those nearby entangled
states have? For two qubits, it is known <cit.> that the entanglement is always distillable.
For qudits, cf proposition <ref>.
Our main result is a number of families of boundary separable quantum states. We show that states in these families are arbitrarily close to distillable entangled states and give a constructive method for identifying these states.
We also provide similar results for discord (cf. Corollary <ref> for example), showing that all classical states (classically correlated and uncorrelated) are boundary classical (in fact there is a discordant and thus non classical state arbitrarily close) and providing a constructive way to find an arbitrarily close discordant state. Our results are presented in order of complexity starting from two qubit examples and continuing to more general results involving qutrits, qudits and multi-qubit systems. In most cases the results are presented through examples. The general implications are discussed at the end of each section.
§.§ The set of quantum states
Given a Hilbert space ℋ, the set of quantum states (i.e. the set of positive semidefinite, trace-one Hermitian operators) on ℋ is convex. If the Hilbert space has a physically meaningful tensor product structure ℋ=ℋ_A⊗ℋ_B, it is useful and interesting to consider subsets of states based on this structure. These subsets are often not convex and are hard to characterize. In the work presented here we will use convex and affine mixtures of states to study the states that lie at the boundary of these subsets.
One important subset of states is the set of pure states |Ψ⟩⟨Ψ| (i.e.states of rank 1). In this set we identify a smaller subset of pure product states of the form |Ψ⟩⟨Ψ|=|ψ⟩⟨ψ|⊗|ϕ⟩⟨ϕ|. All pure states that are not product are called entangled pure states. Pure states are extremal points in the set of all states.
There are various ways to similarly divide the set of all states. One division is into the complementary subsets: separable states and entangled states. For any
bipartite system whose Hilbert spaces are of
dimension at least 2, both sets, entangled and separable, have a finite volume in the set of all states <cit.>. The set of entangled states is not convex and in general it is hard to identify whether a state is entangled or separable
<cit.>.
At the border between separable and entangled states are the boundary separable and ϵ-entangled states. Some properties of this boundary were previously studied in relation to non-linear entanglement witnesses <cit.>, where it was shown that the set of all separable states is not a polytope (see also <cit.>). Our main results are specific families of boundary separable and ϵ-entangled states; some of these families, such as those close to thermal states, are of particular importance in quantum computing.
There are a number of physically meaningful ways to divide the set of entangled states. One that we will use here is to divide the set into two disjoint subsets, distillable and bound-entangled states. A state ρ is distillable if it is possible to distill many copies of ρ into a maximally entangled state. Clearly separable states cannot be distilled, but surprisingly there are entangled states that cannot be distilled. These are known as bound entangled states. It is known that entangled states with a positive partial transpose are bound entangled; similarly, if the total dimension of the Hilbert space is not larger than 6, all entangled states are distillable (and have a non-positive partial transpose) <cit.>.
A different classification of the set of all states is into the complementary subsets: discordant states and states that are classical with respect to A <cit.>; the latter are sometimes called classical-quantum (see Sec. <ref> for precise definitions). For simplicity we use C_A to denote the set of states that are classical with respect to A and note that the set of classical states, i.e. those that are classical with respect to both A and B, is a subset of C_A. The set of discordant states is the complement of C_A. The classification into C_A and its complement shares some properties with the classification of pure states. For example, like the set of pure product states, which is vanishingly small in the set of all pure states (i.e. it requires strictly fewer parameters to characterize a pure product state than to characterize a generic pure state), the set C_A is vanishingly small in the set of all states <cit.>. Moreover, for pure states, discord and entanglement coincide. However, in general we can only say that entangled states are always discordant and classical states (with respect to A, B or both) are always separable. There is an intermediate regime of discordant-separable or dissonant states <cit.> (see fig. <ref>). As we will show below, all classical states (with respect to A, B or both) are also boundary classical, i.e. a state is either discordant or there is a discordant state arbitrarily close to it.
Although discord refers to a specific quantity <cit.>, other similar quantities exist <cit.>. Like entanglement monotones, each measure has its own domain, but generally there is an unambiguous way to classify states as uncorrelated (product), classically correlated, dissonant (discordant and separable), and entangled <cit.>. A caveat on the last statement is that in general discord-like measures are not necessarily defined to be symmetric with respect to the parties involved, and the cut between discordant and classical depends on this choice. Here we mostly consider the asymmetric versions that were discussed in the early literature, in particular the set C_A and its complement; however, all of our results apply to the various symmetric versions of discord in <cit.>.
A final classification is in terms of the eigenvectors of the state. A state has a product eigenbasis[The term product basis should not be confused with the classical basis of <cit.> which is a product of local bases, rather than a basis of product states.] if it can be diagonalized in an orthonormal basis of product states. Surprisingly the decomposition of a state into eigenstates is not sufficient to tell us about entanglement or discord. In general non-degenerate separable states can have a non-separable eigenbasis <cit.>. On the other hand a discordant state can have a product eigenbasis.
§ NOTATIONS AND TERMINOLOGY
§.§ General notation
In the majority of cases below we consider bipartite states as operators on a Hilbert space ℋ_A⊗ℋ_B (in Sec. <ref> we also consider multipartite systems). We use T to denote the transpose map; correspondingly, T⊗I denotes partial transposition on the subsystem A. The distance between two states ρ and τ is given by the trace distance δ(ρ,τ)=1/2 Tr|ρ-τ|.
§.§ Entanglement and separability
A convex mixture of pure states is called mixed. A quantum state ρ on ℋ_A⊗ℋ_B is called a product state if it can be decomposed into ρ=ρ_A⊗ρ_B, where ρ_A and ρ_B are states on _A and _B respectively; ρ is called separable if it can be decomposed into a convex sum
of product states, otherwise it is called entangled (cf Appendix <ref>). Product states are also separable.
§.§ Boundary Separable States and ϵ-Entangled States
A boundary separable state
is a separable state ρ_b
such that for any ϵ > 0,
there is an entangled state ρ_e for
which δ(ρ_b,ρ_e) ≤ϵ,
i.e. there are entangled states arbitrarily close to ρ_b.
Notice
that for any density operator ρ, and 0 ≤ϵ≤ 1, if
τ_ϵ = (1-ϵ)ρ_b + ϵρ
then
δ(τ_ϵ,ρ_b)
= ϵ/2Tr|ρ-ρ_b|
= ϵδ(ρ,ρ_b)
and thus
δ(τ_ϵ,ρ_b)≤ϵ .
The trace distance between τ_ϵ given by (<ref>)
and the (boundary) separable state ρ_b is at most ϵ
but it may be much smaller than ϵ;
it is ϵ iff δ(ρ,ρ_b) = 1
i.e. if ρ_b and ρ are orthogonal (have orthogonal support).
An ϵ-entangled state is an entangled state ρ_e such that there is
a boundary separable state ρ_b for which
δ(ρ_e,ρ_b)≤ϵ; i.e.
it is at trace distance at most ϵ from
a boundary separable state.
As an example, the Werner state with λ=1/2 is a boundary separable state
and mixing it with ρ_ψ_- gives ϵ-entangled states.
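These statements are easy to check numerically. The following self-contained sketch (ours) builds the Werner state at λ = 1/2 and verifies that mixing in ρ_ψ_- makes the smallest eigenvalue of the partial transpose equal to -ϵ/2:

    import numpy as np

    def ket(bits):
        v = np.array([1.0])
        for b in bits:
            v = np.kron(v, np.eye(2)[int(b)])
        return v

    def proj(v):
        return np.outer(v, v.conj())

    def pt_A(rho, dA=2, dB=2):
        # partial transpose on A: <i1 j1|rho^PT|i2 j2> = <i2 j1|rho|i1 j2>
        return rho.reshape(dA, dB, dA, dB).transpose(2, 1, 0, 3).reshape(dA*dB, dA*dB)

    s2 = np.sqrt(2)
    psi_m = (ket('01') - ket('10')) / s2
    psi_p = (ket('01') + ket('10')) / s2
    phi_p = (ket('00') + ket('11')) / s2
    phi_m = (ket('00') - ket('11')) / s2

    werner = (proj(psi_p) + proj(phi_p) + proj(phi_m)) / 6 + proj(psi_m) / 2  # lambda = 1/2
    for eps in [0.0, 1e-2, 1e-4]:
        tau = (1 - eps) * werner + eps * proj(psi_m)
        print(eps, np.linalg.eigvalsh(pt_A(tau)).min())  # 0 at eps = 0, then -eps/2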
There are separable states ρ_b for which there exists a state ρ
such that all the states τ_ϵ given
by (<ref>)
are entangled for ϵ small enough, ϵ≠ 0.
There is a continuous path starting from ρ_b
and going straight in the direction of ρ
whose initial section contains only ϵ-entangled states.
Note that for ϵ=0 the resulting
state τ_0 is the boundary separable-state ρ_b itself; τ_0 = ρ_b.
As an example, again,
the Werner state with λ=1/2 is a boundary separable state,
such that mixing it with ρ_ψ_- as in (<ref>)
gives ϵ-entangled states, and there is a continuous path from this Werner
state all the way to the fully entangled state ρ_ψ_-.
§.§ “Extrapolated States” and “Void States”
Given any two states ρ_0 and ρ_1, the operators
ρ_t = (1-t)ρ_0 + tρ_1 are clearly always Hermitian with trace 1;
when 0≤ t ≤ 1, they are (mixed) states, all on a straight line segment between ρ_0 and ρ_1;
those mixed states are obtained by interpolation (convex combination) of two states.
Let us now introduce three additional definitions:
* When t<0, ρ_t is on the same straight line
but is no longer between ρ_0 and ρ_1;
in general, if ρ_0 ≠ρ_1 and all the eigenvalues
of ρ_0 are strictly positive, then there are values
of t<0 such that ρ_t is a state;
we call such states extrapolated states.
Note that if ρ_0 = |0⟩⟨0| and
ρ_1 = |1⟩⟨1|, then
(1-t)ρ_0 + tρ_1 = (1-t)|0⟩⟨0| + t|1⟩⟨1| is
not a state (it is not positive semi definite) as soon as t<0
(or t>1).
There may be some value m<0 such that
ρ_t is no longer positive semi-definite for t<m,
thus no longer a state (hence it is not a physical entity),
while it is still positive semi-definite for t=m.
The condition that the eigenvalues of ρ_0 be all positive
is sufficient for defining extrapolated states,
but not necessary.
One can extrapolate carefully-chosen states that
have some 0 eigenvalues.
Extrapolation somewhat behaves like subtraction:
if t<0, then ρ_t = (1+|t|)ρ_0 - |t|ρ_1.
We will be interested only with extrapolations with t < 0
though t>1 could also provide extrapolations.
* A void state
is a quantum state that has exactly one zero eigenvalue.
Namely, when diagonalized, it has exactly one zero on the diagonal.
* A k-void state (of dimension N>k)
is a quantum state that has exactly k zero eigenvalues[Note that a separable N-1-void state is a tensor product state.] (similarly, it has rank N-k).
§.§ Discord and classical correlation
We consider a state ρ of a bipartite system AB with marginals ρ_A and ρ_B.
Let ρ be a state of ℋ_A⊗ℋ_B and {|i⟩} be a basis of
ℋ_A. Then the following three statements are equivalent <cit.>.
* There is a set of states {τ_i} on _B such that
ρ=∑λ_i|i⟩⟨i|⊗τ_i,
with λ_i≥0 and ∑_i λ_i =1.
* There is a set of unitary operators U_i such that
ρ=∑_i,jμ_ij|i⟩⟨i|⊗ U_i|j⟩⟨j|U_i^†,
where μ_ij≥0 are the eigenvalues of ρ.
* ρ is invariant under the action of the local dephasing channel D defined by
D(|i_1⟩⟨i_2|⊗τ) = δ_i_1i_2 |i_1⟩⟨i_2|⊗τ on the basis {|i⟩}:
D(ρ)=ρ.
The state ρ is said to be classical with respect to the basis {|i⟩} of ℋ_A if it satisfies one of the above conditions.
These conditions imply that ρ_A being diagonal in the basis {|i⟩}, i.e. ρ_A=∑_iλ_i|i⟩⟨i| with all λ_i≥0, is a necessary condition for ρ to be classical in the basis {|i⟩} <cit.>.
A state ρ is said to be classical with respect to A if there is a basis of ℋ_A with respect to which it is classical;
the set of classical states with respect to A is denoted C_A.
A state ρ which is not in C_A is called discordant <cit.> [The term classical is used in a variety of ways in the literature; here we use it in the sense of correlations as in <cit.>]. The set of discordant states is the complement of C_A.
It is important to notice that all classical states (i.e. classical with respect to both A and B) are in C_A, and thus that a state not in C_A (i.e. discordant) cannot be classical.
We will use that fact to build non classical (in fact, discordant) states arbitrarily close to any classical state.
Any state which is not a product state is called correlated. The set C_A contains both correlated and uncorrelated states, while all discordant states are correlated. These are sometimes called quantum correlated <cit.>.
When ρ_A=∑_iλ_i|i⟩⟨i| is non-degenerate, i.e., λ_i ≠ λ_j for i≠ j, the conditions above provide a very simple method to check if ρ is in C_A <cit.>. When ρ_A is degenerate one has to check over all its possible eigenbases.
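The non-degenerate test just described is straightforward to implement. A minimal sketch (ours, with a hypothetical example state) dephases ρ in the eigenbasis of ρ_A and compares the result with ρ:

    import numpy as np

    def is_classical_wrt_A(rho, dA, dB, tol=1e-10):
        r = rho.reshape(dA, dB, dA, dB)
        rho_A = np.einsum('ijkj->ik', r)              # partial trace over B
        _, U = np.linalg.eigh(rho_A)                  # candidate basis (non-degenerate case)
        rr = np.einsum('ai,ajbl,bk->ijkl', U.conj(), r, U)  # rotate A into that basis
        dephased = np.zeros_like(rr)
        for i in range(dA):
            dephased[i, :, i, :] = rr[i, :, i, :]     # keep only the i1 = i2 blocks
        return np.allclose(rr, dephased, atol=tol)

    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    rho_cq = 0.7 * np.kron(np.diag([1.0, 0]), np.diag([1.0, 0])) \
           + 0.3 * np.kron(np.diag([0, 1.0]), np.outer(plus, plus))
    bell = np.zeros(4); bell[0] = bell[3] = 1/np.sqrt(2)

    print(is_classical_wrt_A(rho_cq, 2, 2))               # True: classical w.r.t. A
    print(is_classical_wrt_A(np.outer(bell, bell), 2, 2))  # False: discordant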
§.§.§ Boundary classical states
In the same way as above it is possible to define boundary classical states and ϵ-discordant states. As we will see this definition is superfluous since all
classical states are also boundary classical; cf. Corollary <ref>.
§ TWO QUBITS
Our first example of two-party boundary separable states (and the derived ϵ-entangled states) is obtained
by starting from a completely mixed state
and “fully subtracting” one of the eigenstates, to obtain
a separable void state.
Our second example uses a different—yet very interesting state
to start with—the thermal state. As in the first example,
a void-state is generated from the thermal state (via extrapolation)
by subtracting one of the eigenstates.
Our third example uses a 2-void state instead of a simple (1-)void state (we also discuss the case of a 3-void state, which in this case is a tensor product state).
Our last two 2-qubit examples provide generalizations to less
trivial cases including discordant states and separable states without a product eigenbasis. Since two qubit states that are entangled are all distillable <cit.>, the states
obtained are thus also distillable.
§.§ Example 1 – The Extrapolated Pseudo-Pure State of Two Qubits
Mixing a completely mixed state ρ_0 with an arbitrary state ρ_1 to yield the pseudo pure state (PPS) ρ = (1-t) ρ_0 + t ρ_1 is found to be extremely useful in quantum information processing (e.g. in NMR quantum computing). To the best of our knowledge, an extrapolated state of the form
ρ = (1+|t|) ρ_0 - |t| ρ_1 was never used.
This “extrapolated pseudo pure state" (EPPS), whenever it is a legal quantum state, shares with the conventional PPS the fact that applying any unitary transformation acts only on ρ_1.
An interesting special case of this EPPS is when |t| is exactly sufficiently large to make one eigenvalue vanish
(become zero).
If ρ_1 is a pure tensor product state, then the resulting ρ is a void state. We assume here that the subtracted tensor product state is written in the computational basis, e.g., it is |11⟩⟨11| and m = t = -1/3.
If the standard basis is the eigenbasis of a state ρ on ℋ_2⊗ℋ_2, and if the eigenvalue of |11⟩ is 0 and the other three eigenvalues are 1/3, then there are states arbitrarily close to ρ that are entangled. [The same holds, with obvious adjustments, for any other tensor-product eigenstate that has a zero eigenvalue.]
We avoid proving this proposition as we
later (in example 4) prove a more general result,
containing the above (and also example 2) as special cases.
The above
mentioned
(very basic)
example is mainly given
for historical reasons, as it was the first example
we found.
For j fixed, let
ρ = 4/3[ 1/4∑_i=0^3 |i⟩⟨i| ] - 1/3|j⟩⟨j| = 1/3∑_i=0; i≠j^3 |i⟩⟨i|.
This is obtained
by choosing |j⟩ (viewed as a two-bit integer from 0=00_2 to 3=11_2) to be any product state |j⟩ ≡ |j_A j_B⟩ = |j_A⟩⊗|j_B⟩,
where the two parties are A for Alice's qubit
and B for Bob's. In fact, for all values of t between 0 and -1/3, the Hermitian operators
ρ_t = (1-t)[1/4∑_i=0^3 |i⟩⟨i|]+t|j⟩⟨j|
are separable states; for t<-1/3, ρ_t is no longer a state since it is no longer positive semi definite,
the eigenvalue of |j⟩ becoming negative. Finally, if |j⟩ = |11⟩, proposition <ref> tells us that there are entangled states arbitrarily close to 1/3∑_i=0^2 |i⟩⟨i|.
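A short numerical illustration (ours) of this extrapolation:

    import numpy as np

    rho0 = np.eye(4) / 4                       # completely mixed two-qubit state
    rho1 = np.zeros((4, 4)); rho1[3, 3] = 1.0  # |11><11| in the basis 00,01,10,11
    for t in [0.5, 0.0, -1/3, -0.4]:
        rho_t = (1 - t) * rho0 + t * rho1
        print(t, np.round(np.linalg.eigvalsh(rho_t), 4))
    # t = -1/3 yields the separable void state diag(1/3, 1/3, 1/3, 0);
    # for t < -1/3 one eigenvalue is negative, so rho_t is no longer a state.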
§.§ Example 2 – The Thermal State of Two Qubits
The thermal state on two qubits is the state
ρ_Θ = (1+η)^2/4 |00⟩⟨00| + (1-η^2)/4 [ |01⟩⟨01| + |10⟩⟨10| ] + (1-η)^2/4 |11⟩⟨11|.
The state |11⟩ is a 0-eigenstate of ρ_p = (1+p)ρ_Θ - p|11⟩⟨11| if (1-η)^2(p+1) = 4p, and a proposition similar to proposition <ref> can be written for ρ_p.
However, both cases of Sections <ref> and <ref> will be dealt with, by a generalization done in example 4.
The thermal state will get more attention later on, when we discuss N qubits.
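A quick check (ours) of the vanishing-eigenvalue condition:

    import numpy as np

    eta = 0.3
    rho_theta = np.diag([(1+eta)**2, 1-eta**2, 1-eta**2, (1-eta)**2]) / 4
    p = (1-eta)**2 / (4 - (1-eta)**2)          # solves (1-eta)^2 (1+p) = 4p
    rho1 = np.zeros((4, 4)); rho1[3, 3] = 1.0
    rho_p = (1+p) * rho_theta - p * rho1
    print(np.linalg.eigvalsh(rho_p))           # smallest eigenvalue is 0: a void state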
§.§ Example 3 — 2-Void State
Example 3, using a 2-void state, is as follows:
In ℋ_2⊗ℋ_2 there are entangled states arbitrarily close to the state ρ = 1/2[ |01⟩⟨01| + |10⟩⟨10| ].
Here again, |11⟩ is an eigenstate of ρ with eigenvalue 0.
Let ρ_1 = |ψ_+⟩⟨ψ_+| with |ψ_+⟩ = 1/√(2)[ |01⟩ + |10⟩ ] and ρ_ϵ = (1-ϵ)ρ + ϵρ_1.
Then, in the basis {|00⟩, |01⟩, |10⟩, |11⟩},
ρ_ϵ = [ 0 0 0 0; 0 1/2 ϵ/2 0; 0 ϵ/2 1/2 0; 0 0 0 0 ]
and its partial transpose (with T the transpose operator on the first qubit) is
(T⊗I)(ρ_ϵ) = [ 0 0 0 ϵ/2; 0 1/2 0 0; 0 0 1/2 0; ϵ/2 0 0 0 ],
with characteristic equation (λ - 1/2)^2(λ^2 - ϵ^2/4) = 0 and eigenvalues 1/2, ϵ/2 and -ϵ/2; by the Peres criterion[Although the Peres Criterion is well known, it is provided for completeness of the exposition in appendix <ref>.] <cit.>, ρ_ϵ is thus entangled for all 1 > ϵ > 0 and, of course, δ(ρ,ρ_ϵ) ≤ ϵ.
In fact, there was no need to solve the characteristic equation to show that (T⊗I)(ρ_ϵ) is not positive semi definite. That can be seen directly from the matrix of (T⊗I)(ρ_ϵ), because there is a 0 on the main diagonal for which the corresponding row and column are not zero:
this is a consequence of the following well known lemma with |ϕ⟩ = |11⟩ and |ψ⟩ = |00⟩; indeed, since by the very definition of the partial transpose ⟨i_1j_1|(T⊗I)(ρ_ϵ)|i_2j_2⟩ = ⟨i_2j_1|ρ_ϵ|i_1j_2⟩, it follows that ⟨11|(T⊗I)(ρ_ϵ)|11⟩ = ⟨11|ρ_ϵ|11⟩ = 0 and ⟨11|(T⊗I)(ρ_ϵ)|00⟩ = ⟨01|ρ_ϵ|10⟩ ≠ 0.
Let A be a Hermitian operator on ℋ; if there are |ϕ⟩ and |ψ⟩ such that ⟨ϕ|A|ϕ⟩=0 and ⟨ϕ|A|ψ⟩≠ 0 then
A is not positive semi definite.
See appendix <ref>.
§.§ Example 4 — A Generalization
Example 4 generalizes examples 1, 2 and 3:
If the standard basis is the eigenbasis of a state ρ on ℋ_2⊗ℋ_2, and if the eigenvalue of |11⟩ is 0, then there are states arbitrarily close to ρ that are entangled. The same holds for any other eigenstate and any product eigenbasis.
Let indeed
ρ = λ_00 |00⟩⟨00| + λ_01 |01⟩⟨01| + λ_10 |10⟩⟨10|,
i.e. |11⟩ has eigenvalue λ_11 = 0. Let
ρ_1 = ρ_ψ_+ = 1/2[ |01⟩⟨01| + |01⟩⟨10| + |10⟩⟨01| + |10⟩⟨10| ]
and ρ_ϵ = (1-ϵ)ρ + ϵρ_1.
The partial transpose (T⊗I)(|i_1j_1⟩⟨i_2j_2|) on basis states being equal to |i_2j_1⟩⟨i_1j_2|, it is clear that (T⊗I)(ρ) = ρ.
The partial transpose of ρ_1 is
(T⊗I)(ρ_1) = 1/2[ |01⟩⟨01| + |11⟩⟨00| + |00⟩⟨11| + |10⟩⟨10| ].
It follows that
⟨11|(T⊗I)(ρ_ϵ)|11⟩ = 0, ⟨11|(T⊗I)(ρ_ϵ)|00⟩ = ϵ/2;
by lemma <ref>, (T⊗I)(ρ_ϵ) is not positive semi definite if ϵ > 0 and by the Peres criterion it follows that the state ρ_ϵ is then not separable; since δ(ρ,ρ_ϵ)≤ϵ, there are states arbitrarily close to ρ that are not separable.
Notice that all that is needed is that λ_11=0. Nothing prevents λ_01 = λ_10 = 0.
That implies, after a suitable choice of basis for the two systems, that any pure product state has arbitrarily close entangled states; being two-qubit states, they are also distillable <cit.>, showing that there are arbitrarily close distillable states.
By symmetry, the result clearly holds if any of the other eigenvalues is known to be 0 instead of λ_11. Moreover the choice of product basis is arbitrary and the same argument applies for any product basis.
§.§ Example 5. A discordant separable state with a non-product eigenbasis
Take the discordant state
ϱ = 1/2[ |00⟩⟨00| + |++⟩⟨++| ],
where, this time, the representation is not spectral because |00⟩ and |++⟩ are not part of an orthonormal basis of ℋ_2⊗ℋ_2; neither |00⟩ nor |++⟩ is an eigenvector of ϱ. The state ϱ is equal to
1/2[ |00⟩⟨00| + 1/4∑_ijkl|ij⟩⟨kl| ]
and is represented, in the standard basis, by the matrix
1/8[ 5 1 1 1; 1 1 1 1; 1 1 1 1; 1 1 1 1 ]
The state |++⟩ being equal to 1/2[ |00⟩ + |01⟩ + |10⟩ + |11⟩ ], it is easy to check that the states |ψ_0⟩ = (|00⟩ + |++⟩)/√(3) and |ψ_1⟩ = |00⟩ - |++⟩ are normalized and orthogonal, and also that
ϱ = 3/4|ψ_0⟩⟨ψ_0| + 1/4|ψ_1⟩⟨ψ_1|.
Since the spectral decomposition is unique, this shows that the spectral decomposition of ϱ has eigenvectors that are not separable even though ϱ is itself separable: separability of a state does not imply that its eigenbasis is made out of separable states.
As we will see in sections <ref> and <ref>, the absence of a separable eigenbasis is not a necessary and sufficient condition for discord.
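The spectral decomposition above can be verified directly (our sketch); the rank of the reshaped coefficient matrix shows that the leading eigenvector is entangled:

    import numpy as np

    rho = np.full((4, 4), 1/8); rho[0, 0] = 5/8
    vals, vecs = np.linalg.eigh(rho)
    print(np.round(vals, 6))          # 0, 0, 1/4, 3/4
    v = vecs[:, -1].reshape(2, 2)     # eigenvector for 3/4 as a 2x2 coefficient matrix
    print(np.linalg.matrix_rank(v))   # rank 2: the eigenvector is entangled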
If ρ = 1/2[ |00⟩⟨00| + |++⟩⟨++| ] and ρ_ϕ_+ = |ϕ_+⟩⟨ϕ_+| with |ϕ_+⟩ = 1/√(2)[ |00⟩ + |11⟩ ], then for 0 < ϵ ≤ 1, the states ρ_ϵ = (1-ϵ)ρ + ϵρ_ϕ_+ are all entangled.
Clearly (T⊗I)(ρ) = ρ. Also
(T⊗I)(ρ_ϕ_+) = 1/2[ |00⟩⟨00| + |10⟩⟨01| + |01⟩⟨10| + |11⟩⟨11| ].
Let |ψ_-⟩ = 1/√(2)[ |01⟩ - |10⟩ ]. A simple calculation shows that
⟨ψ_-|(T⊗I)(ρ_ϕ_+)|ψ_-⟩ = -1/2 and ⟨ψ_-|ρ|ψ_-⟩ = 0.
It follows that ⟨ψ_-|(T⊗I)(ρ_ϵ)|ψ_-⟩ = -ϵ/2, which shows directly that the partial transpose of ρ_ϵ is not positive semi-definite for ϵ > 0. The state ρ_ϵ is consequently entangled and the discordant state ρ is boundary separable.
§.§ Example 6 - A Classical-Quantum state
As mentioned in the introduction (sec <ref>) the definition of discord is asymmetric. The following state is classical with respect to A, but it is not
classical with respect to B; it becomes discordant under the interchange of the subsystems A ↔ B. Such states are often called
classical-quantum <cit.>.
Let
ρ = λ_00 |00⟩⟨00| + λ_01 |01⟩⟨01| + λ_1+ |1+⟩⟨1+| + λ_1- |1-⟩⟨1-|.
If any of the eigenvalues is 0, then there are states arbitrarily close to ρ that are entangled.
We first prove the case λ_00 = 0, i.e.
ρ = λ_01 |01⟩⟨01| + λ_1+ |1+⟩⟨1+| + λ_1- |1-⟩⟨1-|.
Let again
ρ_1 = 1/2[ |01⟩⟨01| + |01⟩⟨10| + |10⟩⟨01| + |10⟩⟨10| ] and ρ_ϵ = (1-ϵ)ρ + ϵρ_1. Then
⟨00|(T⊗I)(ρ_ϵ)|00⟩ = 0 and ⟨00|(T⊗I)(ρ_ϵ)|11⟩ = ϵ/2, so that (T⊗I)(ρ_ϵ) is not positive semi-definite by Lemma <ref> and ρ_ϵ is thus entangled by the Peres criterion.
Had we written the matrix explicitly, we would have seen the following pattern in the basis 00, 01, 10, 11 (entries not needed for the argument are marked ∗):
(T⊗I)(ρ_ϵ) = [ 0 0 0 ϵ/2; 0 ∗ ∗ ∗; 0 ∗ ∗ ∗; ϵ/2 ∗ ∗ ∗ ]
with a 0 entry on the main diagonal whose row is not identically 0, and concluded that (T⊗I)(ρ_ϵ) is not positive semi definite if ϵ ≠ 0.
In this proof, it was assumed that λ_00 = 0, but the same result holds if the eigenvalue of any other basis element is 0; for instance, if the eigenvalue of |1-⟩ is 0, then applying X⊗XH maps the basis onto itself and |1-⟩ onto |00⟩; for |1+⟩ we need to apply X⊗H, and for |01⟩ we apply I⊗X.
The state in equation (<ref>) has a product eigenbasis. We can make it discordant by interchanging the subsystems A ↔ B in which case it is a discordant state with a product eigenbasis.
§.§ Two qubits - discussion
In this section we provided a number of examples of boundary separable states. More generally, we showed (Theorem <ref>) that any two-qubit state which has a product eigenbasis and is not of full rank is on the boundary. We also showed that separable states may have eigenstates which are not separable (example 5) and that discordant states can have a separable eigenbasis (example 6).
§ TWO QUTRITS
Some of the subtleties of bipartite systems cannot be seen in qubit-qubit pairs and qubit-qutrit pairs. These include UPBs and bound entangled states <cit.> and locally indistinguishable product states <cit.>.
§.§ A mixture of locally indistinguishable product states
Consider the following states on a bipartite qutrit-qutrit system, where |a±b⟩ denotes the state 1/√(2)[|a⟩±|b⟩]:
|ψ_1⟩ = |1⟩⊗|1⟩,
|ψ_2⟩ = |0⟩⊗|0+1⟩,   |ψ_3⟩ = |0⟩⊗|0-1⟩,
|ψ_4⟩ = |2⟩⊗|1+2⟩,   |ψ_5⟩ = |2⟩⊗|1-2⟩,
|ψ_6⟩ = |1+2⟩⊗|0⟩,   |ψ_7⟩ = |1-2⟩⊗|0⟩,
|ψ_8⟩ = |0+1⟩⊗|2⟩,   |ψ_9⟩ = |0-1⟩⊗|2⟩.
The state ρ_0 = 1/8 ∑_i=2^9 |ψ_i⟩⟨ψ_i| is boundary separable.
Let |Ψ_+⟩ = 1/√(2)[ |01⟩ + |10⟩ ] and ρ_1 = |Ψ_+⟩⟨Ψ_+|.
Thus ρ_1 = 1/2[ |01⟩⟨01| + |10⟩⟨10| + |01⟩⟨10| + |10⟩⟨01| ].
Let ρ_ϵ = (1-ϵ)ρ_0 + ϵρ_1. For 2 ≤ i ≤ 9, ⟨01|ψ_i⟩ = 0 or ⟨ψ_i|10⟩ = 0, so that ⟨01|ρ_0|10⟩ = 0 and thus ⟨01|ρ_ϵ|10⟩ = ϵ/2.
Also ⟨11|ρ_0|11⟩ = ⟨11|ρ_1|11⟩ = 0 and so ⟨11|ρ_ϵ|11⟩ = 0. It follows that
⟨11|(T⊗I)(ρ_ϵ)|11⟩ = ⟨11|ρ_ϵ|11⟩ = 0 and
⟨11|(T⊗I)(ρ_ϵ)|00⟩ = ⟨01|ρ_ϵ|10⟩ = ϵ/2 ≠ 0, and thus, for 0 < ϵ < 1, ρ_ϵ is entangled.
See appendix <ref> for a matrix-based argument.
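The proposition can also be checked numerically. The sketch below (ours) builds ρ_0 from the eight product states ψ_2,…,ψ_9 and scans the smallest eigenvalue of the partial transpose along the interpolation:

    import numpy as np

    e = np.eye(3)
    pm = lambda a, b, s=1: (e[a] + s * e[b]) / np.sqrt(2)
    states = [np.kron(e[0], pm(0, 1)),  np.kron(e[0], pm(0, 1, -1)),
              np.kron(e[2], pm(1, 2)),  np.kron(e[2], pm(1, 2, -1)),
              np.kron(pm(1, 2), e[0]),  np.kron(pm(1, 2, -1), e[0]),
              np.kron(pm(0, 1), e[2]),  np.kron(pm(0, 1, -1), e[2])]
    rho0 = sum(np.outer(s, s) for s in states) / 8

    psi_p = (np.kron(e[0], e[1]) + np.kron(e[1], e[0])) / np.sqrt(2)
    for eps in [0.0, 1e-3, 1e-2]:
        rho = (1 - eps) * rho0 + eps * np.outer(psi_p, psi_p)
        pt = rho.reshape(3, 3, 3, 3).transpose(2, 1, 0, 3).reshape(9, 9)
        print(eps, np.linalg.eigvalsh(pt).min())   # ~0 at eps = 0, negative for eps > 0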
The states in eq (<ref>) cannot be distinguished using local operations <cit.> a property sometimes called non-locality without entanglement. In <cit.> it was shown that given a set of states and a prior probability distribution, there is no relation between discord in the resulting mixed state and this property. Similarly a mixture of these 9 states is generally discordant with a product eigenbasis.
§ TWO QUDITS (QUANTUM DIGITS)
We now consider bipartite systems, with each part of dimension at least two.
§.§ The ball of separable states
All states ρ in the finite ball Tr|ρ-𝟙/d|<1/d are separable and not boundary separable.
We know from the Gurvits-Barnum bound <cit.> that all states ρ with ‖ρ-𝟙/d‖_2<1/d are separable. Using the fact that ‖·‖_2 ≤ Tr|·| we can see that the maximally mixed state and any state close enough to it is not boundary separable.
There are separable states that are not boundary separable.
§.§ A general class of boundary separable states
We now consider states of bipartite systems ℋ_A⊗ℋ_B for which
dim ℋ_A ≥ 2 and dim ℋ_B ≥ 2.
Let ρ_0 be a state such that ⟨00|ρ_0|00⟩ = 0 and ⟨10|ρ_0|01⟩ = 0. Let
ρ_1 be such that ⟨00|ρ_1|00⟩ = 0 and ⟨10|ρ_1|01⟩ = re^iϕ with r>0.
Then ρ_ϵ = (1-ϵ)ρ_0 + ϵρ_1 is entangled and distillable for all 0 < ϵ≤ 1.
The conditions on ρ_0 and ρ_1 imply that
⟨00|ρ_ϵ|00⟩ = (1-ϵ)⟨00|ρ_0|00⟩ + ϵ⟨00|ρ_1|00⟩ = 0,
⟨10|ρ_ϵ|01⟩ = (1-ϵ)⟨10|ρ_0|01⟩ + ϵ⟨10|ρ_1|01⟩ =ϵ re^iϕ.
From (<ref>) and (<ref>) it follows that
⟨00|PT(ρ_ϵ)|00⟩ = 0 and
⟨00|PT(ρ_ϵ)|11⟩ = ϵ re^iϕ≠ 0 and consequently PT(ρ_ϵ) is not positive semi-definite and ρ_ϵ is entangled.
We can do better.
Let
|Ψ_θ⟩ = sin(θ)|11⟩ - e^iϕcos(θ)|00⟩. Then, since ⟨00|ρ_ϵ|00⟩ = 0, ⟨01|ρ_1|10⟩ = ⟨10|ρ_1|01⟩^* = re^-iϕ and
⟨Ψ_θ| = sin(θ)⟨11| - e^-iϕcos(θ)⟨00|, it follows that
⟨Ψ_θ|PT(ρ_ϵ)|Ψ_θ⟩
= sin^2(θ)⟨11|ρ_ϵ|11⟩ - 2sin(θ)cos(θ)rϵ.
If ⟨11|ρ_ϵ|11⟩=0, the result is negative for 0 < θ < π/2. Else,
for all rϵ≠ 0 there exists θ such that ⟨11|ρ_ϵ|11⟩ < 2cot(θ)rϵ,
i.e. such that ⟨Ψ_θ|PT(ρ_ϵ)|Ψ_θ⟩ < 0,
which implies that ρ_ϵ is entangled and distillable by a lemma of <cit.> (cf. appendix <ref>).
We shall now consider separable void states with a separable 0 eigenvector. They are not only boundary separable
but independently of the dimensions (larger than 2), they have arbitrarily close distillable entangled states.
Let ρ be a separable state of a bipartite system ℋ_A⊗ℋ_B (dim ℋ_A ≥ 2,
dim ℋ_B ≥ 2) that has a product state |ϕ_0ψ_0⟩ as eigenstate with 0 eigenvalue.
Then ρ is boundary separable; moreover there are entangled states arbitrarily close to ρ that are distillable.
We may assume that |ϕ_0⟩ = |0⟩_A and |ψ_0⟩ = |0⟩_B (cf Appendix <ref> and <ref>) so that
|ϕ_0ψ_0⟩ = |00⟩ (dropping the indices A and B, as was done till now) and perform the partial transpose
using the basis |0⟩, |1⟩, etc, of ℋ_A.
Since ρ is separable, PT(ρ) is a state and, from ⟨00|PT(ρ)|00⟩=
⟨00|ρ|00⟩ = 0,
it follows that |00⟩ is a 0 eigenvector of PT(ρ) and thus ⟨00|PT(ρ)|11⟩ = 0,
i.e. ⟨10|ρ|01⟩ = 0.
Let now ρ_1 be any state such that
⟨00|ρ_1|00⟩ = 0 and ⟨10|ρ_1|01⟩≠ 0.
The conditions of Lemma <ref> are satisfied and ρ_ϵ = (1-ϵ)ρ + ϵρ_1 is entangled
and distillable for all 0 < ϵ≤ 1.
All pure product states are boundary separable.
§.§ Discordant states
If ρ and τ are classical with respect to the basis {|i⟩} then so is
the state (1-t)ρ+tτ for any valid t.
That follows directly from Proposition <ref>.
The state (1-t)𝟙/d+tρ is discordant if and only if ρ is discordant,
where 𝟙 is the identity of
ℋ_A⊗ℋ_B and d = dim(ℋ_A⊗ℋ_B).
This follows directly from statement 4 in Proposition <ref> and the fact that is invariant under any dephasing channel.
Consider the
state
ρ=∑_i,jμ_ij|i⟩⟨i|⊗ U_i|j⟩⟨j|U_i^† ,
with the standard basis chosen such that the smallest eigenvalue is μ_00 and U_0=I.
Let ρ_1 be any state of the bipartite system s.t. ⟨00|ρ_1|00⟩ = 0 and ⟨10|ρ_1|01⟩≠ 0;
then the state
ρ_ϵ=(1-ϵ)ρ+ϵρ_1 is a discordant state for all 0<ϵ≤ 1.
Let
ρ_v=∑_i,j1/1-dμ_00(μ_ij-μ_00)|i⟩⟨i|⊗ U_i|j⟩⟨j|U_i^†
if ρ≠/d (the μ_ij are not all equal),
else let ρ_v = |00⟩⟨00|;
then ρ = zρ_v + (1-z)𝟙/d for z = 1-dμ_00, 0≤ z≤ 1,
⟨00|ρ_v|00⟩ = 0 and ⟨10|ρ_v|01⟩ = 0.
From proposition <ref> we know that
ρ_ϵ is discordant if and only if the state ρ̂_ϵ,z = kρ_ϵ,z obtained by normalizing ρ_ϵ,z = (1-ϵ)zρ_v + ϵρ_1 is discordant.
It holds that ρ̂_ϵ,z = (1-ϵ')ρ_v + ϵ'ρ_1 with ϵ' = kϵ
(0 < ϵ' ≤ 1).
By Lemma <ref>,
ρ̂_ϵ,z is entangled (and distillable), so that ρ_ϵ is discordant and ρ is boundary
classical.
All classical states are boundary classical.
We note that one could arrive at corollary <ref> by using the fact that the set of classical states is nowhere dense <cit.>.
Proposition <ref> provides a method to construct ϵ-discordant states. If a classical state ρ (or any state ρ∈C_A)
is not boundary separable the ϵ-discordant state is also dissonant (i.e.it is separable). In general there is no direct relation between discord and boundary separable states.
There are classical states (and states in C_A) that are boundary separable, and discordant states that are not boundary separable.
Most of the examples of boundary separable states above are classical with respect to A. Moreover all pure product states are also uncorrelated and boundary separable.
From proposition <ref> we know there are discordant states arbitrarily close to the maximally mixed state. From proposition <ref> we know that these states are not boundary separable.
§.§ Two qudits - discussion
In this section we showed that there are separable states that are not boundary separable (corollary <ref>). We then presented a general class of boundary separable states and showed that in general all pure product states are boundary separable (Corollary <ref>). Finally we discussed depolarized discordant states (Theorem <ref>), showed that all classical states are boundary classical (Corollary <ref>) and that there is no direct relation between discord and boundary separable states.
§ MULTIPLE QUBITS
§.§ Extrapolated Pseudo-Pure States of N Qubits
Let us consider states of the form
ρ_t = (1-t) I/2^N + t |1…1⟩⟨1…1|
where I is the identity matrix, but this time of size 2^N× 2^N, and t < 0. With (1-t_b) + 2^N t_b = 0, i.e. t_b = -1/(2^N-1),
ρ_b = ρ_t_b becomes a 1-void state, with |1…1⟩ as 0-eigenvector. The states ρ_t for t_b ≤ t ≤ 0 are all
clearly separable; their matrix is diagonal in the standard basis, with non-negative eigenvalues. Only the eigenvalue of |1…1⟩ decreases.
§.§.§ ρ_b Is a Boundary Separable State.
We choose arbitrarily the first bit and
show that there are ϵ-close entangled states for which the first qubit is entangled with the others. Let
|𝟏⟩ = |1^N-1⟩, i.e. N-1 bits equal to one.
The eigenstate of ρ_b with 0 eigenvalue can be written as |1^N⟩ = |1⟩|𝟏⟩ and
Proposition <ref> applies.
§.§.§ Trace Distance Between ϵ-Entangled States and the Completely Mixed State.
The trace distance between ρ_b and I/2^N is
(1/2) Tr| (1-t_b)I/2^N + t_b|1^N⟩⟨1^N| - I/2^N|
= (|t_b|/2) Tr| I/2^N - |1^N⟩⟨1^N| |.
The trace of |I/2^N - |1^N⟩⟨1^N|| is (2^N-1)× 1/2^N + 1 - 1/2^N = 2 - 2/2^N.
The trace distance is thus
δ(I/2^N, ρ_b) = [1/(2^N-1)](1-1/2^N) = 1/2^N.
Conclusion: for any ϵ >0 there are entangled states at distance at most 2^-N+ϵ of the completely mixed state. Indeed, by the triangle inequality,
δ(I/2^N,ρ_ϵ) ≤δ(I/2^N,ρ_b) + δ(ρ_b,ρ_ϵ) ≤ 2^-N + ϵ
.
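The 2^-N scaling of δ(I/2^N, ρ_b) can also be checked numerically; here is a small sketch (our own illustration, with N kept small so that the 2^N × 2^N matrices stay tiny):

import numpy as np

def trace_distance(a, b):
    # delta(a, b) = (1/2) Tr|a - b|, computed from the eigenvalues of a - b
    return 0.5 * np.abs(np.linalg.eigvalsh(a - b)).sum()

for N in (2, 3, 4):
    d = 2 ** N
    t_b = -1.0 / (d - 1)                   # value at which |1...1> gets eigenvalue 0
    P = np.zeros((d, d)); P[-1, -1] = 1.0  # |1...1><1...1| in the standard basis
    rho_b = (1 - t_b) * np.eye(d) / d + t_b * P
    print(N, trace_distance(np.eye(d) / d, rho_b), 1.0 / d)  # last two columns agree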
§.§ The N Qubit Thermal State
The thermal state of one qubit is
ρ_Θ = [ 1+η/2 0; 0 1-η/2 ] = 1+η/2 |0⟩⟨0| + 1-η/2 |1⟩⟨1|.
The thermal state of N independent qubits (with the same η) is
ρ^N_Θ = ρ_Θ^⊗ N = ∑_i∈{0,1}^N(1+η/2)^N-|i|(1-η/2)^|i||i⟩⟨i|.
where |i| is the Hamming weight of the string i, i.e. the number of bits equal to 1 in i: each bit equal to 1 contributes a factor (1-η)/2,
and each 0 a factor (1+η)/2.
The thermal state is not only separable, but it has an eigenbasis consisting of product states.
The smallest eigenvalue is the one of the eigenvector |i⟩ = |1^N⟩, i.e. all qubits equal to 1, and it is
λ_|1^N⟩ = ((1-η)/2)^N
which is exponentially small in N.
§.§.§ Extrapolated States Close to the Thermal State.
Let us consider the extrapolated states
ϱ_t = (1-t)ρ_Θ^N + t |1^N⟩⟨1^N|
for t<0 (t = -p for some positive real number p). They are all separable and when the eigenvalue of |1^N⟩⟨1^N| becomes 0, ϱ_t is a void state.
That happens when (1-t)[(1-η)/2]^N + t = 0, i.e.
t_b = -λ_|1^N⟩/(1-λ_|1^N⟩) = -λ_|1^N⟩ - λ_|1^N⟩^2 - …
a very small value, equal to -λ_|1^N⟩ = -((1-η)/2)^N if we neglect terms of higher order.
The trace distance between ϱ_b and ρ_Θ^N is
δ(ϱ_b,ρ_Θ^N) = (1/2) Tr|(1-t_b)ρ_Θ^N + t_b|1^N⟩⟨1^N| - ρ_Θ^N|
= (|t_b|/2) Tr|ρ_Θ^N - |1^N⟩⟨1^N||.
The eigenvectors of ρ_Θ^N - |1^N⟩⟨1^N| are those of ρ_Θ^N and the eigenvalues are left unchanged except
for the eigenvector |1^N⟩ whose eigenvalue of λ_|1^N⟩ is decreased by 1 which implies that the sum of the
absolute values of the eigenvalues is increased by 1 - λ_|1^N⟩ and
δ(ρ_Θ^N, ϱ_b) = (|t_b|/2)(2 - λ_|1^N⟩)
= (1/2) [λ_|1^N⟩/(1-λ_|1^N⟩)] (2 - λ_|1^N⟩)
= (1/2)(λ_|1^N⟩ + λ_|1^N⟩/(1-λ_|1^N⟩))
= λ_|1^N⟩ + (1/2)λ_|1^N⟩^2 + (1/2)λ_|1^N⟩^3 + …
which is λ_|1^N⟩ if we neglect terms of higher order. That distance is exponentially small with N.
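To give a feeling for the numbers, the following sketch evaluates λ_|1^N⟩ and the expansion above for an illustrative polarization η = 0.01 (a value of our choosing, not taken from the text):

eta = 0.01  # illustrative polarization; our own arbitrary choice
for N in (5, 10, 20):
    lam = ((1 - eta) / 2) ** N                 # smallest eigenvalue lambda_{|1^N>}
    dist = lam + 0.5 * lam**2 + 0.5 * lam**3   # delta(rho_Theta^N, rho_b) to third order
    print(N, lam, dist)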
§.§.§ ϱ_b Is a Boundary Separable State.
We now show that there are entangled states arbitrarily close to ϱ_b.
We choose again arbitrarily the first bit and
show that there are ϵ-close entangled states for which the first qubit is entangled with the others. Let
|𝟏⟩ = |1^N-1⟩, i.e. N-1 bits equal to one, and let |v⟩ be any (N-1)-bit string with at least one
bit equal to zero. The eigenstate of ϱ_b with 0 eigenvalue is |1^N⟩ = |1⟩|𝟏⟩.
Proposition <ref> applies again.
§.§.§ Entangled States Close to the Thermal State
We have just proven that for any ϵ > 0, there are
entangled states ϱ_ϵ such that δ(ϱ_b,ϱ_ϵ) ≤ϵ.
By the triangle inequality (since the trace distance is a distance in the sense of metric spaces),
the distance between those states ϱ_ϵ and ρ_Θ^N is such that
δ(ρ_Θ^N,ϱ_ϵ) ≤δ(ρ_Θ^N,ϱ_b) + δ(ϱ_b,ϱ_ϵ) ≤δ(ρ_Θ^N,ϱ_b) + ϵ
which implies that for any ϵ >0 there are entangled states in a ball of trace-distance radius
ϵ + ((1-η)/2)^N + (1/2)((1-η)/2)^2N + (1/2)((1-η)/2)^3N…
around the thermal state ρ_Θ^N of N qubits, where ((1-η)/2)^N = λ_|1^N⟩ is exponentially small in N.
§ DISCUSSION
We used extrapolation and interpolation to study the boundaries of some subsets of states and to make some connections between different notions of entanglement and quantum correlations. The majority of our results concern boundary separable states. We showed various classes of these states that play a significant role in quantum computing, in particular the classically correlated states in the computational basis and the thermal states.
Our results are related to results on robustness against various types of noise. States near the boundary are generally more fragile than those far away from it. It is then interesting to note that although thermal states are not entangled, they can become entangled by small fluctuations in the right direction; moreover the entanglement is distillable.
§.§ Discord and entanglement
While discord and entanglement are very different from an operational perspective <cit.>, pure state entanglement and discord (for all states) share many similar mathematical properties <cit.>. Many of these appeared in the work above, in particular in the property of boundaries. All classical states
are boundary classical (Corollary <ref>), similarly all pure product are at the boundary of the set of entangled states (Corollary <ref>). This opens a number of interesting questions regarding operations on pure and mixed states. For example we showed in proposition <ref>
that mixing any classical state (and any state classical with respect to A) with an entangled state will make it discordant; similarly, mixing a pure product state with an entangled state will make it entangled (Corollary <ref>). However the former is not an extension of the latter to mixed states, since both take pure states to mixed states. If one considers unitary operations on pure states there is some discrepancy: universal entanglers (unitary operations that entangle all pure states) are known to exist only in higher dimensions <cit.>.
§.§ Quantum computing
It is currently an open question whether it is possible to efficiently simulate all quantum computations that produce (or consume) no entanglement <cit.>. Surprisingly, it is not even clear whether it is possible to simulate all quantum computations that produce (or consume) no discord. Boundary (classical or separable) states may play a critical role in these types of simulations since even small errors can cause the states to become discordant or entangled. This issue was pointed out for the case of discord-free (concordant) computation in <cit.>, where the entire computation happens on the boundary (see also <cit.>).
A second issue is related to the entangling power of mixed-state quantum computers, where the initial state is fixed. Here we showed that the thermal states used in various mixed-state models can become entangled after a small perturbation. However, we did not discuss the physical mechanism for such perturbations. In a follow-up paper <cit.> we explore perturbations due to unitary operations that are ϵ-close to the identity. A question for future research involves the possible use of thermal states and some non-local operations to distill entanglement. This question is especially important in the setting of NMR quantum computing.
§ ACKNOWLEDGMENTS
MB was partly supported by NSERC and FCAR through INTRIQ.
AB was partly supported by NSERC, Industry Canada and CIFAR.
TM was partly supported by the Israeli MOD. AB and TM were partly supported The Gerald Schwartz & Heather Reisman Foundation. AB is currently at the Center for Quantum Information and Quantum Control at the University of Toronto.
spbasic
26
urlstyle
[Acín et al(2010)Acín, Augusiak, Cavalcanti, Hadley, Korbicz,
Lewenstein, Masanes, and Piani]nonsig
Acín A, Augusiak R, Cavalcanti D, Hadley C, Korbicz JK, Lewenstein M, Masanes
L, Piani M (2010) Unified framework for correlations in terms of local
quantum observables. Phys Rev Lett 104:140,404,
10.1103/PhysRevLett.104.140404
[Bengtsson and Życzkowski(2006)]geometrybook
Bengtsson I, Życzkowski K (2006) Geometry of quantum states: an introduction
to quantum entanglement. Cambridge University Press
[Bennett et al(1999a)Bennett, DiVincenzo, Fuchs, Mor,
Rains, Shor, Smolin, and Wootters]NLWE
Bennett CH, DiVincenzo DP, Fuchs CA, Mor T, Rains E, Shor PW, Smolin JA,
Wootters WK (1999a) Quantum nonlocality without entanglement.
Phys Rev A 59:1070–1091, 10.1103/PhysRevA.59.1070
[Bennett et al(1999b)Bennett, DiVincenzo, Mor, Shor,
Smolin, and Terhal]UPB
Bennett CH, DiVincenzo DP, Mor T, Shor PW, Smolin JA, Terhal BM
(1999b) Unextendible product bases and bound entanglement. Phys
Rev Lett 82:5385–5388, 10.1103/PhysRevLett.82.5385
[Boyer and Mor(2014)]TPNC-2014
Boyer M, Mor T (2014) Extrapolated states, void states, and a huge novel class
of distillable entangled states. In: Dediu AH, Lozano M, Martín-Vide C (eds)
Theory and Practice of Natural Computing, Lecture Notes in Computer Science,
vol 8890, Springer International Publishing, pp 107–118,
10.1007/978-3-319-13749-0_10
[Boyer Brodutch and Mor(2017)]BBM2017
Boyer M, Brodutch A, Mor T (2017) Entanglement and deterministic quantum computing with one qubit
arXiv:1606.05283
[Braunstein et al(1999)Braunstein, Caves, Jozsa, Linden, Popescu, and
Schack]ball
Braunstein SL, Caves CM, Jozsa R, Linden N, Popescu S, Schack R (1999)
Separability of very noisy mixed states and implications for nmr quantum
computing. Phys Rev Lett 83:1054–1057, 10.1103/PhysRevLett.83.1054
[Brodutch(2013)]ABPRA
Brodutch A (2013) Discord and quantum computational resources. Phys Rev A
88:022,307, 10.1103/PhysRevA.88.022307
[Brodutch and Terno(2010)]BTdemons
Brodutch A, Terno DR (2010) Quantum discord, local operations, and maxwell's
demons. Phys Rev A 81:062,103, 10.1103/PhysRevA.81.062103
[Brunner et al(2014)Brunner, Cavalcanti, Pironio, Scarani, and
Wehner]BellRMP
Brunner N, Cavalcanti D, Pironio S, Scarani V, Wehner S (2014) Bell
nonlocality. Rev Mod Phys 86:419–478, 10.1103/RevModPhys.86.419
[Cable and Browne(2015)]CableBrowne
Cable H, Browne D (2015)
Exact and efficient simulation of concordant computation
N. J. Phys. 17:113049 10.1088/1367-2630/17/11/113049
[Chen et al(2008)Chen, Duan, Ji, Ying, and Yu]UE
Chen J, Duan R, Ji Z, Ying M, Yu J (2008) Existence of universal entangler. J
Math Phys 49(1):012103, 10.1063/1.2829895
[Datta and Shaji(2011)]DattaShaji
Datta A, Shaji A (2011) Quantum Discord and Quantum Computing - An Appraisal
Int. J. Quant. Info. 9:1787 10.1142/S0219749911008416
[Dür et al(2000)Dür, Vidal, and Cirac]PhysRevA.62.062314
Dür W, Vidal G, Cirac JI (2000) Three qubits can be entangled in two
inequivalent ways. Phys Rev A 62:062,314, 10.1103/PhysRevA.62.062314
[Ferraro et al(2010)Ferraro, Aolita, Cavalcanti, Cucchietti, and
Acín]classicalvolume
Ferraro A, Aolita L, Cavalcanti D, Cucchietti FM, Acín A (2010) Almost all
quantum states have nonclassical correlations. Phys Rev A 81:052,318,
10.1103/PhysRevA.81.052318
[Gühne and Lütkenhaus(2007)]witnesses
Gühne O, Lütkenhaus N (2007) Nonlinear entanglement witnesses, covariance
matrices and the geometry of separable states. J Phys: Conf Ser
67(1):012,004, 10.1088/1742-6596/67/1/012004
[Groisman et al(2007)Groisman, Kenigsberg, and Mor]Groisman2007
Groisman B, Kenigsberg D, Mor T (2007) “Quantumness” versus
“classicality” of quantum states. arXiv:quant-ph/0703103
[Gurvits(2003)]Gurvits:2003:CDC:780542.780545
Gurvits L (2003) Classical deterministic complexity of edmonds' problem and
quantum entanglement. In: Proceedings of the Thirty-fifth Annual ACM
Symposium on Theory of Computing, ACM, New York, NY, USA, STOC '03, pp
10–19, 10.1145/780542.780545
[Gurvits and Barnum (2002)]GB
Gurvits L, Barnum H (2002) Largest separable balls around the maximally mixed bipartite quantum state
Phys. Rev. A 66:062311 – 10.1103/PhysRevA.66.062311
[Henderson and Vedral(2001)]HV
Henderson L, Vedral V (2001) Classical, quantum and total correlations. J Phys
A: Math Gen 34(35):6899, 10.1088/0305-4470/34/35/315
[Horodecki et al(1997)Horodecki, Horodecki, and
Horodecki]HorodeckiPRL97
Horodecki M, Horodecki P, Horodecki R (1997) Inseparable two spin-
1/2 density matrices can be distilled to a singlet form. Phys Rev
Lett 78:574–577, 10.1103/PhysRevLett.78.574
[Horodecki(1997)]Horodecki1997333
Horodecki P (1997) Separability criterion and inseparable mixed states with
positive partial transposition. Phys Lett A 232(5):333 – 339,
10.1016/S0375-9601(97)00416-7
[Horodecki et al(2009)Horodecki, Horodecki, Horodecki, and
Horodecki]EntanglementRMP
Horodecki R, Horodecki P, Horodecki M, Horodecki K (2009) Quantum entanglement.
Rev Mod Phys 81:865–942, 10.1103/RevModPhys.81.865
[Kraus et al(2002)Kraus, Lewenstein, and Cirac]PhysRevA.65.042327
Kraus B, Lewenstein M, Cirac JI (2002) Characterization of distillable and
activatable states using entanglement witnesses. Phys Rev A 65:042,327,
10.1103/PhysRevA.65.042327
[Modi et al(2010)Modi, Paterek, Son, Vedral, and Williamson]unified
Modi K, Paterek T, Son W, Vedral V, Williamson M (2010) Unified view of quantum
and classical correlations. Phys Rev Lett 104:080,501,
10.1103/PhysRevLett.104.080501
[Modi et al(2012)Modi, Brodutch, Cable, Paterek, and
Vedral]DiscordRMP
Modi K, Brodutch A, Cable H, Paterek T, Vedral V (2012) The classical-quantum
boundary for correlations: Discord and related measures. Rev Mod Phys
84:1655–1707, 10.1103/RevModPhys.84.1655
[Ollivier and Zurek(2001)]OZ
Ollivier H, Zurek WH (2001) Quantum discord: A measure of the quantumness of
correlations. Phys Rev Lett 88:017,901, 10.1103/PhysRevLett.88.017901
[Peres(1996)]Peres96
Peres A (1996) Separability criterion for density matrices. Phys Rev Lett
77:1413–1415, 10.1103/PhysRevLett.77.1413
[Vidal and Tarrach(1999)]vidal
Vidal G, Tarrach R (1999) Robustness of entanglement. Phys Rev A 59:141–155,
10.1103/PhysRevA.59.141
[Werner(1989)]Werner
Werner RF (1989) Quantum states with einstein-podolsky-rosen correlations
admitting a hidden-variable model. Phys Rev A 40:4277–4281,
10.1103/PhysRevA.40.4277
§ APPENDIX
§ THE PERES ENTANGLEMENT CRITERION
Here are a few relevant remarks using the notations of the main article.
§.§ Transpose and partial transpose
Given a Hilbert space ℋ and a basis {|i⟩} (we always assume finite dimensional systems), the transpose is defined by linearity on basis operators |i_1⟩⟨i_2| by T(|i_1⟩⟨i_2|) = |i_2⟩⟨i_1|.
It follows that for any linear operator L,
⟨i_1|T(L)|i_2⟩ = ⟨i_2| L |i_1⟩.
If ρ is a state and ρ = ∑_i λ_i |ϕ_i⟩⟨ϕ_i|, one can check that
T(ρ) = ∑_i λ_i |ϕ̄_i⟩⟨ϕ̄_i| where
|ϕ̄⟩ = ∑_i ā_i |i⟩ if |ϕ⟩ = ∑_i a_i|i⟩, ā_i
being the complex conjugate of a_i. It follows that T(ρ) is also a state, with the same eigenvalues as ρ.
Given a compound system described by ℋ_A⊗ℋ_B, the partial transpose with respect to the A
system is simply the operator PT = T⊗ I, i.e.
PT(|i_1⟩⟨i_2|⊗|j_1⟩⟨j_2|) = |i_2⟩⟨i_1|⊗|j_1⟩⟨j_2|
on basis elements. It also follows that for any operator L on ℋ_A⊗ℋ_B
⟨i_1j_1|PT(L)|i_2j_2⟩ = ⟨i_2j_1| L |i_1j_2⟩
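In code this index bookkeeping is a single reshape/transpose; the following numpy sketch (our illustration, assuming the matrix is laid out in the product basis with row index i_1·dim ℋ_B + j_1) implements the map above for arbitrary dimensions:

import numpy as np

def partial_transpose_A(rho, dA, dB):
    # partial transpose over subsystem A of a (dA*dB) x (dA*dB) matrix
    r = rho.reshape(dA, dB, dA, dB)  # axes correspond to (i1, j1, i2, j2)
    return r.transpose(2, 1, 0, 3).reshape(dA * dB, dA * dB)

# sanity check on the two-qubit singlet: its partial transpose has eigenvalue -1/2
s = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
print(np.linalg.eigvalsh(partial_transpose_A(np.outer(s, s), 2, 2)).min())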
§.§ The Peres Criterion
A state ρ of a bipartite system ℋ_A⊗ℋ_B is said to be separable if it
can be written in the form
ρ = ∑_i p_i ρ^A_i ⊗ρ^B_i p_i ≥ 0, ∑_i p_i = 1
where the ρ^A_i (resp. ρ^B_i) are states of ℋ_A (resp. ℋ_B);
if ρ is not separable, it is
said to be entangled.
If ρ is given by (<ref>), then
PT(ρ) = ∑_i p_i T(ρ^A_i)⊗ρ^B_i
and since the T(ρ^A_i) are states, this implies that PT(ρ) is itself a state (and separable).
This implies in turn that PT(ρ) must be positive semi-definite. As a consequence, if PT(ρ) is
not positive semi-definite, then ρ is not separable, i.e. it is entangled.
That is the statement of the Peres Criterion of entanglement <cit.>.
§.§ Checking for positivity
An operator P is positive semi-definite if it is Hermitian and if for all pure states |ϕ⟩,
⟨ϕ| P|ϕ⟩≥ 0 (iff P has no negative eigenvalue).
For any state ρ of ℋ_A⊗ℋ_B, PT(ρ) is always
Hermitian.
To prove that it is not positive semi-definite, we need only find a |Ψ⟩ such that
⟨Ψ|PT(ρ)|Ψ⟩ < 0. The partial transpose however depends on the basis chosen for ℋ_A.
We now show (using our notations) that whether PT(ρ) is positive semi-definite or not does not depend on the choice of that basis.
Indeed, let |e_i⟩ be any orthonormal basis of ℋ_A. Then ρ can always be written (in a unique way)
as ρ = ∑_ij|e_i⟩⟨e_j|⊗ρ_ij where the ρ_ij are operators of ℋ_B.
Let T_e be the transpose operator in the basis e, i.e. T_e(|e_i⟩⟨e_j|) = |e_j⟩⟨e_i|.
Then
(T_e⊗ I)(ρ) = ∑_ij T_e(|e_i⟩⟨e_j|)⊗ρ_ij = ∑_ij |e_j⟩⟨e_i|⊗ρ_ij
PT(ρ) = ∑_ij T(|e_i⟩⟨e_j|)⊗ρ_ij = ∑_ij |ē_j⟩⟨ē_i|⊗ρ_ij
The |ē_i⟩ also form an orthonormal basis of ℋ_A.
Now let |Ψ_e⟩ = ∑_i |e_i⟩|ψ_i⟩ be any pure state of ℋ_A⊗ℋ_B.
Then |Ψ̄⟩ = ∑_i |ē_i⟩|ψ_i⟩ is also a pure state and
⟨Ψ̄|PT(ρ)|Ψ̄⟩ = ⟨Ψ_e|(T_e⊗ I)(ρ)|Ψ_e⟩ = ∑_ij⟨ψ_j|ρ_ij|ψ_i⟩
§.§ Proof of Lemma <ref>
Let us assume P is positive semidefinite:
P=∑_i λ_i |ϕ_i⟩⟨ϕ_i| with λ_i≥ 0.
If ⟨ϕ|P|ϕ⟩ = 0, then ∑_i λ_i |⟨ϕ|ϕ_i⟩|^2 = 0 and
λ_i⟨ϕ|ϕ_i⟩ =0 for all i
and thus ⟨ϕ|P|ψ⟩ = ∑_i λ_i ⟨ϕ|ϕ_i⟩⟨ϕ_i|ψ⟩ = 0 for
all |ψ⟩.
§ DISTILLABILITY
Note that the Peres Criterion is not a characterization. If the partial transpose of ρ is positive semi-definite,
ρ may still be entangled.
Furthermore, if a state ρ_ppt-ent
is entangled and
admits a positive partial transpose then it is not distillable
(namely, one cannot distill a singlet state out of many copies of
ρ_ppt-ent via local operations and classical communication).
Such states are said to have “bound entanglement”.
A characterization of distillable states can be found in <cit.>.
Here is the lemma as we use it, as stated in <cit.>.
A state ρ of ℋ_A⊗ℋ_B is distillable if and only if there exists a positive integer N and
a state |Ψ⟩ = |e_1f_1⟩ + |e_2f_2⟩ such that
⟨Ψ| PT(ρ^⊗ N) |Ψ⟩ < 0,
where {e_1,e_2} (resp. {f_1,f_2}) are two unnormalized orthogonal vectors of ℋ_A^⊗ N
(resp. ℋ_B^⊗ N).
§ PROOF OF PROPOSITION <REF> USING MATRICES
When the states |ij⟩ are put in lexicographic order, the partial transpose corresponds to transposing blocks in the block matrix, whereas (I⊗T) corresponds to transposing each of the blocks individually.
The matrix of Proposition <ref> is a 3× 3 block matrix, with 3× 3 blocks.
We first calculate for both ρ_0 and ρ_1 the entries (11,11) and (01,10) (row 01, column 10 of their matrix). Those are ⟨11|ρ_0|11⟩=0 and ⟨01|ρ_0|10⟩=0 for ρ_0, and
⟨11|ρ_1|11⟩=0 and ⟨01|ρ_1|10⟩=1/2 for ρ_1. Those values were obtained in the main text.
The matrices for ρ_0 and ρ_1, written in the lexicographically ordered basis {|00⟩, |01⟩, …, |22⟩}, then contain the following relevant entries (all entries not listed play no role in the argument):
(ρ_0)_01,10 = 0,  (ρ_0)_11,11 = 0;   (ρ_1)_01,10 = 1/2,  (ρ_1)_11,11 = 0.
Then the 3× 3 block matrix is transposed, which moves the (01,10) entry to position (11,00), giving respectively for PT(ρ_0) and PT(ρ_1):
(PT(ρ_0))_11,00 = 0,  (PT(ρ_0))_11,11 = 0;   (PT(ρ_1))_11,00 = 1/2,  (PT(ρ_1))_11,11 = 0.
The matrix of PT(ρ_ϵ) = (1-ϵ)PT(ρ_0) + ϵPT(ρ_1) then has
(PT(ρ_ϵ))_11,00 = ϵ/2,  (PT(ρ_ϵ))_11,11 = 0.
We see clearly that the matrix of PT(ρ_ϵ) has a 0 diagonal entry for which the corresponding row (or column) is not identically 0. That implies that the matrix is not positive
semi-definite and consequently that ρ_ϵ is entangled.
Of course, the blank values in the density operator for ρ_1 could take any value without affecting the result;
in fact any density operator ρ_1 such that ⟨11|ρ_1|11⟩ = 0 and ⟨01|ρ_1|10⟩≠ 0
could have been used instead to give entangled states that are arbitrarily close to ρ_0.
| http://arxiv.org/abs/1701.08035v1 | 20170127124349 | Distance biases in the estimation of the physical properties of Hi-GAL compact sources-I. Clump properties and the identification of high-mass star forming candidates | [ "Adriano Baldeschi", "Davide Elia", "Sergio Molinari", "Stefano Pezzuto", "Eugenio Schisano", "Marco Gatti", "Andrea Serra", "Milena Benedettini", "Anna Maria Di Giorgio", "John Scige Liu", "Manuel Merello" ] | astro-ph.GA | [ "astro-ph.GA" ] |
Distance biases in the estimation of the physical properties of Hi-GAL compact sources - I. Clump properties and the identification of high-mass star-forming candidates
Adriano Baldeschi, Davide Elia, Sergio Molinari, Stefano Pezzuto, Eugenio Schisano, Marco Gatti, Andrea Serra, Milena Benedettini, Anna Maria Di Giorgio, John Scige Liu, Manuel Merello
======================================================================================
The degradation of spatial resolution in star-forming regions observed at large distances (d≳1 kpc)
with Herschel,
can lead to estimates of the physical parameters
of the detected compact sources
(clumps) which do not necessarily mirror the properties of the original population
of cores. This paper aims at quantifying the
bias introduced in the estimation of these parameters by the distance effect.
To do so, we consider Herschel maps of nearby star-forming
regions taken from the Herschel-Gould-Belt survey, and simulate the effect of increased distance to understand
what amount of information is lost when a distant
star-forming region is observed with Herschel resolution.
We extract compact sources in the maps displaced to the different distances,
and derive their physical parameters
as if the displaced maps were original Hi-GAL maps.
In this way, we are able to discuss how the main physical properties
change with distance.
In particular, we discuss the ability of clumps to form massive stars:
we estimate the fraction of distant sources that are classified as high-mass star-forming objects, based on their position
in the mass vs radius diagram, but that are actually only “false positives”.
We also give a threshold for high-mass star formation: M>1282 (r/ [pc])^1.42 M_⊙.
In conclusion, this paper provides the astronomer dealing with Herschel
maps of distant star-forming regions with a set of prescriptions
to partially recover the character of the core population in
unresolved clumps.
ISM: clouds, stars: formation, infrared: ISM, methods: statistical.
§ INTRODUCTION
The impact of massive stars on the Milky Way is predominant with respect to that of low-mass stars: they
produce most of the heavy elements and energize the interstellar medium (ISM) through the emission
of ultraviolet photons. Therefore, understanding massive star formation is one of the most important
goals of modern Astrophysics <cit.>.
The Herschel infrared Galactic Plane Survey <cit.>, based on photometric observations in five bands between
70 and 500 μm, was designed to study the early phases of star formation across the Galactic plane,
with particular interest in the high-mass regime <cit.>.
Star forming regions observed in Hi-GAL span a wide range of heliocentric distances
<cit.>, therefore the physical size of the detected
compact sources (i.e. not resolved or poorly resolved) could correspond to quite different types of structures,
depending on the combination of their angular size and distance.
The smallest and densest structures in the ISM, considered as the last product of cloud fragmentation and hence
progenitors of single stars or multiple systems, are called dense cores <cit.>,
while larger unresolved overdensities in Giant Molecular Clouds which host these cores are called clumps (0.2 pc≲ D ≲ 3 pc).
In a few cases, very distant unresolved Hi-GAL sources may have a diameter D>3 pc <cit.>, even fulfilling
the definition of cloud. Correspondingly, other distance-dependent source properties, as mass and luminosity,
are found to span a wide range of values, from typical conditions of a core to those of an entire cloud.
Unfortunately, for distant sources Herschel is not able to resolve the internal structure and the contained
population of
cores <cit.>, therefore only global and/or averaged parameters can be quoted to describe the physical conditions
of the source and the characteristics of possible star formation ongoing in its interior.
Observations at higher resolution would be needed (for example
by means of sub-mm interferometry) to fully resolve any individual clump identifying all its single components,
but this would require very large observing programs with different
facilities, to cover the entire Galactic plane.
To overcome empirically the lack of spatial resolution, we pursued a completely different approach, namely
to consider Herschel maps of nearby star forming regions and degrade their spatial resolution to simulate the view of the same region
if located at a larger heliocentric distance. We implemented this idea using some nearby (d < 0.5 kpc) molecular
clouds observed in the Herschel Gould Belt survey <cit.>, where the compact sources correspond
to dense cores.
Obviously, these maps do not represent the “reality”, since those regions are in turn located at heliocentric distances of some hundreds of pc.
However they constitute the closest, then best resolved, available view of star-forming regions on which to base our analysis.
“Moving away” these regions to farther distances, we aim to understand not only how a region
would appear if it were placed at a distance farther than the actual, but mainly at linking the physical properties
of the compact sources detected in the new maps with those of the underlying source populations present in the
original maps. In this way we can evaluate the degree of information lost as a function of distance, in other words the distance
bias affecting the estimation of Hi-GAL compact source physical properties. To reach this goal, we probed a set of
different simulated distances for each considered region, and at each distance we
repeated the typical procedures applied to Hi-GAL maps for extracting the compact sources <cit.>, treating the
simulated maps as a completely new data set, with no reminiscence either to the original map or to those “moved”
at other distances.
This first paper is organized as follows:
in section <ref> we present the regions of the Gould Belt survey that we use in this paper,
in section <ref> we describe the procedure of “moving away” the regions, and in
section <ref> we report how the detection and the photometry of the sources in all the produced
maps has been carried out.
In sections <ref> and <ref> we describe how the number of detected sources and
the fraction of starless-to-protostellar sources change with distance.
In section <ref> we present the distribution of the sizes of the detected objects as a function of distance.
Section <ref> shows a procedure to associate the sources detected at different distances with the original ones.
Section <ref> describes the uncertainties in the average temperature of protostellar and prestellar
objects due to the distance effect.
Section <ref> describes the distance bias in the mass vs radius relation.
A summary of the main conclusions is reported in
section <ref>.
In a second paper we will discuss the effects of the distance bias
on the the luminosity vs mass diagram and on the derived star-formation rate.
§ OBSERVATIONS AND METHODOLOGY
§.§ Observations and data reduction
The observations used in this paper were taken from the Herschel <cit.>
Gould Belt survey <cit.> for the study of nearby star forming regions. A full
description of the HGBS is given by <cit.>.
For this paper we concentrated on a few regions among those observed in the HGBS, namely: Orion A, Perseus, Serpens and Lupus III
and IV.
We selected these regions both because they are close, so that we can reasonably assume
that most of the cores we detect are not blended, and because they cover a range of different core masses:
Orion A is a high-mass star forming region, Perseus is an intermediate-to-low mass star-forming region, while
Lupus and Serpens are forming low-mass stars. Other similar Herschel programs, such as HOBYS <cit.> and Hi-GAL itself,
observed farther regions, for which confusion is an issue: for example, in the Hi-GAL clump catalogue of the
inner Galaxy <cit.> only 166 sources
out of 36644 having a distance estimate are located at d<500 pc.
Furthermore, each HGBS region has the advantage
of being self-consistent in distance, which is a basic requirement for our kind of analysis. Instead the aforementioned nearby Hi-GAL sources
are generally found to be mixed,
in the same maps, with sources belonging to other distance components, which would represent an irreparable contamination of our data set.
On the other hand the disadvantage of using HGBS data consists of a limited statistics due to limited map size
at the furthest simulated distances.
HGBS observations were taken at 60 arcsec s^-1 in parallel mode with the two cameras PACS <cit.> and SPIRE <cit.>: the observed wavelengths were 70 μm and 160 μm for PACS, and 250 μm, 350 μm and 500 μm for SPIRE.
Maps were generated using the Unimap <cit.> map-maker for both instruments.
The area considered for our work is that common to the PACS and SPIRE fields of view.
We assumed the following distances to the selected regions: 150 pc and 200 pc <cit.> for Lupus IV and III,
respectively; 235 pc <cit.>
for Perseus;
230 pc for Serpens <cit.>,
415 pc for Orion A <cit.>.
The distance of the Serpens molecular cloud has been a matter of controversy: some authors place the Serpens molecular cloud
at approximately 400 pc.
In appendix <ref> we discuss, briefly, how the physical properties of the Serpens molecular cloud change if we assume a distance of
436 pc <cit.>.
These regions will be the subject of dedicated papers to be published by the HGBS consortium: in this respect we stress
that in this paper we are not interested in deriving the physical properties of the sources at the nominal
distances, but we want to derive how the intrinsic properties
of the sources change with the distance. For this reason we do not provide any catalogue of cores.
§.§ The simulation of increased distance
The methodology adopted in this paper simulates the view, through Herschel, of a region of the Gould-Belt as if it were placed at
a distance larger than the actual one. Of course,
this procedure implies a loss of spatial resolution and detail since the angular size of the moved maps (MM) decreases.
The pipeline adopted to obtain a MM is the following:
* Rescaling/rebinning the original map.
* Convolving the new rescaled map with the PSF of the instrument at the given wavelength.
* Adding white Gaussian noise to the map.
In detail,
(i) A structure of spatial size L placed at distance d_0 subtends an angle φ_0=L/d_0,
while if the same object is moved to a distance d_1>d_0 its angular dimension
becomes φ_1=L/d_1<φ_0. Therefore φ_1=φ_0 d_0/d_1,
so an image rebinned by a factor d_0/d_1 mimics the movement of the region from d_0 to d_1.
(ii) To reproduce more realistically the effect of a region observed with Herschel, the rescaled map must be
re-convolved with the PSF of the instrument. However, one must take into account the fact that
the original map already results from a convolution of the sky with the PSF, i.e. a kernel which can be approximated with a Gaussian of width,
θ_beam, which is equal to 8.4, 13.5, 18.2, 24.9, 36.3 arcsec at 70, 160, 250, 350, 500 μm, respectively
<cit.>. So the width of the kernel we use to
re-convolve the map is:
θ_conv=√(θ_beam^2(1-(d_0/d_1)^2)).
(iii) The noise in the maps can be well modeled as a combination of correlated noise
<cit.>
and white noise.
The map rescaling, of course, reduces white noise with respect to the original map by a factor √(d_0/d_1).
The sample standard deviation of the noise in the original and rescaled map are
s_N and s_N(√(d_0/d_1)), respectively, then to restore the noise level of the original map one has to
add a white noise image to the rebinned map.
In this noise image, each pixel is the realization of a Gaussian process with 0 mean and a standard deviation of s_N√(1-d_0/d_1)
(to
keep in all the simulated maps the same white noise level of the original one).
The s_N was estimated in a box of the original map where no sources and quite low diffuse emission are found, and therefore where the signal
is essentially due to statistical fluctuation.
This procedure is applied to every map at each band.
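The three steps can be condensed into a short routine; the sketch below is our own schematic implementation (function name, interpolation order and random seed are arbitrary choices, and it assumes the pixel angular scale is unchanged by the rebinning, so that the beam FWHM in pixels, theta_beam_px, is the same in the original and rescaled maps):

import numpy as np
from scipy.ndimage import zoom, gaussian_filter

FWHM2SIGMA = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def move_map(img, d0, d1, theta_beam_px, s_noise, seed=0):
    # simulate observing the map img at a distance d1 > d0 (steps i-iii)
    f = d0 / d1
    out = zoom(img, f, order=1)                       # (i) rescale by d0/d1
    theta_conv = theta_beam_px * np.sqrt(1.0 - f**2)  # (ii) re-convolve to the beam
    out = gaussian_filter(out, sigma=theta_conv * FWHM2SIGMA)
    rng = np.random.default_rng(seed)                 # (iii) restore the noise level
    out += rng.normal(0.0, s_noise * np.sqrt(1.0 - f), size=out.shape)
    return out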
We decide to “move” the maps of each region for each band to the following virtual distances: 0.75, 1, 1.5, 2, 3, 5 and 7 kpc.
Fig. <ref> shows the original and MMs of the Perseus nebula at 250 μm while in the
Appendix <ref> the maps of the remaining regions are shown.
Fig. <ref> displays how the MMs of Perseus lose detail at increasing distance and, correspondingly,
how sources resolved at the original distance become unresolved at large distances.
§.§ Source extraction and catalogue compilation
The detection and photometry of compact sources on the original and moved maps is carried out with CuTEx
<cit.>.
This algorithm detects the sources as local maxima in the second derivative images and then fits
an elliptical Gaussian to the source brightness profile to estimate the integrated flux.
The main output parameters of the fit are: the peak position, the minimum and maximum FWHM (ϕ_min, ϕ_max)
of the fitting ellipse, the peak flux and the integrated flux.
Since in this paper we intend to treat the moved region at each simulated distance as an independent data set, to be analysed according
to typical procedure applied to Hi-GAL fields <cit.>, we run CuTEx on all the maps at each simulated distance for each region.
Depending on the brightness level of the region, we adopted a CuTEx set-up more suitable for the inner Galaxy
<cit.>
or for the outer Galaxy (Merello et al., in prep).
Regions in the outer Galaxy have in general a lower median flux per map with respect to the inner part,
due to much lower background emission.
Moreover the detection of sources in the outer Galaxy is more sensitive to pixel-to-pixel noise.
For maps in the outer Galaxy, the CuTEx set-up also includes the smoothing of the PACS image and a lower detection threshold for SPIRE and PACS
160 μ m.
We therefore made a check on our set of HGBS maps to ascertain which ones require a CuTEx set-up similar to the Hi-GAL one for the outer Galaxy.
Orion A and Perseus have a larger median flux (∼70 MJy/sr) respect to the others.
Furhermore Serpens, Lupus III and IV have
a median absolute deviation comparable (∼11 MJy/sr) with the outer Galaxy regions such as the 2^∘× 2^∘ field centred at
l∼160^∘,
showing the lack of an extended background.
Therefore we used the inner Galaxy set up of CuTEx for Orion A and Perseus while we used the outer Galaxy set up for Serpens, Lupus III and Lupus IV.
The fluxes measured with CuTEx are then corrected with the procedure
described in <cit.> to take into account for the fact that
the instrumental PSF is not a Gaussian, while CuTEx performs a Gaussian fit.
After the source detection and flux measurement in all five Herschel bands,
we select the good compact source candidates (band-merging) by applying the procedure described in
<cit.> as well as <cit.>.
The band merging makes it possible to build the Spectral Energy Distribution (SED) of the sources.
A SED eligible for a grey body fit must satisfy the following criteria:
1) at least three consecutive fluxes between 160 and 500 μm,
2) showing no dips (negative second derivative),
3) not peaking at 500 μm.
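One plausible reading of these criteria as a selection filter is sketched below (a toy implementation of ours: “no dips” is interpreted as the absence of local minima among the detected fluxes, and missing bands are marked with None):

def sed_is_eligible(fluxes):
    # fluxes: values at [160, 250, 350, 500] um; None marks a non-detection
    run, best = 0, 0
    for f in fluxes:                     # 1) three consecutive detections
        run = run + 1 if f is not None else 0
        best = max(best, run)
    if best < 3:
        return False
    detected = [f for f in fluxes if f is not None]
    for a, b, c in zip(detected, detected[1:], detected[2:]):
        if b < a and b < c:              # 2) no dips (local minima)
            return False
    # 3) the SED must not peak at 500 um
    return not (fluxes[-1] is not None and fluxes[-1] >= max(detected))

print(sed_is_eligible([2.0, 3.0, 2.5, 1.5]))   # True
print(sed_is_eligible([2.0, 1.0, 2.5, None]))  # False: dip at 250 um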
An additional issue in our case is that the absolute astrometry of the MMs has no physical meaning because the rescaling of
the image shrinks their size.
In the band merging procedure we take
care of this issue, to ensure that the coordinates of the detected sources, at different wavelengths, are consistent with each other.
We address this issue as follows:
the function that we used to execute the band merging works with equatorial coordinates.
For this reason we perform the band merging considering the pixel coordinates and the angular extent of
the objects in the MMs, then rescale them to the original map, and finally convert them
into the equatorial system. With these quantities it is then possible to perform the band merging.
This procedure is repeated for all maps at each wavelength.
Once the SEDs are built, it is possible to perform a modified black body (hereafter grey-body) fit <cit.>
described by the equation:
F_ν = (M/d^2)\, k_0 (ν/ν_0)^β B_ν(T),
where F_ν is the flux at frequency ν; M is the mass of the source located at the distance
d, k_0 is the opacity at the frequency ν_0. We adopt
k_0 = 0.1 cm^2 g^-1 at ν_0= 1000 GHz <cit.>;
B_ν(T) is the Planck function
at the temperature T. We fixed β to 2 as in <cit.>.
The flux at 70 μm, where present, is not considered for the fit since it is
mostly due to the protostellar content of a clump, rather than its large-scale envelope emitting as a grey-body <cit.>.
In this way we obtain estimates of temperature and mass for each source in both the original and moved maps.
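As an illustration of this step, here is a minimal grey-body fit in Python (a sketch under our own assumptions: SI units, k_0 = 0.1 cm^2 g^-1 = 0.01 m^2 kg^-1, β fixed to 2, and synthetic fluxes generated from the model itself; M/d^2 is fitted as a single parameter, so the mass follows only once the distance is known):

import numpy as np
from scipy.optimize import curve_fit

h, k_B, c = 6.626e-34, 1.381e-23, 2.998e8  # SI constants
nu0, kappa0, beta = 1.0e12, 0.01, 2.0      # 1000 GHz; 0.1 cm^2 g^-1; fixed beta

def greybody(nu, M_over_d2, T):
    # F_nu = (M/d^2) k_0 (nu/nu_0)^beta B_nu(T), with M/d^2 in kg m^-2
    B = 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))
    return M_over_d2 * kappa0 * (nu / nu0) ** beta * B

# synthetic SED at 160-500 um (70 um is excluded from the fit, as in the text);
# 2.3e-8 kg m^-2 corresponds roughly to 1 M_sun at ~300 pc
nu = c / np.array([160e-6, 250e-6, 350e-6, 500e-6])
rng = np.random.default_rng(1)
flux = greybody(nu, 2.3e-8, 15.0) * (1.0 + 0.05 * rng.normal(size=nu.size))
(md2, T), _ = curve_fit(greybody, nu, flux, p0=(1e-8, 20.0))
print(md2, T)  # recovered M/d^2 and temperature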
§ DISCUSSION OF DISTANCE BIAS
§.§ Number of sources as a function of distance
The number of objects detected by CuTEx in Herschel maps is expected to decrease with increasing simulated distance
(for each band).
In this section we analyse this effect from a quantitative point of view.
Notice that, since in this section we study each band separately,
not all the sources discussed here possess a regular SED like those considered in the following sections.
The decrease of the number of objects with distance (see Fig. <ref>)
is due to two main combining effects.
The first one is related to the decrease of the flux with distance (F_λ∝ d^-2):
if the flux goes below the sensitivity
limit at a given band the source is not detected any more.
The second one is due to blending of sources that are close to each other in the original map and hence are not resolved any more
at larger
distances. The blending effect may also prevent losing some sources (since the flux is an additive quantity) that would be undetected
when
their flux would be below the sensitivity limit.
Since the flux decreases quadratically with distance and
since, in a log-log plot, the data show a remarkable linear correlation, we fit a power law to the data:
N_s(d)∝ d^-δ,
where N_s is the number of sources at distance d.
The best-fitting power-laws with the
corresponding values of the δ exponent are shown in Fig. <ref> and reported in
Table <ref>. This exponent is smaller for the 70 μm and 160 μm bands
and is found generally to be between 1 and 1.9.
If we assume that the decrease in the number of sources is due to the decrease
of the flux with distance, and
assuming a power-law distribution of the detected fluxes (dN_s/dF_λ ∝ F_λ^-η),
it turns out that N_s(d) ∝ d^-(2η-2). To get the value of η,
we fit the distribution of the fluxes, for the different regions, at 70 μm.
We find that 2η-2 is equal to 1.6, 1.0 and 1.0 for Orion A, Serpens and Perseus respectively.
The sample is too small for Lupus III and IV.
We did this analysis only at 70 μm because in the other bands the effects of blending are more prominent.
These values are in good agreement with the values of δ in Table <ref>, considering the simple assumption that
the distribution of the fluxes is a power law.
Knowing the slope δ can turn out to be very interesting for practical purposes, as shown by the following example:
suppose we detect N objects at a certain wavelength in a Hi-GAL region located at a distance d > 1 kpc; it is possible
to obtain an estimate of the number of “real”
sources (at a specific band) that one would expect to observe if the cloud was located at a distance d_0<d,
using the formula:
N_0=N (d/d_0)^δ.
A critical point is represented by the choice of d_0, which can lead to quite different values of N_0.
We suggest using values in the range 200 pc <d_0< 400 pc
since this corresponds to compact source scales typical of cores.
For example, let us consider a star-forming region located at d=5000 pc, whose Hi-GAL map at 250 μ m contains 20 sources:
how many sources we would see if the region was located at d_0=300 pc?
Using equation <ref> we can give a rough estimate of this number assuming δ=1.6 (see Table <ref>)
namely N_0=1802.
Applying instead equation <ref> to a larger d_0 (e.g. d_0>1000 pc) would imply dealing
with clumps also at the original distance (i.e. with still unresolved structures).
Certainly, equation <ref> requires confirmation by independent observational evidence,
such as interferometric measurements aimed at exploring the
real degree of fragmentation of clumps.
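For convenience, the back-of-the-envelope estimate above can be wrapped in a small helper (our own sketch; the defaults d_0 = 300 pc and δ = 1.6 are just the values of the worked example, and δ should be taken from the appropriate band and region in Table <ref>):

def expected_n_cores(n_obs, d_pc, d0_pc=300.0, delta=1.6):
    # N_0 = N (d/d_0)^delta: rough number of sources expected at d0_pc
    # for a region with n_obs detections observed at distance d_pc
    return n_obs * (d_pc / d0_pc) ** delta

print(int(expected_n_cores(20, 5000.0)))  # 1802, as in the worked example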
§.§ Starless and protostellar fraction vs distance
After assembling the SEDs the sources are classified as follows: if a 70 μm counterpart is present,
the source is classified as a
protostellar candidate <cit.>, hereafter protostellar; otherwise it is considered a
starless object. The starless sources can be further subdivided into bound (objects that can form stars and which are usually called prestellar)
or unbound (transient objects that will not form stars) whether their mass is larger or smaller, respectively,
than the mass given by the relation of <cit.>
M_Lars=460(r/ [pc])^1.9M_⊙.
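In schematic form, the classification just described reads as follows (a toy sketch; the function name and interface are ours):

def classify_source(mass_msun, radius_pc, has_70um_counterpart):
    # protostellar if a 70 um counterpart exists; otherwise a starless source
    # is bound (prestellar) or unbound according to M_Lars = 460 (r/pc)^1.9
    if has_70um_counterpart:
        return "protostellar"
    m_lars = 460.0 * radius_pc ** 1.9
    return "prestellar" if mass_msun > m_lars else "unbound starless"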
We define the starless and protostellar fraction as n_sl/n and n_pro/n, respectively,
where n_sl and n_pro
are the number of starless and of protostellar sources respectively and
n=n_sl+n_pro
is the total number of sources.
In Fig. <ref> we show n_sl/n and n_pro/n while in Fig. <ref> we show
n_sl and
n_pro as a function of distance, respectively.
The general trend for n_pro/n in Orion A, Perseus and Serpens is to increase with distance until it reaches a plateau.
A different behaviour is found in Lupus III, where n_pro/n decreases, and in Lupus IV, where n_pro = 0 at all moved distances
due to the complete lack of detected protostellar sources.
The increase of n_pro/n with distance is explained with the fact that a source detected at a larger distance is an unresolved object
that probably contains multiple cores (protostellar but also starless).
Suppose an unresolved source, classified as protostellar, is detected at a simulated distance d_1, and actually contains
a certain number of sources
at the original distance d_0; suppose that one of them is protostellar and the remaining ones are starless:
in that case one would assign a protostellar character to a source which actually contains also a certain number of prestellar cores.
This simple example suggests that in principle n_pro/n is expected to increase with distance, until
n_pro/n reaches a plateau where the number of detected objects is very low because of the sensitivity effect.
The decrease of n_pro/n with distance found for Lupus III and IV may instead be due to a weak emission at 70 μm
at the original distance, which leads to a decrease in the number
of protostellar sources at the
larger virtual distances (Fig. <ref>).
<cit.> found that the behaviour of n_pro/n as a function
of the heliocentric distance, in the Hi-GAL catalogue <cit.>, is in agreement with what we derived from
Fig. <ref>. Indeed they found that n_pro/n grows up to 2 kpc and then reaches a plateau:
in their Fig. 4, n_pro/n ≲ 0.2 up to 2 kpc and of the order of 0.2 for larger distances.
This significantly supports the result we obtained here.
§.§ Size distribution vs distance
A common feature of molecular clouds is the hierarchical structure they show, containing bright agglomerates called clumps, that
in turn are formed
by smaller condensations called dense cores (see section <ref>).
In Fig. <ref> the distribution of the radii (at 250 μ m) of the
detected sources is shown, for the Perseus and Orion A maps, at different distances.
We do not show the same for the Lupus III, Lupus IV and Serpens due to poor statistics.
As one can see from these figures, the physical radius increases on average, and hence the fraction of cores in the overall population
of detected compact sources
decreases with distance.
As it emerges from this figure, up to d=1000 pc most of the detected sources are classified as cores, while at larger
distances they are generally classified as clumps.
Of course the
threshold r≲ 0.1 pc is not so strict and the transition between core and clump definition is not so sharp.
Furthermore, a difference is found between the prestellar
and protostellar source distribution:
the former is characterized on average by larger radii than the latter, as found, e.g., by <cit.>.
In Fig. <ref>
the behaviour of the average radius
for each of the two populations is also reported (top right corner): that of prestellar
objects <R_pre> is larger than <R_pro> for
protostellar sources, but this gap appears to get smaller at
larger distances.
If we define q such that <R_pro> = q <R_pre>, q is found to be
0.68, 0.77, 0.88, 0.89 and 1.08 for the Perseus region at 235, 750, 1000, 1500 and 2000 pc, respectively, while q is
0.82, 0.92, 0.89, 0.89 and 0.95 for the Orion A region at 415, 750, 1000, 1500 and 2000 pc, respectively.
§.§ Association of the “moved” sources with the original ones
In section <ref> we have seen that for d≳1.5 kpc the source sizes are such
that the objects are
classified as clumps. For further analysis it becomes
important to identify and count the cores present in the original map that are contained in clumps found in the MM.
This information can be used, for example, to estimate which
fraction of mass of such clumps comes
from the contained cores, and what comes from the diffuse “inter-core” material.
To associate the sources detected at a given distance d with the original population of
objects,
we projected the ellipse, found at 250 μ m at distance d, back to the original distance d_0, and consider the sources falling
within such ellipse:
an example is shown in Fig. <ref>.
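A minimal geometric version of this association is sketched below (our own illustration; core positions and ellipse parameters are assumed to be already expressed in the pixel frame of the original map):

import numpy as np

def cores_in_clump(core_xy, cx, cy, a, b, pa):
    # indices of original-map sources whose centres fall inside the ellipse
    # of a moved-map source back-projected to the original distance;
    # core_xy: (n, 2) positions; (cx, cy): centre; a, b: semi-axes; pa: angle [rad]
    dx = core_xy[:, 0] - cx
    dy = core_xy[:, 1] - cy
    u = dx * np.cos(pa) + dy * np.sin(pa)   # rotate into the ellipse frame
    v = -dx * np.sin(pa) + dy * np.cos(pa)
    return np.where((u / a) ** 2 + (v / b) ** 2 <= 1.0)[0]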
At this point it is possible to associate the sources of the moved map and those of the original one in two ways:
1) By making band-specific associations, including sources without regular SEDs.
For example, suppose we consider an object detected at 250 μm at 5 kpc; we
then count the sources that, in the map corresponding to the original distance and to the same band, lie inside the area occupied by this object.
2) By associating only sources with regular SEDs.
As an example, let us suppose there is a clump with a regular SED at 5 kpc; we then count the number of cores
with a regular SED
that are contained in the clump in the original map.
This association between a clump and the contained cores is very useful because one can decompose an unresolved object (clump)
into its smaller components (cores).
Therefore, from a practical point of view, this corresponds to observing an unresolved object with a higher resolution and hence
revealing its internal structure.
The former approach will be used in section <ref> where we discuss the contribution of the diffuse emission separately
for the various different wavelengths, while the latter will be used both in the following and in section <ref>,
where we will discuss the relation between
the physical properties of the moved sources and the original core population.
Fig. <ref> shows the average number of cores <N>, at the original distance,
that are merged into one single source
with a regular SED, at the moved distances.
As one can see from this figure, <N > increases slowly, with distance, up to 1500 pc and then tends to increase faster.
We fit the data with a power law <N> ∝ d^ζ, starting from 1500 pc and we find values for ζ
between 1 and 1.5 for the different regions.
The slower increase of <N > at smaller distances
is simply due to the fact that the sizes of the
objects in the MMs are more similar to the sizes of the original cores.
Moreover, the values of ζ are always smaller than 2, that is the expected value if the distribution of cores in
the maps was uniform.
Since, on the other hand, the actual core distribution is far from being uniform, but rather it is clustered, it is reasonable
to find values of ζ<2.
The values of <N > depend both on the original average surface density Σ (measured as the number of sources per
pc^2)
of the core population, and on the
minimum spatial scale present in the original maps.
The latter effect explains, for example,
why the values of <N > in Orion A are smaller than in the other regions: the spatial
detail corresponding to the Herschel resolution
in Orion A is coarser than in Lupus, Serpens and Perseus due to the larger distance.
The former effect is instead related to Σ, which is 93.0, 56.4, 50.4, 39.7, 129.8 pc^-2 for
Orion A, Perseus, Lupus III, Lupus IV and Serpens, respectively.
Considering two regions with comparable original distance, namely Serpens and Perseus, this effect can be appreciated, since
Σ is larger in Serpens and, correspondingly, larger <N> are systematically
found for Serpens than for Perseus.
§.§.§ Diffuse emission
A first comparison between the properties of the sources detected in the MM and those of the
sources at the original distance that are within the rescaled object can be carried out looking at the total fluxes.
We perform such analysis separately
at each wavelength,
therefore we also include detections that do not contribute to build regular SEDs.
Therefore we associate sources through the method 1) described in section <ref>.
Let F_λ(d) be the flux of a source detected in a MM at a wavelength λ and at distance d, rescaled to the original map,
and F_λ_*d=∑_i=1^n f_λ_i the sum of the fluxes of the sources that are within the moved source at the original distance.
Fig. <ref> shows F_λ(d) vs F_λ_*d for different regions at each wavelength:
it appears that, as expected, the contribution of the diffuse emission F_λ(d)-F_λ_*d increases
with distance.
This is not surprising since the objects in the map are patchily distributed (see Fig.<ref>) and since the size of
the sources increases with d (see section <ref>).
As the radius becomes larger, the contribution of the diffuse inter-core emission, which is expected to be more
uniformly distributed, increases quite regularly with the increasing area of the source (in addition to this,
an increasing amount of background emission is included in such flux estimate, as shown at the end of this section
through Fig. <ref>).
On the contrary, the contribution of emission from cores increases with distance depending on the degree
of clustering.
Figure. <ref> shows the average fraction of the diffuse emission
ϵ_λ(d)= <( F_λ(d)-F_λ_*d)/F_λ(d)> vs distance.
As one can see from this figure, the contribution of the diffuse emission at 70 μ m is lower than at larger wavelengths
for all the considered regions; this is due to the fact that the 70 μm emission is typically
associated with protostellar activity
<cit.> and is therefore more concentrated in compact structures
than the emission in the other bands,
which appears instead arranged in a diffuse network of cold filaments.
Furthermore, ϵ_λ(d) increases with distance up to a certain point, which depends on the region, after which
it tends to reach a plateau.
In conclusion this analysis suggests that the contribution of the diffuse emission for large distances (d≳1 kpc), i.e. a typical
case for Hi-GAL sources,
goes from 50% up to 95% (depending on the region)
except for the 70 μ m band where it goes from 50% up to 80% (see Fig. <ref>).
These large values of ϵ_λ(d) suggest that most of the clump emission is due
to the diffuse inter-core emission when the clump is located far away (d≥ 1 kpc). This has to be taken into account
to distinguish the whole clump mass from the fraction of it contained in denser substructures, possibly involved in star formation processes.
Such behaviour cannot be explained simply by the increasing physical size of the source; one must also take
into account how the background level
estimate changes with distance.
Indeed we expect that the background level for the sources detected in the original map is higher than that generally found
in the moved maps, since the background, in the original map, is due to the inter-core emission, while in the MM
it is the weaker cirrus emission on which the entire clump lies.
Since CuTEx derives the flux of a source by fitting a 2-D Gaussian with the formula F_λ = 4.53 a b F_p,
where a and b are the semi-axes at half maximum of the elliptical Gaussian and F_p is the peak flux measured
in MJy/sr,
and since with increasing distance a and b are constrained to physically plausible sizes (as is generally done with the extraction tool),
the physical area used to scale F_λ increases, on average, quadratically.
Moreover, the background level is generally expected to be estimated lower and lower (see above), producing an increase of F_p.
To show this,
we computed the mean value of the background and of F_p, for all the regions merged together,
as a function of distance: in
Fig. <ref>
the background is found, on average, to become smaller with d, and
correspondingly F_p becomes larger.
Therefore, the derived values of ϵ_λ(d) may be explained roughly with the following considerations:
the value of ϵ_λ(d) for one source can be modeled by the formula
ϵ_λ(d)=1-∑_i=1^N a_i b_i F_p_i/a b F_p
where a_i, b_i, F_p_i are the parameters of the objects in the original map that are contained within a
larger object, with parameters a, b and F_p, detected at the moved distance d.
The product ab is proportional to the square of the distance, and we roughly assume that the peak flux at the original distance
is the same for all sources. The peak flux
F_p depends only weakly on distance (Fig. <ref>): on one hand, the dramatic drop of the background emission
at increasing distance would be expected to produce,
by subtraction, a correspondingly strong increase of F_p; however, this is not observed, mostly because
simulating a larger distance involves the averaging of boxes of pixels imposed by the map rebinning.
In particular, we find that F_p at the largest
probed distance is at most
1.5 times the average peak flux at smaller distances; adopting this value we get
ϵ_λ(d)=1-(d_0/d)^2 N(d)/1.5, where N(d) is the number of sources contained
within the moved source.
For example, if we consider the Perseus region observed at 250 μm at 5000 pc assuming N(d)=22 (as in Fig. <ref>),
we get
ϵ_λ(d)=0.97, which is in good agreement, despite the naivety of the model, with
the result of Fig. <ref>,
namely ϵ_λ(d)=0.9.
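As a sanity check, this toy model can be evaluated directly. The following Python sketch (illustrative only; d_0 = 235 pc is the nominal Perseus distance adopted in this work) reproduces the quoted value:

import numpy as np

def epsilon_model(d, d0, n_contained, peak_boost=1.5):
    # epsilon(d) = 1 - (d0/d)^2 * N(d) / peak_boost, where N(d) is the
    # number of original cores contained in the moved source and
    # peak_boost ~ 1.5 is the maximum increase of the peak flux F_p.
    return 1.0 - (d0 / d) ** 2 * n_contained / peak_boost

# Perseus at 250 um moved to d = 5000 pc, with N(d) = 22 contained cores:
print(round(epsilon_model(5000.0, 235.0, 22), 2))  # 0.97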
§.§.§ Physical properties of the “moved” clump and those of the original core population
In this section we consider the mass of those sources of the MM for which we performed the grey-body fit,
to understand whether this quantity for a moved
source mirrors that of
the original core population rather than that of the diffuse material.
We denote with M_d the mass of the
moved detected sources (protostellar and prestellar) at distance d, and with M_*_d=∑_i=1^n m_i_d
the sum of the masses of all
the original sources (protostellar and starless)
that lie inside
the source when it is reported in the original map.
Figure <ref> shows the average values of M_d vs M_*_d at distance d:
we find that <M_d> is always larger than <M_*_d>.
A similar conclusion is obtained using the median instead of the mean: the median mass of the clumps
at large distances is by far larger than the
sum of the masses of the contained cores as well.
We now examine the effect of distance on the determination of the core formation efficiency.
The average
core formation efficiency, which we define as <CFE> = <M_*_d/M_d>,
can be derived from the values plotted in Fig. <ref> (panel a).
In Fig. <ref> we show <CFE> as a function of distance for each of the considered regions.
It can be seen that
the <CFE> changes from region to region
and tends to decrease
with d for distances below 1500 pc, while
at larger distances it becomes independent of d.
The decrease of the <CFE> with d below 1500 pc is due to the fact that the sizes of the clumps
and of the contained cores are quite similar, hence the fraction of diffuse material remains low. This effect is particularly prominent
in Orion A, which is located at 415 pc, while we do not see it in Lupus IV, which is located at 150 pc.
The values of the <CFE> at large distances are between 1% and 20%, depending on the region.
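Note that the averaging convention used above matters: <CFE> is the mean of the per-clump ratios M_*_d/M_d, not the ratio of the mean masses. A minimal Python sketch (with hypothetical array names) is:

import numpy as np

def mean_cfe(core_mass_sums, clump_masses):
    # <CFE> = <M_*d / M_d>: mean of the per-clump ratios between the
    # summed mass of the contained original cores and the clump mass.
    ratios = np.asarray(core_mass_sums) / np.asarray(clump_masses)
    return ratios.mean()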
§.§ Mean temperature vs Distance
Let us now consider the effects of distance on the mean temperature of the sources
detected in each region at all virtual distances, comparing it with the temperature of the
original population of cores that fall within each clump.
We fit the average temperature with a power law
<T>_d∝ d^ξ .
The uncertainties on <T>_d are given by S_T_d/√(n), where S_T_d is the sample
standard deviation of the temperature at distance d,
and n is the sample size.
Fig. <ref> shows <T>_d for each d, together with the best power-law fit and the values of ξ.
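In practice such a fit can be carried out as a weighted linear regression in log-log space; the sketch below is a plausible implementation (not necessarily the exact code used here), propagating the standard errors S_T_d/√(n) into logarithmic uncertainties:

import numpy as np

def fit_temperature_powerlaw(d, T_mean, T_stderr):
    # Fit <T>_d = C * d^xi, i.e. log<T> = xi*log(d) + log(C),
    # weighting each point by 1/sigma with sigma = T_stderr / T_mean.
    x, y = np.log(d), np.log(T_mean)
    sigma_log = np.asarray(T_stderr) / np.asarray(T_mean)
    xi, logC = np.polyfit(x, y, 1, w=1.0 / sigma_log)
    return xi, np.exp(logC)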
We find that
<T>_d increases slowly with distance for the prestellar objects, while it decreases
slowly for the protostellar ones.
The values of ξ for the prestellar objects (see Fig. <ref>, lower panel) are 0.06, 0.03, 0.2, 0.04 and 0.05 for
Perseus, Orion A, Lupus III, Lupus IV and Serpens, respectively. These slopes are very shallow, but consistently positive.
An opposite trend (i.e. temperature weakly decreasing with distance) is found for the protostellar objects: the values of ξ
are -0.1, -0.02 and -0.1 for Perseus, Orion A and Serpens,
respectively. The statistics are not sufficient at large distance to perform the fit for the Lupus III and IV regions.
It appears unlikely that both of these behaviours arise by chance, since we systematically find these trends
in all the considered regions.
We are quite confident that the decreasing behaviour of the temperature for the protostellar sources and the increasing one for the
prestellar sources is due to the effect of confusion, and can be explained by taking into account two main factors, likely concomitant
in most cases.
The first is related to the fact that the protostellar objects are typically warmer than the prestellar ones <cit.>.
A protostellar clump detected at a large distance is probably an unresolved object containing in turn some smaller objects
that can be starless or protostellar. If the unresolved protostellar clump contains some prestellar objects,
these can decrease
the global average temperature.
In the lower panel of Fig. <ref>
we show the prestellar contamination of sources, namely the average number of prestellar cores contained in protostellar clumps (for
all the regions merged together).
The prestellar contamination, as expected, is close to 0 up to 1000 pc and then starts to increase; in particular, between 5000
and 7000 pc the average number of original prestellar cores contained in the protostellar moved clumps
becomes the same as that of the original protostellar objects.
Conversely, an unresolved prestellar clump at large distance can contain some protostellar cores whose flux at
70 μm falls below the sensitivity threshold (and hence remains undetected),
but which nevertheless contribute to increasing the average temperature.
Indeed, although the signature of the presence of a protostellar component would remain undetected, the effect
of this component at the remaining wavelengths (λ>160 μ m) would still be observable as an unnaturally high temperature for a
prestellar source.
In the upper panel of Fig. <ref> we show the average protostellar contamination in
the prestellar clumps:
the protostellar contamination is close to 0 at low distances while it becomes larger at increasing distances.
The second effect responsible for the temperature decrease of the protostellar sources and increase
of the prestellar ones is related to the presence of the diffuse inter-core material, which is typically
warmer than the prestellar sources and colder
than the protostellar ones <cit.>. This might
homogenize the temperatures of the protostellar and prestellar sources as the physical radius of the clumps increases with distance.
§ MASS-RADIUS RELATION
The mass vs radius relation (MR) for cores/clumps is a useful tool for checking the conditions for massive star
formation (MSF).
Indeed, several authors <cit.> use this diagram to identify
thresholds in column density supposedly required for the
formation of stars with M>10M_⊙.
Here we want to investigate how the predictive ability of the MR plot is affected by distance effects.
In Figs. <ref> - <ref> the MR plot is shown for the different regions.
The MR diagram for sources found in the original map is reported in panel a of each of these figures.
With open red and green circles we indicate bound and unbound starless objects, respectively,
while with open blue circles we indicate protostellar
objects. The green dashed line represents the Larson relation reported in equation <ref>, while the filled sky-blue and pink
areas of the diagram correspond to the conditions compatible with
possible MSF, according to two different prescriptions:
the former (which includes the latter) corresponds to the criterion of
<cit.>, namely M(r) ≥ 870(r/ [pc])^1.33M_⊙,
an empirical limit for MSF based on observations of Infrared Dark Clouds; the latter corresponds to the criterion of
<cit.>, a more demanding threshold on column density of 1 g cm^-2 for MSF, based on
theoretical calculations.
This theoretical limit can also be expressed as M(r) ≥ 15042(r/ [pc])^2M_⊙.
We assume that these MSF thresholds break at masses lower than M=20 M_⊙: since typical values of the core-to-star conversion efficiency are 0.33-0.5
<cit.>, it is not reasonable that a core with a mass lower than 20 M_⊙
can form a 10M_⊙ star.
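These thresholds amount to a simple classification rule; a Python sketch of how such a test could be implemented (masses in M_⊙, radii in pc) is:

def msf_compatible(mass, radius, criterion="KP"):
    # KP: M >= 870 r^1.33 (empirical, from Infrared Dark Clouds);
    # KM: M >= 15042 r^2 (theoretical 1 g cm^-2 column threshold).
    # Both thresholds are assumed to break below 20 M_sun (see text).
    if mass < 20.0:
        return False
    if criterion == "KP":
        return mass >= 870.0 * radius ** 1.33
    if criterion == "KM":
        return mass >= 15042.0 * radius ** 2.0
    raise ValueError("criterion must be 'KP' or 'KM'")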
In panels b) to i) of Figs. <ref>-<ref> the diagram is built
for the objects
detected in the MM of the different regions at the various distances, and compared with the case of the original map (panel a).
Fig. <ref>, in particular, displays the MR diagram for the Orion A region;
this nebula is known to be
a MSF molecular cloud <cit.>, consistent with the fact that
several sources lie inside the MSF zone of the plot. If the Orion A map is moved to larger
distances (panels b to i),
this region can still be considered a MSF nebula based on this plot. This implies that the intrinsic character of a MSF region
is apparently conserved when the region is observed at larger distances.
This remark is based on the case of a single region; however, from panel a) of Fig. <ref> one can deduce that,
for another MSF region containing cores denser and more massive than those found in Orion A, the clumps extracted in the MMs would likely
populate the KP region of the plot even more.
In Figs. <ref>-<ref>
the same analysis is repeated for the other regions. These regions are known not to be MSF, consistent with the fact that
there are no sources
inside the blue and pink zones in panel a of these figures. However,
at larger distances (panels b-i) sources are found inside the zones of the plot compatible with both MSF prescriptions,
and in particular in the KP zone;
therefore, the distance is clearly biasing the
character one assigns to a region regarding its ability to form massive stars.
The Perseus region (Fig. <ref>) at the virtual distances (panels b-i) would always be classified as MSF according
to the KP prescription. The Serpens nebula (Fig. <ref>) at the moved distances would be classified as MSF,
except at 1000 and 5000 pc.
The Lupus III and IV regions (Figs. <ref>-<ref>) are
characterized by a regime of somewhat lower masses compared with the other regions <cit.>.
Lupus III, at the virtual distances, would be classified as MSF at 750, 1000 and 2000 pc, while it is not classified as MSF at 1500-3000 and 5000 pc.
The Lupus IV nebula is classified as MSF at 750, 1500 and 2000 pc, while it would be classified as non-MSF at 1000 and 3000 pc.
At this point we want to
quantify how the fraction of supposed MSF objects varies with distance.
We define, at each distance, the fractions of prestellar and protostellar objects inside the MSF region (one of the two zones of the plot where MSF is possible
according to the KP and KM prescriptions)
as f_pre=N_pre/N_PRE and f_pro=N_pro/N_PRO, where
N_pre and N_pro are the numbers of prestellar and protostellar objects inside the MSF region, while N_PRE and
N_PRO
are the total numbers of prestellar and protostellar objects.
Looking at Figs. <ref> - <ref>,
it appears that f_pre and f_pro increase with distance for the KP zone, while they seem to decrease for
the KM one.
This statement can be made more quantitative through Fig. <ref>,
which shows f_pre and f_pro as a function of distance for the KP zone.
This figure highlights a clear trend of f_pre and f_pro to increase in the KP zone
up to ∼1000-2000 pc (depending on the region) and then to reach a plateau (Fig. <ref>).
The initial rise is quite steep for both f_pre and f_pro.
The gap f_pre(d)-f_pre(d_0) (and similarly for f_pro) is found to be larger
than the largest error bar associated with these points,
indicating that the increase of this fraction
is statistically significant[Error bars are estimated assuming counting statistics].
For example, for Orion A, f_pro=0.03 at d_0 and f_pro=0.2 at d= 1000 pc, while the largest uncertainty
among f_pro values at d=d_0 is 0.06.
As for the plateau behaviour,
for the two regions with enough statistics, namely Orion A and Perseus, the average values of these fractions are ∼ 37% and
∼ 60% for the prestellar sources, and ∼ 22% and
∼ 60% for the protostellar ones, respectively.
We do not show the trend of f_pre and f_pro for the KM zone because of the poor statistics.
With respect to the KP criterion, the ratios f_pre and f_pro are found to increase for distances up to 1000 pc.
This is mostly due to the sharp break of the KP relation we impose at M<20 M_⊙ (corresponding to r_break= 0.06 pc):
indeed, cores in the original map at the smallest probed distances typically have radii smaller than r_break.
At the virtual distance of 1000 pc, instead, all detected sources have r>r_break. Therefore some of them, classified as
non-MSF in the original map, can more easily be found inside the KP area of the diagram (see
Figs. <ref> - <ref>).
This explains the behaviour seen in Fig. <ref> (first two panels from the top), where a plateau in the
f_pre and f_pro vs d relation is reached at d=1000 pc.
To understand the effects of distance on the MR relation, we fit the data
for prestellar and protostellar objects with a power law M∝ r^α in the following way:
for each distance d we calculate the average values of the mass <M>_d and radius <r>_d of the
sources in the sample and
the corresponding standard errors.
The fit is shown in Fig. <ref>, with the α exponent evaluated separately
for prestellar and protostellar
objects. The exponents range mostly between that of the KP relation, 1.33, and that
of the KM prescription, 2.0;
this means that if one decides to use the KM criterion
for inferring conditions for MSF, one probably loses sources with increasing distance, while on the contrary one tends to get some false positives if the
KP criterion is adopted.
To examine in more detail how the properties of a far source mirror those of the contained core population, let us now consider
the relation between the clumps detected at the largest probed distance and the “contained” sources at all smaller distances,
adopting the procedure described in Section <ref>.
Let O_d_max be a source detected at the largest distance d_max, and O_d the sources at distance
d<d_max that
fall within O_d_max.
Figs. <ref>-<ref> contain a MR diagram for each
O_d_max
and the corresponding O_d.
In detail, in Fig. <ref> we display four O_d_max cases found in the Perseus nebula:
three of them are classified as MSF according to the KP criterion
(panels b, c and d) and one as low-mass star
forming (panel a). The corresponding O_d are classified in some cases as MSF and in other cases as low-mass star forming;
in particular, none of the sources detected at the original distance is found to be MSF.
Fig. <ref> contains the cases found in Orion A.
Again, at the moved distances, many sources are classified as MSF according to the KP criterion.
In panels b and d, some O_d found at the original distance are also classified as MSF.
This suggests that the character of an intrinsically MSF clump (or of an entire MSF region such as Orion A)
is preserved under the moving procedure we apply, while
for a low-mass star forming region like Perseus a spurious classification as MSF can be introduced as an effect of distance bias.
This can also be observed in the behaviour of O_d in the other low-mass star forming regions we considered, namely
Lupus III, IV and Serpens (Figs. <ref>, <ref> and <ref>):
also in these cases some O_d at intermediate distances are found to lie in the KP zone of the
MR diagram, although they do not contain sources in the original map with this property.
In Figs. <ref> and <ref> we notice an apparently unnatural behaviour:
an O_d source has a mass larger than that of the corresponding O_d_max containing it.
Several factors can contribute to such cases: multiplicities in source detection, which appear/disappear at different distances,
and intrinsic fluctuations in the CuTEx flux extraction can change the shape of the SED of a source observed at two different distances.
A shift of the SED peak results in a different temperature estimate, and in turn in a change of the mass, which can be very
relevant at low temperatures: a shift of a few K may lead to an order of magnitude change in mass <cit.>.
In conclusion, the analysis of the MR diagram, carried out by means of
Figs. <ref>-<ref>, suggests that:
1) the region that is recognized as MSF at the nominal distance (Orion A) is still classified as MSF at each of the moved distances;
2) the regions that are not recognized as MSF at the original distance (Perseus, Serpens, Lupus III and IV) are classified as MSF
at most of the virtual distances;
3) the fraction of objects fulfilling the KP relation increases for each region up to 1000 pc and then reaches
a plateau;
4) the mean value of the clump mass at a given distance is generally related to the mean value of the radius by
a power law with an index larger than 1.33 (KP criterion)
and smaller than 2 (KM criterion) for all regions.
Therefore, on average, one tends to introduce spurious MSF objects if the KP prescription is adopted, and to lose MSF objects if
the KM one is chosen.
For these reasons it is important to estimate the fraction of objects misclassified according to the KP prescription.
To increase the statistics, all regions are merged together in this analysis.
We define as false positives (FP) the objects, detected at the moved distance, that are classified as MSF although they do not
contain MSF cores at the original distance; as true negatives (TN) the objects that are not classified as MSF
at the moved distance and have no MSF association
with the objects at the original distance; as true positives (TP) the objects that are classified as MSF at the moved distances and have associations
with MSF objects at the true distance; and as false negatives (FN) the objects that are not classified as MSF at the moved distances but are
associated with MSF objects at the original distance.
Given these definitions, our attention has to be focused on the fraction of FP with respect to the
population from which they originate (and to which they should ideally belong), namely that of the TN.
Therefore we define
f_kauff=n_FP/(n_FP+n_TN), where n_FP is the number of FP at distance d
and n_TN is the number of TN at distance d;
the f_kauff fraction as a function of distance gives an indication of the rate of misclassification of objects introduced by distance.
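Operationally, f_kauff is built from the confusion matrix of the moved-versus-original classification; a minimal sketch (the boolean inputs are hypothetical) is:

def f_kauff(moved_is_msf, original_has_msf):
    # FP: classified MSF at the moved distance, but no MSF core originally;
    # TN: not classified MSF, and no MSF core originally.
    n_fp = sum(m and not o for m, o in zip(moved_is_msf, original_has_msf))
    n_tn = sum(not m and not o for m, o in zip(moved_is_msf, original_has_msf))
    return n_fp / (n_fp + n_tn)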
As one can see in Fig. <ref>, f_kauff increases between 750 and 1000 pc, remains almost constant between 1000
and 3000 pc, and then shows fluctuations above 3000 pc.
Such fluctuations are due to the lack of statistics from that distance onwards (indeed the error bars,
estimated from Poisson statistics, are very large). The gap
between 750 and 1000 pc is simply due to the fact that many sources found at 750 pc lie at r<r_break (see above).
From 1000 up to 3000 pc f_kauff is of the order of 20%, while at larger distances it is higher but the error bars
are in turn very large.
Therefore we can reasonably say that above 1000 pc
f_kauff slightly increases with distance,
although the trend is hard to quantify above 3000 pc.
We can
draw a family of lines parallel to the KP relation (see Fig. <ref>), whose analytical form can be expressed as
M_k>k (r/ [pc])^1.33 M_⊙ ,
and count the number of FP that lie above each of them.
Calling n_M_k the number of FP fulfilling equation <ref>, the relative fraction of FP
is
p(k)=n_M_k/(n_FP+n_TN).
We have to make a distinction between objects detected at distances smaller and larger than 4000 pc, since, as we have seen in
Fig. <ref>, above 4000 pc f_kauff tends to be larger.
Several values of p(k) are reported in Table <ref> for the two cases (d<4000 and d>4000 pc).
From these values we infer that the fraction of misclassified objects p(k) decreases with increasing k:
hence, for a clump classified as MSF, a high
value of k indicates a low probability of dealing with a FP, and vice versa for a low value of k.
The values reported in Table <ref> may be very useful to the Herschel astronomer.
Suppose, for example, that we find a Hi-GAL clump within the MSF zone, with mass M_0 and radius r_0; the coefficient k is then
k=M_0/r_0^1.33.
Looking at Table <ref> one can read the corresponding value of p(k) and
hence estimate how likely it is that the clump
is a FP.
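A worked example of this lookup could read as follows; the (k, p) grid is purely illustrative and stands in for Table <ref> (only the pair k=1096, p≈10% for d<4000 pc is taken from our results):

import numpy as np

# Illustrative placeholder for Table <ref> (d < 4000 pc case):
K_GRID = np.array([870.0, 1096.0, 1500.0, 2000.0])
P_GRID = np.array([0.13, 0.10, 0.07, 0.05])

def fp_probability(mass, radius):
    # k = M_0 / r_0^1.33; interpolate the tabulated fraction p(k).
    k = mass / radius ** 1.33
    return k, np.interp(k, K_GRID, P_GRID)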
In section <ref> we provide a practical example of the use of Table <ref> for a more
correct interpretation of Hi-GAL data.
§.§ A new prescription for high-mass star formation
As pointed out in section <ref>, the exponent of the <M>_d vs <r>_d power-law relation
is smaller than 2.0 (KM) and larger than 1.33 (KP), so that distance effects can bias the fraction of sources fulfilling the two
aforementioned criteria, and with it the star formation character attributed to the considered region.
A new prescription for identifying compatibility with high-mass star formation is therefore needed.
The prescription we provide here is:
M>1282 (r/ [pc])^1.42 M_⊙
(see Appendix <ref> for its derivation).
It is interesting to compare our formula with that of KP in terms of FP and TP above 1 kpc,
since about 99% of the Hi-GAL sources lie in that range
of distances.
Figure <ref> displays the fraction of FP
as a function of distance for both thresholds.
The fraction of FP (Fig. <ref>) generated by the KP prescription is larger than ours at any distance; in particular,
it is larger by a few percent between 1000 and 3000 pc,
and by 20-30% at larger distances.
The numbers of TP and FN are comparable for the two prescriptions[24 TP and 10 FN for our prescription, and 26
TP and 8 FN for the KP one].
Notice that equation <ref> keeps the fraction of FP almost constant, unlike the KP prescription.
The main differences between the two are found in the
fraction of FP at the largest probed distances, namely 5000 and 7000 pc.
Notice also that using the KM prescription one would get a smaller fraction of FP, but also a very low number
of TP and a higher number of FN (4 and 30, respectively).
Therefore, to minimize the occurrence of FP at all distances, we suggest using the prescription presented here.
This choice is particularly critical at d ≥ 5000 pc, where the majority of the Hi-GAL sources (73%) provided with a distance
estimate are found to lie <cit.>.
§ COMPARISON WITH PREVIOUS LITERATURE
The results obtained in this paper can be used to discuss some results found in the literature on clump properties.
For example, <cit.> discuss the MR diagram for the sources detected with the APEX Telescope Large Area Survey of the whole inner
Galactic plane at 870 μm (ATLASGAL).
These authors found that a large fraction (namely 92%) of the clumps are potentially forming massive stars
because they fulfil the KP relation.
This fraction is notably larger than the one found for
the Hi-GAL sources <cit.>, namely 71% and 65% for the protostellar and prestellar, respectively.
This significant discrepancy cannot be explained only by the better sensitivity of Hi-GAL; most likely it is implicit in the KP relation itself.
We have seen, indeed, that at large distances the KP relation overestimates the number
of high-mass star forming candidates due to its shallow slope of 1.33.
In the MR plot of <cit.>, all the sources with large radii (∼ 1 pc),
and hence at large distances, fulfil the KP relation, while for smaller radii (∼ 0.1 pc)
this fraction is smaller than 1. This effect is likely due to the distance bias that we discuss in this paper.
Even in the Hi-GAL catalogue we observe the same effect at large distances in the MR plot, but it is less evident
because the sources are more scattered, owing to the
spread in temperature of the Hi-GAL sources.
The sources of <cit.> are less scattered in the MR plot because their masses were derived using only two
fixed values of the temperature.
It is noteworthy that masses in the ATLASGAL literature are also derived through a different dust opacity; this requires rescaling
the coefficient of the KP relation to 580.
Rescaling equation <ref> accordingly, to make it comparable with ATLASGAL data, we obtain:
M_A>855 (r/ [pc])^1.42 M_⊙.
We apply equation <ref> to the data of Table 3 of <cit.>
to derive the fraction of high-mass star forming candidates.
We find that this fraction is 89%, while using the
KP relation we get 94%[Notice that the source radii quoted by <cit.> are not beam-deconvolved.
Using deconvolved radii would, in principle, increase both of the fractions reported here.].
Note that <cit.> consider as a high-mass star forming candidate a clump with a mass larger than 650 M_⊙
that fulfils the KP relation. With this more demanding prescription they found that 71% of the catalogue entries
are massive star forming candidates.
We can also perform the same test on the data of <cit.> from the Bolocam Galactic
Plane Survey <cit.>, again using equation <ref>,
since they used the same dust opacity as the ATLASGAL collaboration.
We get 25% of the sources classified as potentially forming massive stars, against 36% found using the KP relation.
Similarly, we can
apply the prescription of Table <ref> to the Hi-GAL data.
As already mentioned, the most comprehensive catalogue of compact objects at present is provided in <cit.>.
In that catalogue we find that, in the inner Galaxy, there are 62438 objects, 35029
of which have a known distance and are classified as prestellar or protostellar.
We find 23733 objects that fulfil the KP prescription.
This means that 68% of the sources might be able to form a massive star according to the KP prescription.
One can now apply the values of Table <ref> to estimate the fraction of FP. Clearly,
for each source we get a different value of p(k) (equation <ref>); taking the average of the values of p(k) for all the sources,
we get <p(k)>∼ 7%.
Note, however, that the values of Table <ref> were derived for d< 7 kpc, while in the <cit.>
catalogue 63% of the objects are found to lie at
larger distances.
Finally, we apply equation <ref> to the Hi-GAL catalogue to identify the fraction
of high-mass star forming candidates.
We find that, according to our prescription, the fraction of such objects is 52% and 64% for the prestellar
and protostellar clumps, respectively.
§ CONCLUSIONS
Distance bias increasingly affects estimates of the physical parameters (mass, temperature and radius) of far-infrared sources.
This is particularly critical for the Hi-GAL survey, which observed a large area of the sky
spanning a wide range of
heliocentric distances.
In this paper, using the information taken from nearby star forming
regions, we have shown how this bias influences the estimation of these quantities.
The main results of our work are:
* We present an original pipeline to virtually “move” the maps to larger distances.
* The number of sources detected with CuTEx decreases with distance,
in each band, as a power law with exponent between 1.1 and 1.9.
The smallest values of these exponents are found in the 70 μm band.
* The protostellar fraction n_pro/n increases with distance until it reaches a plateau (above 1 kpc).
* The effect of confusion is to increase the physical radius of the compact sources with distance;
we show that the sources are classified, on average, as cores up to
1 kpc and as clumps at larger distances.
* The contribution of the diffuse (inter-core) emission to the flux of a source increases, with respect to the
original population, with distance. This is due both to the increasing physical area of the source and to
the background, which gets lower at larger distances. The smallest effect is systematically found at 70 μ m.
* We found that the average core formation efficiency, for distances above 1500 pc,
depends on the region and ranges from a few percent up to 20%.
* The average temperature derived from SED fits increases quite weakly with distance for the prestellar objects, whereas it decreases slowly for
the protostellar objects.
* In the mass vs radius diagram the fraction of sources classified as compatible with high-mass star formation increases with
distance if one considers the Kauffmann-Pillai (KP) prescription,
whereas it decreases for the Krumholz-McKee (KM) prescription. This happens because
the mean value of the mass is found to be related to the mean value of the radius with a power law with an exponent larger than 1.33 (KP criterion)
and smaller than 2 (KM criterion) for all the investigated regions.
Therefore, adopting the KM prescription to check the presence of MSF clumps one tends to lose genuine candidates with distance, while
adopting the KP criterion one tends to gain false positives at increasing distances.
* We show that the fraction of false positives (FP),
defined as the number of FP (clumps that are classified as MSF at a moved distance, according to the
KP prescription,
but have no association
with MSF objects at the original distance) over the total number of sources, is 13 ± 2%, remaining almost constant
between 1000 and 4000 pc. Above this distance the fraction of “false” high-mass star forming clumps according to the KP criterion climbs
up to almost 40%, but with large associated uncertainties due to the small statistics.
* We estimated how likely it is that
a clump classified as MSF is actually a false positive, as a function of its position with respect to the parametrized area
M_k>k (r/ [pc])^1.33 M_⊙
in the mass vs radius plot.
A dichotomy is found for distances smaller and larger than 4000 pc.
As an example, a probability of 10% is reached at k=1096 for d<4000 pc, and at k=1969 for d>4000 pc.
* We derive a new prescription to identify possible MSF clumps:
M>1282 (r/ [pc])^1.42 M_⊙, which produces a smaller number of false positives than the
KP relation while preserving the same rate of TP.
* We applied our prescription to identify
high-mass star forming candidates in the Hi-GAL dataset: we found that the fraction of high-mass star
forming objects is 52% and 64% for prestellar and protostellar sources, respectively.
* Taking into account a recently derived distance estimate of 436 pc for the Serpens region <cit.>, instead of the one used
throughout this paper (d_0=230 pc), we notice (see appendix <ref>) that the mass vs radius relation changes for a few cores, so that they become
classifiable as high-mass star forming candidates.
This might lead, in turn, to classifying this region as compatible with episodes of MSF.
§ ACKNOWLEDGMENTS
We are grateful to the referee for very useful comments
that helped to greatly improve the paper.
This research has made use of data from the Herschel Gould Belt survey project (http://gouldbelt-herschel.cea.fr).
The HGBS is a Herschel Key Project jointly carried out by SPIRE Specialist Astronomy Group 3 (SAG3), scientists of several institutes
in the PACS Consortium (CEA Saclay, INAF-IAPS Rome and INAF-Arcetri, KU Leuven, MPIA Heidelberg), and scientists of the Herschel Science Center (HSC).
AB, DE, SM, SP, ES, MB, AD, SL, MM's research activity is supported by the VIALACTEA Project, a
Collaborative Project under
Framework Programme 7 of the European Union, funded under Contract 35607380,
which is hereby acknowledged.
§ PLOTS OF THE PERSEUS REGION
In this Appendix we show the moved maps at 70, 160, 350 and 500 μm for the Perseus region described in Section <ref>, and
the maps for the other regions at 250 μm only.
§ DERIVATION OF A NEW HIGH-MASS STAR FORMING PRESCRIPTION
In this Appendix we show how the prescription to identify high-mass star forming candidates, expressed by equation <ref>,
is derived.
We consider all the objects (both prestellar and protostellar) detected in all the regions moved to
d>1000 pc.
We define a potentially high-mass star forming candidate O_hm (as in section <ref>) as
an object detected at a moved distance which is associated with at least one high-mass star forming core (according to the KP relation)
detected at the original distance.
Similarly, we define a low-mass star forming candidate O_lm as
an object detected at a moved distance which is associated only with low-mass star
forming cores detected at the original distance.
We search for a prescription of the form M>A r^a. For a given (A,a) pair,
we define as a true positive (TP) an O_hm whose mass is larger than M, as a false negative (FN) an O_hm
whose mass is smaller than M, as a true negative (TN) an O_lm whose mass is less than M, and as a false positive (FP)
an O_lm whose mass is larger than M.
We search for the (A,a) pair that maximizes the ratio f=(N_TP+N_TN)/(N_FP+N_FN),
where N_TP, N_TN, N_FP and N_FN are the numbers of TP, TN, FP and FN, respectively.
To do this, we build a grid of values of a and A and take those maximizing f.
The grid explores values of a between 1.33 and 2, in 30 steps, and values of A between 870 and 1800, in 200 steps.
We then impose the further constraint that
the fraction of FP should be roughly constant with distance.
It turns out that the best values are A=1282 and a=1.42.
It is also possible to provide an uncertainty on the value of A:
keeping a fixed, we vary A until the total fraction of FP changes by 5%.
For A=1282 the fraction of FP is ∼ 9%, while ∼4% and ∼14% are obtained
for A=2000 and A=940, respectively.
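The grid search can be sketched as follows (Python; the further constraint that the FP fraction be roughly constant with distance is not implemented here, and would act as a post-selection among the best-scoring pairs):

import numpy as np

def best_prescription(masses, radii, is_hm):
    # Maximize f = (N_TP + N_TN) / (N_FP + N_FN) for M > A r^a;
    # is_hm flags O_hm objects (associated with a KP-selected MSF core).
    best = (-np.inf, None, None)
    for a in np.linspace(1.33, 2.0, 30):
        for A in np.linspace(870.0, 1800.0, 200):
            above = masses > A * radii ** a
            n_tp = np.sum(above & is_hm)
            n_tn = np.sum(~above & ~is_hm)
            n_fp = np.sum(above & ~is_hm)
            n_fn = np.sum(~above & is_hm)
            if n_fp + n_fn > 0:
                f = (n_tp + n_tn) / (n_fp + n_fn)
                if f > best[0]:
                    best = (f, A, a)
    return best  # (f, A, a)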
§ ANOTHER DETERMINATION OF THE DISTANCE OF THE SERPENS CLOUD
The distance of the Serpens molecular cloud has been a matter of debate.
In the literature a broad range of distances can be found: for example, <cit.> and <cit.> estimate distances of
225 and 230 pc, respectively, while recently <cit.> placed the Serpens cloud at 436 pc.
Here we want to check how the MR diagram shown in section <ref> is affected
if we adopt a distance of 436 pc instead of
230 pc
(Figure <ref>).
Since the radius and the mass of the sources scale linearly and quadratically with distance, respectively, to get the new values of r and M
we simply multiply these parameters by 436/230 and (436/230)^2, respectively.
Obviously, the previously simulated distances must also be multiplied by 436/230, leading to new simulated distances of
1422, 1896, 2843, 3791, 5687, 9478 and 13270 pc.
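This rescaling is simple arithmetic; the following snippet reproduces the new simulated distances quoted above:

scale = 436.0 / 230.0
# radii scale linearly, masses quadratically: r -> r*scale, M -> M*scale**2
old_distances = [750, 1000, 1500, 2000, 3000, 5000, 7000]  # pc
print([round(d * scale) for d in old_distances])
# [1422, 1896, 2843, 3791, 5687, 9478, 13270]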
Note that, with this new d_0, the Serpens cloud might be considered a potentially MSF region, since in this case
six sources at the original distance fulfil the KP relation.
Note also that at the moved distances this cloud remains classifiable as a MSF region.
Therefore, if we assume d_0=436 pc, the behaviour of this cloud in our analysis is very
similar to that of the Orion A molecular cloud, which
is a genuine high-mass star forming region <cit.>.
It is also interesting to apply the same technique developed in this section to make a further virtual comparative check between
the two regions of our sample with the richest source statistics.
In practice, we want
to understand how the properties of the Perseus cloud would change if it were located at the same distance
as Orion A (415 pc) instead of 235 pc.
We rescale the mass and the radius with the same procedure used above for the Serpens data.
Figure <ref> displays the mass vs radius plot for the Perseus nebula assuming d_0=415 pc.
As one can see from Figs <ref> and <ref>,
the radii of the sources at the original distance for Orion A and Perseus would be quite similar, while the masses in Orion A would still be
larger than in Perseus.
This suggests that, while the size distributions tend to be similar when reported to the same distance (this is somewhat
trivial, since compact sources span the same range of angular sizes), the behaviour of the corresponding masses seems to
be more related to the intrinsic nature of the specific region.
§ REFERENCES

Alves J., Lombardi M., Lada C. J., 2007, A&A, doi:10.1051/0004-6361:20066389
André P., et al., 2010, A&A, doi:10.1051/0004-6361/201014666
Beckwith S. V. W., Sargent A. I., Chini R. S., Guesten R., 1990, AJ, 99, 924
Beltrán M. T., et al., 2013, A&A, doi:10.1051/0004-6361/201321086
Benedettini M., et al., 2015, MNRAS, 453, 2036
Bergin E. A., Tafalla M., 2007, ARA&A, doi:10.1146/annurev.astro.45.071206.100404
Comerón F., 2008, The Lupus Clouds. p. 295
Dunham M. M., Crapsi A., Evans II N. J., Bourke T. L., Huard T. L., Myers P. C., Kauffmann J., 2008, ApJS, 179, 249
Eiroa C., Djupvik A. A., Casali M. M., 2008, The Serpens Molecular Cloud. p. 693
Elia D., Pezzuto S., 2016, MNRAS, 461, 1328
Elia D., et al., 2010, A&A, doi:10.1051/0004-6361/201014651
Elia D., et al., 2013, ApJ, 772, 45
Elia D., Molinari S., Schisano E., Pestalozzi M., et al., 2016, submitted
Ellsworth-Bowers T. P., et al., 2015, ApJ, 805, 157
Genzel R., Stutzki J., 1989, ARA&A, doi:10.1146/annurev.aa.27.090189.000353
Giannini T., et al., 2012, A&A, doi:10.1051/0004-6361/201117811
Ginsburg A., et al., 2013, ApJS, 208, 14
Griffin M. J., et al., 2010, A&A, doi:10.1051/0004-6361/201014519
Hirota T., et al., 2008, PASJ, 60, 961
Kauffmann J., Pillai T., 2010, ApJ, 723, L7
Könyves V., et al., 2015, A&A, doi:10.1051/0004-6361/201525861
Krumholz M. R., McKee C. F., 2008, Nature, 451, 1082
Larson R. B., 1981, MNRAS, 194, 809
Menten K. M., Reid M. J., Forbrich J., Brunthaler A., 2007, A&A, doi:10.1051/0004-6361:20078247
Molinari S., et al., 2010, A&A, doi:10.1051/0004-6361/201014659
Molinari S., Schisano E., Faustini F., Pestalozzi M., di Giorgio A. M., Liu S., 2011, A&A, doi:10.1051/0004-6361/201014752
Molinari S., et al., 2014, Protostars and Planets VI, pp 125-148, doi:10.2458/azu_uapress_9780816531240-ch006
Molinari S., et al., 2016, A&A, doi:10.1051/0004-6361/201526380
Motte F., et al., 2010, A&A, doi:10.1051/0004-6361/201014690
Olmi L., et al., 2013, A&A, doi:10.1051/0004-6361/201220409
Ortiz-León G. N., et al., 2016, preprint (arXiv:1610.03128)
Pezzuto S., et al., 2012, A&A, doi:10.1051/0004-6361/201219501
Piazzo L., Calzoletti L., Faustini F., Pestalozzi M., Pezzuto S., Elia D., di Giorgio A., Molinari S., 2015, MNRAS, 447, 1471
Pilbratt G. L., et al., 2010, A&A, doi:10.1051/0004-6361/201014759
Poglitsch A., et al., 2010, A&A, doi:10.1051/0004-6361/201014535
Polychroni D., et al., 2013, ApJ, 777, L33
Ragan S. E., Moore T. J. T., Eden D. J., Hoare M. G., Elia D., Molinari S., 2016, MNRAS, 462, 3123
Russeil D., et al., 2011, A&A, doi:10.1051/0004-6361/201015852
Straižys V., Černis K., Bartašiūtė S., 2003, A&A, doi:10.1051/0004-6361:20030599
Tan J. C., Beltrán M. T., Caselli P., Fontani F., Fuente A., Krumholz M. R., McKee C. F., Stolte A., 2014, Protostars and Planets VI, pp 149-172, doi:10.2458/azu_uapress_9780816531240-ch007
Veneziani M., et al., 2013, A&A, doi:10.1051/0004-6361/201219570
Wienen M., et al., 2015, A&A, doi:10.1051/0004-6361/201424802
|
http://arxiv.org/abs/1701.07829v1 | 20170126190002 | Galactic Doppelganger: The chemical similarity among field stars and among stars with a common birth origin | [
"M. Ness",
"H-W. Rix",
"David W. Hogg",
"A. R. Casey",
"J. Holtzman",
"M. Fouesneau",
"G. Zasowski",
"D. Geisler",
"M. Shetrone",
"D. Minniti",
"Peter M. Frinchaboy",
"Alexandre Roman-Lopes"
] | astro-ph.SR | [
"astro-ph.SR",
"astro-ph.GA"
] |
1Max-Planck-Institut für Astronomie, Königstuhl 17, D-69117 Heidelberg, Germany
2Center for Cosmology and Particle Physics, Department of Physics, New York University, 726 Broadway, New York, NY 10003
3Center for Data Science, New York University, 60 5th Avenue, New York, NY 10011, USA
4Flatiron Institute, Simons Foundation, 162 Fifth Ave, New York, NY 10010
5Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK
6Department of Astronomy, New Mexico State University, Las Cruces, NM 88003, USA
7Department of Physics & Astronomy, University of Utah, 115 S. 1400 E., Salt Lake City, UT 84112, USA
8Space Telescope Science Institute, 3700 San Martin Drive, Baltimore MD 21218, USA
9Departamento de Astronomia, Casilla 160-C,
Universidad de Concepcion, Chile
10University of Texas at Austin, McDonald Observatory, USA
11Instituto Milenio de Astrofisica, Santiago, Chile
12Departamento de Fisica, Facultad de Ciencias Exactas, Universidad Andres Bello,
Av. Fernandez Concha 700, Las Condes, Santiago, Chile
13Vatican Observatory, V00120 Vatican City State, Italy
14Department of Physics & Astronomy, Texas Christian
University,
TCU Box 298840, Fort Worth, TX 76129, USA
15Department of Physics & Astronomy - Universidad de La Serena - A. Juan Cisternas, 1200 North, La Serena, Chile
ness@mpia.de
We explore to which extent stars within Galactic disk open clusters resemble each other in the high-dimensional space of their photospheric element abundances, and contrast this with pairs of field stars. Our analysis is based on abundances for 20 elements, homogeneously derived from APOGEE spectra (with carefully quantified uncertainties, with a median value of ∼ 0.03 dex). We consider 90 red giant stars in seven open clusters and find that most stars within a cluster have abundances in most elements that are indistinguishable (in a χ^2-sense) from those of the other members, as expected for stellar birth siblings. An analogous analysis among pairs of >1000 field stars shows that highly significant abundance differences in the 20-dimensional space can be established for the vast majority of these pairs, and that the APOGEE-based abundance measurements have high discriminating power. However, pairs of field stars whose abundances are indistinguishable even at 0.03 dex precision exist: ∼ 0.3% of all field star pairs, and ∼ 1.0% of field star pairs at the same (solar) metallicity, [Fe/H] = 0 ± 0.02. Most of these pairs are presumably not birth siblings from the same cluster, but rather doppelganger. Our analysis implies that “chemical tagging” in the strict sense, identifying birth siblings for typical disk stars through their abundance similarity alone, will not work with such data. However, our approach shows that abundances carry extremely valuable information for probabilistic chemo-orbital modeling and, combined with velocities, we have identified new cluster members from the field.
§ INTRODUCTION
There now exists an abundance of spectroscopic data for stars across the Milky Way, from which large numbers of element abundances have been, or can be, measured. These data are being delivered by current surveys including Gaia-ESO <cit.>, APOGEE <cit.>, RAVE <cit.> and GALAH <cit.> and there are numerous future spectroscopic surveys being planned including 4MOST <cit.>, MOONS <cit.> and WEAVE <cit.>. Large volumes of data, combined with new techniques to optimally exploit the data and deliver high precision stellar abundances, e.g., <cit.>, enable us to examine the distribution in abundance space of the stars in the Milky Way disk and halo. This information can then, in principle, be used to constrain the assembly history of our Galaxy. One way to pursue this is to identify stars of common birth sites via their abundance signatures, a process called chemical tagging <cit.>. To do this, we need to first know what the spread, if any, is in the abundance measurements of stars that are known to be born together.
Stars in open clusters are believed to be born from single star forming aggregates and therefore these stars should be chemically homogeneous <cit.>. The chemical similarity of stars born together has been used to identify members of moving groups or co-natal groups in the Milky Way, including using abundances alone <cit.> as well as chemically exceptional groups of stars <cit.>. Many individual clusters have been examined in numerous studies, including using APOGEE data <cit.>, and the measured abundance differences within clusters have been demonstrated to be relatively small, on the order of the measurement errors themselves <cit.>. Statistically there is a much greater abundance similarity among stars within a cluster, compared to stars from different clusters <cit.>. However, although the question of the magnitude of open cluster abundance dispersion has been investigated previously, there has been very little assessment as to the true intrinsic dispersion of clusters in their many elements <cit.>.
The abundance dispersion that we measure at face value in groups of stars that are born together in clusters (or potentially also in associations) is a combination of the intrinsic dispersion of the cluster and the measurement uncertainties. Quantifying this intrinsic dispersion for clusters is indispensable not only for the prospects of chemical tagging; it is also critical for understanding the chemical abundance distribution and dimensionality of the Milky Way disk, and key for being able to determine when stars are likely not to have been born together.
A high level of abundance homogeneity (to within ∼ 0.03 dex) has been demonstrated for three open clusters in APOGEE – M67, NGC6819 and NGC2420 – from the stellar spectra directly <cit.>: after removing temperature trends, the cluster spectra form one-dimensional sequences. This novel data-driven approach was motivated by the difficulties in obtaining consistent high precision stellar abundance measurements, and such an approach is optimal for achieving high precision limits on the intrinsic cluster abundance dispersions, and so assessing the homogeneity of a single birth site itself. Only this data-driven approach, which did not provide actual abundance measurements, has managed to place tight constraints on the intrinsic cluster dispersion <cit.>. However, the limitation of such a method is that it does not return absolute abundance measurements, and as such the open cluster measurements cannot be directly compared with those of the field, which is our aim here.
With our modifications to the Cannon and a carefully selected high-fidelity training set, we achieve high precision and report absolute abundance measurements for 90 open cluster stars. From these consistent, high-precision abundance measurements across ranges of temperature and gravity, we make an assessment of the multi-element abundance dispersion (or limits thereon), given our thorough characterization of our measurement uncertainties. Our results constitute the largest homogeneous and consistent analysis of the intrinsic abundance dispersion within open clusters.
Comparing these high precision abundance measurements in APOGEE among stars in an
open cluster and among disk field stars can shape our expectation for the
chemical similarity of stars that were born together – and those that were presumably not (but are still members of the Milky Way's dominant disk population). We should expect that – with abundance data of this quality – most intra-cluster pairs of stars look very similar or indistinguishable, while most pairs of field stars have discernible abundance differences. Importantly, these data allow us to determine the fraction of field stars that are not related in birth origin, yet are as chemically similar as those born in the same molecular cloud. In their abundances, these pairs of stars look perfectly alike (given the 20 abundances), but are in most cases not birth siblings from the same cluster; we dub those stars doppelganger. The rate of doppelganger obviously plays a decisive role in the efficacy of chemical tagging.
In our analysis, there are two key ingredients for delivering our high precision measurements from APOGEE data: (i) We modified our spectral modeling, The Cannon, to correct for spurious abundance variations across fiber number, caused by the varying Line Spread Function (LSF) across the APOGEE detectors; and in The Cannon's training step we used abundances that have been corrected for LSF variations. (ii) We selected a high fidelity training set using labels from APOGEE's stellar parameter and abundances pipeline ASPCAP <cit.>: 5000 high signal-to-noise stars that made it sensible to train on a total of 23 labels: three stellar parameters (T_eff, log g and [Fe/H]), 19 [X/Fe] individual abundances from DR13 <cit.>, plus the mean Line Spread Function (LSF) of the star. In Section 2 we present our training set, and in Section 3 we discuss our modifications to The Cannon and the training set in order to cope with the problem of the mean abundance changing as a function of LSF. In Section 4 we show the precision we achieve for our abundances, from the cross-validation of the training set, and report the abundances we obtain for the individual clusters. We use our uncertainties to determine the intrinsic spread in each element using our cluster model. In Section 5 we examine the intra-cluster and field pair abundance similarity distributions, and from this we demonstrate that the abundance similarity among pairs of stars within clusters, defined via a χ^2-distance in 20-element abundance space, is far greater than among random pairs of field stars. We also determine the doppelganger rate among Galactic disk stars (for data sets of this quality): we find this rate to be very small (∼ 1%), yet large enough to matter greatly for chemical tagging. In Section 6 we present, as an aside, new cluster members that we have identified from the field using our approach of examining stellar element abundance similarity between pairs. In Section 7 we discuss the implications of our results.
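Although the precise metric is defined in Section 5, a natural form for such a χ^2 distance between two stars over N elements, given per-element uncertainties, is sketched below (Python; illustrative, not necessarily our exact implementation):

import numpy as np

def chi2_distance(x1, e1, x2, e2):
    # x1, x2: abundance vectors (e.g. 20 [X/Fe] values) of the two stars;
    # e1, e2: the corresponding per-element measurement uncertainties.
    x1, e1, x2, e2 = map(np.asarray, (x1, e1, x2, e2))
    return np.sum((x1 - x2) ** 2 / (e1 ** 2 + e2 ** 2))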
Our work makes the first quantification of the doppelganger rate, which directly tests the viability of strict chemical tagging. Our analysis also demonstrates the diagnostic power in the high dimensional abundance space: as part of our analysis, we identify new cluster members, by combining the chemical information using our pair similarity measure and radial velocity data. The high precision measurements that we derive are therefore important for the broader assessment of disk (and cluster) formation. In general, such high precision measurements as we derive, with their carefully characterized uncertainties, are absolutely necessary to make use of the large data sets to characterize the chemical diversity, distribution and, conditioned on expectations of the properties of open clusters, the chemical dimensionality and variability of the Milky Way disk.
§ APOGEE DATA
For our analysis we use two aspects of the APOGEE data: the spectra <cit.> and the stellar parameters and abundances from the SDSS-IV public data release DR13 <cit.>. We performed our own signal-to-noise independent continuum normalization on the aspcapStar files, similarly to <cit.>, by fitting a second order polynomial to pixels identified as having weak parameter dependencies. For all DR13 spectra we selected around 5 percent of the pixels, using the criteria 0.985 < flux < 1.025 and spectral model coefficients (|θ_Teff|, |θ_logg|, |θ_[Fe/H]|) < (0.005, 0.005, 0.005). This normalization was applied consistently to the spectra in both the training and the test step of <cit.>.
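As an illustration, this normalization amounts to dividing each spectrum by a quadratic fit to the preselected continuum pixels; a minimal sketch (hypothetical variable names) is:

import numpy as np

def continuum_normalize(wave, flux, cont_mask):
    # cont_mask selects the ~5 per cent of pixels with weak label
    # dependence (flux cuts plus small spectral-model coefficients).
    coeffs = np.polyfit(wave[cont_mask], flux[cont_mask], 2)
    return flux / np.polyval(coeffs, wave)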
§.§ Training Data
We constructed a training set of data (including reference labels and spectra) that consists of 5000 stars with SNR > 200 that span the chemical space of the stars of the Milky Way disk and broadly encompass the label space of the cluster members in the test step. For our reference labels we used the so-called DR13-corrected labels, with small additional corrections for the LSF dependence (see Section 3). We eliminated stars with highly anomalous abundance measurements, as well as stars with the ASPCAP BAD flag set.
Our reference labels span the following ranges in stellar parameters:

3650 K < Teff < 5760 K

0.45 < logg < 3.95 dex

–1.7 < [Fe/H] < 0.36 dex
§.§ Test Data
Our test data are the spectra of 97 stars in seven open clusters, where our membership is taken from the overlap between those identified by <cit.> and those in the calibration file table [cal_dr13.fits available at sdss.org]. For NGC2420 we took 3 stars, in addition to those identified in <cit.>, that were studied by <cit.>. The properties of the seven open clusters are summarized in Table 1.
§ METHODOLOGY: MODIFICATIONS TO THE CANNON
Precise abundance measurements from high-SNR spectra are almost inevitably limited by systematics. Systematic trends of abundance with temperature and surface gravity are dealt with in APOGEE via a post-calibration <cit.>. The results from this calibration are called DR13-corrected labels. To determine this calibration, the dependence of the abundances on Teff and logg is measured for a set of calibration stars, comprising open and globular cluster stars plus asteroseismic targets (with very precise logg values derived from the asteroseismic parameters). A polynomial is fit to the Teff- and logg-dependent trends and the results for all stars are adjusted by these fits. In the course of the present work,
we found a more subtle systematic signature: abundances correlate with the spectrograph fiber number that the star is assigned to for observation. We attribute this foremost to variations of the LSF across the detector. Nonuniform characteristics of the electronics, such as persistence, which affects only part of the (blue) chip from which the spectra are read out (fiber numbers ≤ 50), may potentially also contribute.
Using the DR13 data, we established that the measured abundances depend on LSF width, as shown in Figure <ref>. This Figure shows the mean DR13 abundances of our training set of stars as a function of the Full Width Half Maximum (FWHM) of the LSF, measured from the apStarLSF files provided by APOGEE. For mapping the overall trends of abundances across the disk these systematics are not significant and will not affect global measured abundance trends and inferences; for example, abundance trends measured across (R,z) are on average inclusive of all fiber numbers, independent of position on the sky (except for targeted co-natal groups like streams and open and globular clusters). However, these
effects may be extremely important when making the most precise abundance comparisons among pairs or groups of stars. It is therefore necessary to remove these trends before assessing abundance homogeneity between stars. Such effects may increase the abundance differences, but they may also make stars observed with the same fiber appear spuriously similar in their abundances (Section 5). If one takes the DR13 abundances unaltered, the abundance dispersions of clusters whose stars were observed mostly with low fiber numbers (which equates to the lowest FWHM of the LSF), e.g. NGC2158, NGC6791, M67, appear higher than those of clusters observed with high fiber numbers. This is particularly manifest for “problematic” elements such as Cu, the mean abundance of which changes very quickly with LSF or fiber number at the lowest FWHM values, equivalent to the lowest fiber numbers (Figure <ref>).
To correct for the LSF variation we add two processing steps to The Cannon; the first affects the reference labels, the second The Cannon's spectral model.
First, we correct the systematic biases in the reference labels shown in Figure <ref> by fitting a 5^th order polynomial to the mean trend of each element and adjusting each element by this fit value (given the FWHM of the LSF of the spectrum under consideration). For most elements this correction is very small or negligible. But for elements like Cu it can be as high as 0.2 dex for the lowest LSF value (see Figure <ref>).
Second, we include the FWHM of the LSF as a label in The Cannon, similarly to the stellar parameters and abundances, and we include the FWHM of the LSF as a data point in our model for each star, which is treated in The Cannon's model in exactly the same way as the flux. We can consider the LSF width as a data point, as we know the mean LSF width from the apStarLSF files produced for each spectrum. To determine the LSF, we take the average value of the standard deviation of the Gaussian fit across every pixel of the apStarLSF array. This approach to handling LSF variations is an approximation to the ideal approach, which would be to treat the LSF width as deterministically known at both train and test time, and handle it as a convolution and deconvolution operation. That, however, would significantly complicate both the training and test steps, without manifest practical advantage.
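A simplified sketch of the two steps, assuming an (N_stars,) array lsf_fwhm of mean LSF FWHM values and an (N_stars, n_labels) array of reference labels; the function names and the exact form of the trend removal are illustrative:

import numpy as np

def lsf_correct_labels(lsf_fwhm, abundances):
    # Step (i): fit a 5th-order polynomial to each element's trend
    # with LSF FWHM and remove the trend's deviation from the
    # element's global mean.
    corrected = abundances.copy()
    for i in range(abundances.shape[1]):
        coeffs = np.polyfit(lsf_fwhm, abundances[:, i], deg=5)
        trend = np.polyval(coeffs, lsf_fwhm)
        corrected[:, i] -= trend - abundances[:, i].mean()
    return corrected

def add_lsf_label(labels, lsf_fwhm):
    # Step (ii): append the mean LSF FWHM as an extra label, treated
    # at training time exactly like the stellar parameters.
    return np.column_stack([labels, lsf_fwhm])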
§.§.§ Cross-Validation of this Revised Data Analysis
Given the slight modifications to The Cannon and the particular reference set, we initiate the analysis with an assessment of the label precision. We do this with a cross-validation test on the training set, similarly to <cit.>. We perform a set of ten leave-10%-out cross-validations, which are illustrated in Figure <ref>. The stars whose label estimates from The Cannon are shown in this Figure had all been excluded from the training set; the model was constructed using the remaining 90% of the training set in each case. The x-axis shows the input training labels and the y-axis shows The Cannon's best-fit labels. We are able to recover the labels to high precision: the stellar parameter precisions are ≲ 45 K in Teff, ≲ 0.1 dex in logg and ≲ 0.02 dex in [Fe/H]. For individual abundances, the precision is 0.02 to 0.12 dex, depending on the element.
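A schematic of this leave-10%-out loop; train_fn and test_fn are stand-ins for The Cannon's training and test steps (not its actual API):

import numpy as np

def cross_validation_scatter(train_fn, test_fn, spectra, labels,
                             n_folds=10, seed=0):
    # Hold out 10% of the training set at a time, train on the rest,
    # and collect the residuals between input and recovered labels.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    residuals = []
    for hold_out in np.array_split(idx, n_folds):
        train = np.setdiff1d(idx, hold_out)
        model = train_fn(spectra[train], labels[train])
        recovered = test_fn(model, spectra[hold_out])
        residuals.append(recovered - labels[hold_out])
    # one rms per label: the cross-validation precision
    return np.vstack(residuals).std(axis=0)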
Our model fit of The Cannon to the test data is excellent in a χ^2 sense, and we show a typical star from the cluster NGC6791 in Figure <ref>. A 300 Å span of the spectrum of this star (in black) and the model from The Cannon (in cyan) is shown in the top panel of the Figure. The pixel-by-pixel scatter term of The Cannon is shown in the panel directly underneath, for the same 300 Å spectral region. The small scatter quantifies that our model is a good fit to the training data <cit.>. Narrow wavelength regions (≈ 30 Å) are also shown in Figure <ref>, to demonstrate the goodness of fit of the model around a number of individual element absorption features. Again, for our model and subsequent derivation of labels, we do not restrict the pixels that The Cannon uses to deliver the abundance information; the cross-validation demonstrates that The Cannon “learns” where the information about each element resides in the spectra, such that the input labels are successfully reproduced at test time.
§ RESULTS
We now present the results from these data in three steps.
First, for an analysis of abundance dispersions, or differences, not only precise
measurements matter, but also a good understanding of the uncertainties:
over- or underestimates of the individual abundance uncertainties would lead to
under- or overestimates of the cluster-intrinsic abundance dispersions,
or the abundance differences among pairs of stars. Second, we proceed to quantify the
abundance dispersions in each of the elements in each of our clusters.
Third, we show how to characterize the abundance similarities, or differences,
among pairs of stars; we then apply this to pairs of stars within a cluster
and to random pairs of stars from the field, which of course speaks immediately to the question of chemical tagging.
§.§ Revised APOGEE Abundances and Abundance Uncertainties for Members of the Open Clusters
Based on the goodness-of-fit χ^2 metric determined above for each star, we excluded seven stars from the cluster NGC6791, with a χ^2 > 22,000. For our analysis we do not exclude cluster stars with relatively low SNR measurements (SNR < 100), but use the SNR-dependent error to estimate the scaled precision at low SNR. We determine the SNR dependence of the precision using a calibration set of data available as part of the public release of DR13, which includes both co-added spectra and individual visit spectra for a set of ≈ 1000 open cluster, globular cluster and calibration stars. A sample of elements is shown in Figure <ref>, where each data point is determined by measuring the variance of the difference between the measured abundance from high SNR spectra (comprised of combined individual observations) and the individual spectra with low SNR, for each of the 1000 stars in the calibration set. The Figure shows the dependence across SNR of 10 to 200, where the precision flattens above SNR of about 150.
Our cluster stars have reported SNR from 60 to 1000. We then adopt abundance uncertainties for each star that are the quadrature sum of the signal-to-noise-scaled uncertainties estimated from the cross-validation and the formal errors returned by The Cannon's test-step optimizer.
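A sketch of this error model for one element; snr_grid and scatter_grid stand for the measured precision-versus-SNR points from the calibration set, and the flattening above SNR ∼ 150 is imposed here by clipping (an assumption of this sketch):

import numpy as np

def star_uncertainty(snr, snr_grid, scatter_grid, formal_err):
    # SNR-scaled cross-validation scatter (snr_grid must be sorted),
    # held flat above SNR ~ 150 where the precision saturates
    cv_scatter = np.interp(min(snr, 150.0), snr_grid, scatter_grid)
    # quadrature sum with the formal error from the test-step optimizer
    return np.sqrt(cv_scatter ** 2 + formal_err ** 2)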
Figures <ref> to <ref> show the individual cluster abundances with their corresponding uncertainties. The points are colored by Teff to check for any systematic abundance trends. The typical 1-σ standard deviations of the elements around their mean values range from 0.01 to 0.1 dex. The cyan points in the background are the labels for all cluster stars used in the training of The Cannon. The DR13 input abundance labels that we adopt for training are already corrected for systematic variations with temperature <cit.>, except for C and N, which are known to have astrophysical variations along the giant branch (and are then additionally corrected as part of our procedure for LSF variation, as described in Section 4). For the open clusters, we note that the coolest stars do seem to have the highest measurements of [N/Fe] within a cluster.
We report the measured abundance mean and variance for each of our 90 open cluster stars in Table 2, along with the corresponding ASPCAP results (where available) and 2MASS IDs. Our results compare very well to the ASPCAP results, but using The Cannon we obtain substantively higher precision, and so can report measurement uncertainties that are 20% to 50% lower than ASPCAP's in most cases. Here we are concerned with precision rather than accuracy; there is a discussion in <cit.> regarding the abundance scale of the open clusters with respect to the literature. Overall, we note there is a large variation in the reported individual element measurements from high-resolution spectroscopy (e.g. see Table 3 of <cit.> for a literature comparison of one of the open clusters, NGC2420).
Table 1: Open Cluster Properties

Cluster         l (deg)   b (deg)   Distance (kpc)  E(B-V)  log (Age)
NGC7789         115.532   -5.385    2.34            0.22    9.235
NGC6819         73.978    8.481     2.36            0.24    9.174
NGC6791         69.959    10.904    4.10            0.12    9.643
NGC2420         198.107   19.634    3.09            0.03    9.048
NGC2158         186.634   1.781     5.07            0.36    9.023
NGC188          122.843   22.384    2.05            0.08    9.632
M67 (NGC2682)   215.696   31.896    0.91            0.06    9.409

The cluster data are from the WEBDA Open Cluster Database, www.univie.ac.at/webda/.
§.§.§ Re-scaling some of the Abundance Uncertainties
The quadrature sum of the formal abundance uncertainty for each star and the cross-validation error (typically 10× larger; see Figure <ref>) represents the most conservative uncertainty estimate for our measurements, essentially an upper limit on the measurement uncertainty; if these uncertainties were too large, the inferred intrinsic dispersion would be too low.
We check this using the best-sampled cluster, NGC6819, with 28 stars, assuming it is chemically homogeneous: the distribution of measured abundance estimates must not be “narrower” than the abundance uncertainties.
Figure <ref> shows the abundance distribution for all elements for NGC6819. The data are shown in the binned histograms, the cross-validation errors are shown as the red Gaussians, and the median formal errors for the stars are shown as the narrow blue Gaussian distributions. Under the assumption that the cluster does not have any intrinsic dispersion in any element, the cross-validation errors appear to be a relatively accurate representation of the measurement uncertainties. However, the uncertainties for some elements, for example [Na/Fe] and [V/Fe], look to be overestimated in this Figure, comparing the wide red Gaussian distributions to the narrower width of the binned data.
The χ^2 statistic for pairs of stars within a cluster asks how likely the abundance measurements of a pair of stars would be if their true abundances were identical. We can therefore use this metric, for all pairs of stars within a cluster, to check whether our uncertainties are over- or underestimated. For each element i, the χ^2 value for a pair of stars n and n' is given by

χ_nn'^2 = (x_ni - x_n'i)^2 / (σ_ni^2 + σ_n'i^2),

where x_ni denotes the measurement for star n with uncertainty σ_ni.
Assuming the true abundances of the cluster stars were identical, we expect a
distribution of χ_nn'^2 that has a mean of 1 and a median of 0.45
for every element. Figure <ref> shows the distribution of
the χ_nn'^2 values for all pairs of stars in our fiducial cluster, NGC6819, for each of our 20 elements. This Figure shows that for a number of elements the distributions have a mean χ^2 < 1 and a median χ^2 < 0.45, implying that the measurement uncertainties adopted so far must be overestimates.
For those elements we decrease the uncertainties by the
factors that make the mean or median of χ^2 match the theoretical expectations.
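A sketch of this rescaling, assuming (N_stars, n_elem) arrays x and err for one chemically homogeneous cluster; the returned factors multiply the uncertainties (rescaling on the mean; the median variant is analogous):

import numpy as np
from itertools import combinations

def rescale_factors(x, err):
    n_stars, n_elem = x.shape
    factors = np.ones(n_elem)
    for i in range(n_elem):
        # per-element chi^2 of all intra-cluster pairs
        chi2 = [(x[n, i] - x[m, i]) ** 2 /
                (err[n, i] ** 2 + err[m, i] ** 2)
                for n, m in combinations(range(n_stars), 2)]
        mean_chi2 = np.mean(chi2)
        # mean chi^2 < 1 implies overestimated errors; shrinking the
        # uncertainties by sqrt(mean chi^2) restores a mean of 1
        if mean_chi2 < 1.0:
            factors[i] = np.sqrt(mean_chi2)
    return factors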
§.§ Individual Abundance Dispersion within Clusters?
Determining whether or not open clusters have a measurable intrinsic abundance spread is critical for the pursuit of chemical tagging and understanding the formation of these systems <cit.>. With the carefully evaluated measurement uncertainties at hand, we are now in a position to determine these intrinsic spreads. At face value, the dispersion of abundance measurements among stars in a given cluster is comparable to the measurement uncertainties for the individual stars for each element. Qualitatively, this implies that the clusters are nearly homogeneous in their abundances.
However, we need to determine formally for each element and for each cluster, what the intrinsic dispersion is, given our data.
Calculating the “observed” abundance dispersion of a cluster can be done by simply fitting a Gaussian to the ensemble of abundance estimates <cit.>. But here we need the “intrinsic” dispersions, which requires us to adopt a simple Gaussian model for the intrinsic abundance distribution P(x_i) of the i-th element within a cluster, which accounts explicitly for the measurement uncertainties:

P({x^o_i,n} | x̄_i, σ_i) = ∏_n=1^N 1/√(2π (δx_i,n^2 + σ_i^2)) · exp ( - (x^o_i,n - x̄_i)^2/(2(δx_i,n^2 + σ_i^2)) ).

Here the mean abundance of each element is x̄_i and the intrinsic (error-corrected) abundance dispersion of each element within a cluster is σ_i. Our observational information is {x^o_i,n, δx_i,n}. Of course, the intrinsic dispersion must be non-negative.
Then we are in a position to calculate the pdf for the intrinsic dispersion in each cluster.
We also optimize the right-hand side of equation (2) over a range of sigma-clipping values of the input data (x_i,n, δx_i,n) and take the maximal value. The typical optimal sigma-clipping is σ_clip > 3. This means that only a few stars are excised by sigma clipping, and for just a few of the elements: for the clusters M67, NGC2420, NGC6791, NGC6819 and NGC7789, between 1 and 3 stars are excised, for as few as 1 element (in NGC2420) and at most 6 elements (in NGC6819).
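A minimal sketch of this inference for one element of one cluster, evaluating the likelihood above on a grid of σ_i (the grid range and the maximum-likelihood treatment of the mean are assumptions of this sketch; sigma clipping is omitted):

import numpy as np

def dispersion_pdf(x_obs, dx, sigma_grid=np.linspace(0.0, 0.2, 401)):
    lnL = np.empty_like(sigma_grid)
    for k, sig in enumerate(sigma_grid):
        var = dx ** 2 + sig ** 2
        # maximum-likelihood mean: the inverse-variance weighted mean
        mean = np.sum(x_obs / var) / np.sum(1.0 / var)
        lnL[k] = -0.5 * np.sum(np.log(2 * np.pi * var)
                               + (x_obs - mean) ** 2 / var)
    pdf = np.exp(lnL - lnL.max())
    pdf /= np.trapz(pdf, sigma_grid)
    best = sigma_grid[np.argmax(pdf)]           # most likely dispersion
    cdf = np.cumsum(pdf) * (sigma_grid[1] - sigma_grid[0])
    median = sigma_grid[np.searchsorted(cdf, 0.5)]
    return sigma_grid, pdf, best, median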
We explored the impact of three slightly different choices for the measurement uncertainties, discussed in Section <ref>: (i) scaling this error by the median scaling value calculated for each element from the χ^2 distribution shown in Figure <ref>; (ii) as (i), but scaling by the mean; and (iii) leaving the uncertainties δx_i,n as the quadrature sum of the formal uncertainty from The Cannon and the signal-to-noise-dependent cross-validation uncertainty for that element. We find that all three error estimates give almost identical results, and we show here the results for the median scaling. We use the median-scaled errors for all the analysis that follows in the paper.
Throughout, we find that the intrinsic abundance mean is very close to the mean of the measurements,
as the abundance uncertainties among different stars within a cluster are similar. As the pdfs for the intrinsic abundance dispersions are by construction restricted to non-negative σ_i, we characterize them by two numbers: the most likely dispersion and the median of the pdf. This is shown in Figure <ref>: in most clusters and for
most elements the intrinsic abundance dispersions are consistent with zero, and likely to be nearly zero.
The medians of the intrinsic dispersion pdfs
(Figure <ref>) show that for all clusters with > 10 stars (enabling robust dispersion estimates) the intrinsic dispersions are typically < 0.02 dex, and < 0.04 dex for almost all of the 20 elements. There are only a few stars with spectra in NGC2158, NGC7789, and NGC188, and the measurements for NGC188 in particular are not robust, on the basis of the small sample size. NGC188 has three stars, and one is a clear outlier in some elements, which drives the large intrinsic dispersion in this cluster.
Our analysis points towards a small but finite intrinsic dispersion in [Fe/H] for two clusters with robust measurements (NGC6819 and M67), at the level of σ = 0.02 dex;
this is true even if the abundance uncertainties were overestimated by 20%. The calculated probability of a zero dispersion value compared to the most likely value is only
p_pdf(σ_[Fe/H]=0)/p_pdf(σ_[Fe/H]_most likely) = 0.07.
In general, these intrinsic abundance dispersion limits (or measurements) for clusters are roughly consistent with the limits placed on the elements by <cit.>. But here they are based on direct abundance determinations.
§.§ Overall Abundance Similarity of Stars within Clusters and in the Field
Based on these data we can set some expectations for how clearly abundances can tell us whether stars in the sample are, or are not, born together in the same cluster, or molecular cloud core. We do this by examining the (dis-)similarity of stars within clusters and in the field. This is relevant for assessing the much larger sample of APOGEE data and the chemical dimensionality and diversity of the Milky Way disk. We note that our sample of open clusters does not span the full range of properties seen in the Galactic disk (i.e. their distribution in [Fe/H], age and Galactocentric radius). However, these clusters do have an abundance distribution that broadly resembles that of the field (disk) red clump sample.
To explore the similarity (over all or many elements) in abundance space for stars within clusters and among field stars, one may be tempted to devise a simple
pseudo-Cartesian measure of distance in abundance space, such as D_nn'^2 = ∑_i=1^I (x_ni - x_n'i)^2,
and then compare the distributions p(D^cluster_nn') and p(D^field_nn').
However, with imperfect measurements x_ni, the use and interpretation of scalar distance measurements in a high-dimensional space is complex and depends strongly on the choices of prior assumptions,
whenever the distances become comparable to the measurement uncertainties, as is the case here;
we sketch this issue out in the Appendix. But it is straightforward to assess, via χ^2, how likely the abundance measurements of a pair of stars would be, if the true abundances were identical:
χ_nn'^2 = ∑_i=1^I (x_ni - x_n'i)^2 / (σ_ni^2 + σ_n'i^2).
The two star indices are n and n', and each star n has a measurement x_ni and each star n' has a measurement x_n'i, for elements i = 1 to I, with uncertainties σ_ni and σ_n'i, respectively. We adopted the median-rescaled uncertainties from Section <ref> here. For each star we determine the neighbor most likely to have identical abundances by minimizing χ_nn'^2.
This χ_nn'^2 metric is then calculated for star pairs within the same cluster, as well as for star pairs drawn from a random sample of 2000 field stars with analogously determined abundances (and uncertainties).
The field stars were chosen to have logg < 3.9 (giants) and SNR > 200, comparably high in SNR to the cluster sample. The results for all pairs, i.e. the distributions p^cluster_pair(χ^2) and p^field_pair(χ^2), are shown in the top panel of Figure <ref>. For these pairs we restricted comparisons to within ΔTeff < 315 K and Δlogg < 0.7, to guard against differences in these “nuisance labels” leading to systematic abundance differences; we found that beyond these limits the χ^2-distance in abundance space is correlated with these parameter differences. The p_pair(χ^2) for pairs from the same cluster are shown in black, and for pairs of 2000 random field stars in red (dashed line).
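A sketch of how these distributions are assembled, assuming (N, 20) arrays x and err of abundances and median-rescaled uncertainties, plus per-star teff and logg arrays:

import numpy as np
from itertools import combinations

def pair_chi2_distribution(x, err, teff, logg,
                           dteff_max=315.0, dlogg_max=0.7):
    chi2 = []
    for n, m in combinations(range(len(x)), 2):
        # guard against nuisance-label differences
        if (abs(teff[n] - teff[m]) > dteff_max or
                abs(logg[n] - logg[m]) > dlogg_max):
            continue
        chi2.append(np.sum((x[n] - x[m]) ** 2 /
                           (err[n] ** 2 + err[m] ** 2)))
    return np.array(chi2)

Run on the cluster members this yields p^cluster_pair(χ^2); run on the 2000 field stars it yields p^field_pair(χ^2).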
Figure <ref> demonstrates that in 20-element abundance space stars within clusters are, unsurprisingly, chemically far more similar than star pairs among the field sample. For pairs within a cluster, p^cluster_pair(χ^2) peaks at ∼ 20, as expected if all stars in a cluster had identical
abundances in all elements; however, there is a significant tail of p^cluster_pair(χ^2) towards
substantively higher values of χ^2, implying that some stars differ in some elements even within
a cluster. Note that we have included the elements C and N in this comparison, even though their photospheric abundances get altered by dredge-up, to a degree that depends in giants on the mass or age. For those elements, similarity implies a combination of near-identical birth-material and age, as expected for open clusters.
For field star pairs p^field_pair(χ^2) is far broader, which is unsurprising, as members of a random field star pair will usually differ even in their most elementary abundance, [Fe/H]. The vast
majority of the p^field_pair(χ^2) values lie at χ^2 values far in excess of χ^2 ∼ 20. However, there is a small fraction of field pairs (∼ 1-2%) whose 20-element abundances are indistinguishable (χ^2 ≤ 25), despite the 0.02-0.03 dex precision that The Cannon affords for individual elements.
An obvious next question is whether the differences between p^cluster_pair(χ^2) and p^field_pair(χ^2) shown in Figure <ref> are primarily driven by differences in the basic [Fe/H], rather than in the high-dimensional [X/Fe]. To address this, we constructed the distributions p^cluster_pair(χ^2) and p^field_pair(χ^2) restricted to solar [Fe/H]. For the intra-cluster pairs we consider only two clusters, M67 ([Fe/H] = 0.0) and NGC6819 ([Fe/H] = 0.03); for the field pairs, we restricted the sample to red giant stars with [Fe/H] = 0.00 ± 0.02. A total of 1054 solar-metallicity field giants (again selected with SNR > 200) were used for this comparison. The resulting distributions are shown in the bottom panel of Figure <ref>.
This Figure shows that the two distributions (intra-cluster and field pairs) remain distinctly different: there is valuable discriminating information in the [X/Fe]. However, considering
a priori only pairs of the same metallicity naturally increases the overlap between the distributions substantially.
In Figure <ref> there are many more pairs (especially intra-cluster pairs) at very small values of χ^2, far more than expected from the chi-squared distribution with 19 or 18 degrees of freedom.
This could have multiple origins, related to our chemical-abundance uncertainty model.
Our chemical-space uncertainty analysis is fundamentally empirical; it presumes that all stars are comparable in their noise properties.
If instead there are variations in the uncertainties, or if the chemical-abundance estimate noise is very non-Gaussian, the distribution of χ^2 values can depart strongly from a chi-squared distribution.
Presumably we are seeing this effect.
In addition, the intra-cluster pair distribution contains a far larger fraction of very small χ^2 values compared to the field pair distribution. This suggests that there are at least some stars for which the chemical abundances are measured with true uncertainties that are much smaller than our baseline estimates.
That is, we may be measuring things more precisely than what is implied by our current (relatively conservative) uncertainty analysis.
§ THE FRACTION OF GALACTIC DOPPELGANGERS
Figure <ref> characterizes the distribution of the abundance similarity of stars that are known to be born together, p^cluster_pair(χ^2), compared to that of random field stars, p^field_pair(χ^2). From this, we can determine the relative fraction of star pairs that appear as chemically similar as cluster stars do to one another, yet were not born in the same environment: so-called doppelganger.
For the field, 0.3% of random pairs have a χ^2 difference that is as small as the median χ^2 among intra-cluster pairs. An additional constraint for stars that are born together is that they must have the same metallicity. Therefore, to assess the implications for the viability of strict chemical tagging, we should consider the abundance similarity of stars at a single metallicity, as shown in the bottom panel of Figure <ref>. For stars at a single (solar) metallicity, 1.0% of field pairs have χ^2 differences as small as the median χ^2 among intra-cluster pairs; these stars are doppelganger.
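The rate computation itself is a one-liner, given the two χ^2 distributions of the previous section (the numerical values in the comments are those quoted above):

import numpy as np

def doppelganger_rate(chi2_cluster, chi2_field):
    # fraction of field pairs at least as chemically similar as the
    # median intra-cluster pair
    return np.mean(chi2_field <= np.median(chi2_cluster))

# all metallicities:   doppelganger_rate(...) -> ~0.003
# solar [Fe/H] only:   doppelganger_rate(...) -> ~0.01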
In examining the similarity of the field pairs at the same metallicity of [Fe/H] = 0, probable new members of two clusters were discovered, as detailed in Section 6. Although negligible in relative number among all the pair combinations, these were removed from the determination of the doppelganger rate calculated above (1.0% at [Fe/H] = 0). The newly identified probable members demonstrate that, with the χ^2 pair-similarity analysis combined with radial velocity measurements, we have an effective technique to identify new cluster members.
§ IDENTIFICATION OF NEW CLUSTER MEMBERS
We examined all `sibling' field pairs that are as chemically similar as cluster pairs, to ensure they are doppelganger and not true cluster members that are simply not included in our list of analyzed clusters. From this, we found four new potential cluster members (all with χ^2 < 7): one pair associated with M67 and one pair associated with NGC7789.
We found these pairs only by using their chemical similarity to each other (as similar as that of the intra-cluster pairs) and then by observing that they had a common velocity and were in proximity to open clusters of the same [Fe/H] and mean radial velocity as the cluster.
The properties of the two newly identified member pairs from the field are summarised in Table 3 (Appendix). The positions of these stars and their radial velocities along with the cluster centre and surrounding field stars are shown coloured by radial velocity in Figure <ref>. The stars associated with NGC7789 have velocities of -53.6 and -54.9 kms^-1 and the stars we associate with M67 have velocities of 34.2 and 34.1 kms^-1. These velocity measurements are all within the respective cluster velocity dispersions: the mean velocity and velocity dispersion of M67 is 33.6 ± 0.8 kms^-1 <cit.>. The mean velocity and velocity dispersion of NGC7789 is -54.9 ± 0.9 kms^-1 <cit.>.
These pairs were excluded from our doppelganger rate determination in Section 5, as they are potentially actual cluster members. We emphasize that these were not identified as members of the clusters via a test of the similarity of their 19 abundances to the clusters themselves.
These pairs have abundance measurements that are generally consistent with those of the clusters, with some exceptions. These exceptions, in the case of M67, lead to a high χ^2 (> 20) between the potential new members of M67 and the known M67 member stars. Based on a comparison of the abundances of these two stars with the known M67 stars alone, these would not be considered members, under the assumption of near-zero intrinsic abundance spread. These differences, however, may be associated with temperature systematics, including those of an astrophysical origin (see the Appendix). The abundances in all 20 elements of the potential new member stars are shown with the mean values for the M67 and NGC7789 clusters in Figure <ref>. The mean Teff and logg of the M67 cluster stars are 4770 K and 2.8 dex, respectively. The pair of stars we associate with M67 are hotter and have higher gravities, at ∼ 5200 K and 3.7 dex (see Table 3 and Figure C1 in the Appendix). Although the high χ^2 is driven by a few outlying elements in particular (e.g. [N/Fe] and [Mg/Fe]), even with the largest outliers removed from the χ^2 comparison, the new potential members remain at a larger χ^2 distance from the M67 members than is typical of the intra-cluster pair comparison. For the pair associated with NGC7789, with Teff and logg values around 4900 K and 2.7 dex (compared to the cluster means of 4600 K and 2.3 dex), three of the NGC7789 cluster stars are within χ^2 < 19 of these potential new members. The proper motions, included in Table 3, are consistent with these four stars being open cluster members.
§ DISCUSSION AND CONCLUSION
In this paper we set out to quantify the similarity among Galactic disk stars with respect to their
detailed photospheric element abundances, to clarify the prospects of chemical tagging with data of APOGEE's quality.
For such an analysis we could draw on a unique abundance data set, derived by applying a modified version of The Cannon to APOGEE spectra to assure maximal abundance precision and well-understood uncertainties.
This re-analysis allowed for simultaneous fitting of the spectral line-spread function, thus eliminating a likely source of subtle but correlated systematic errors in the abundance estimates. The quantification of the abundance uncertainties
included SNR-dependent cross-validation and an error-rescaling for some elements, exploiting the fact that
abundance estimates for member-stars of open clusters should on average not differ by less than their uncertainties.
This left us with a set of Cannon-derived abundances that is unprecedented
in its (pertinent) combination of quality, quantity and homogeneity. We only considered stars with very similar
stellar parameters, giants in a restricted T_eff and log g range;
we have 90 spectra across seven open clusters and thousands of “field stars” with the same data,
SNR and data analysis. For all these stars we have individual abundances for 20 elements
(Fe, C, N, O, Na, Mg, Al, Si, S, K, Ca, Ti, V, Mn, Ni, P, Cr, Co, Cu, Rb), with a median precision of 0.03 dex. We characterize these abundances
by their values [X/Fe], except for iron where we use [Fe/H].
On this basis, we undertook an extensive analysis of the chemical (in-)homogeneity of stars within an open cluster,
testing the standard assumption that such stellar “birth siblings”
were formed at the same time from chemically homogeneous material. This should result in identical element abundances,
including those elements (e.g. C and N), whose photospheric abundance has been altered in giants by mass/age-dependent
dredge-up. We confirm that the abundances of cluster member stars typically agree with each other within
their small (0.03 dex) uncertainties, as had been found before (see Section 1). Moving beyond, we explicitly determined the
intrinsic dispersion in these abundances, or derived upper limits for them. We found that
(at least in the best-sampled clusters) the abundance dispersions are essentially zero,
with typical upper limits of ∼ 0.03 dex. However, there are exceptions: some elements in some clusters show
small but significant dispersion, attributable to a subset of the member stars.
To compare the abundance similarity of stars with a known common birth origin (cluster members) to
the mutual similarity of field stars, we compared the abundances among pairs of stars. It was tempting to
define a distance measure in the high-dimensional space of element abundances; but the “curse of dimensionality”
makes distance in a 20-dimensional space very dependent on prior assumptions about distance distributions, once the distances
become comparable to the measurement uncertainties. Therefore, we resorted to a simple χ^2 statistic,
p_pair(χ^2), quantifying the likelihood of the 20 observed abundance (differences)
for any pair of stars, assuming their intrinsic abundances were identical.
When we construct p^cluster_pair(χ^2) for intra-cluster pairs of stars, we find that its median is similar to
N_elements ≈ 20, again illustrating the level of chemical homogeneity within clusters. However, about 20% of intra-cluster star pairs have χ^2 ≥ 40, indicating significant abundance differences in one or more elements.
We find open clusters to be mostly, but not exclusively, homogeneous.
When we construct p^field_pair(χ^2) for pairs of field stars, it looks overall very different.
The vast majority of the support of this distribution lies at χ^2≥ 40: most pairs of field stars can be clearly recognized as having differing abundances. In particular, 99.7% of the field pairs have χ^2 in excess of the median χ^2 for intra-cluster pairs. Part of why p^field_pair(χ^2) looks so different from
p^cluster_pair(χ^2) is of course that the Galactic (thin) disk has a metallicity spread of about 1 dex;
but considering the analogous distributions restricted to solar metallicity, shows that the remaining abundances
still have great discriminating power. Most commonly the abundances of field pairs at the same [Fe/H] (e.g. |Δ[Fe/H]| < 0.02) are inconsistent with being identical; 99% of field star pairs of the same metallicity appear chemically non-identical on the basis of their other element abundances with APOGEE data.
Of course, the above result implies in turn that 0.30% of random pairs among Galactic disk stars have indistinguishable abundances, even with the typical 0.03 dex measurement precision for ∼ 20 elements that APOGEE data afford; this has remarkable consequences. Such pairs are much more common than any plausible incidence of a common birth site (former cluster), 10^-4 to 10^-6. Therefore most of them cannot be birth siblings, but are mere doppelganger: stars not immediately related by birth, yet looking near-perfectly alike in their abundance patterns. This rate of field-star doppelganger has not been quantified before, and has important implications for chemical tagging. Note that this rate estimate applies to stars with typical Galactic disk abundances, -0.7 ≤ [Fe/H] ≤ 0.3 and [α/Fe] ∼ solar, where abundance space is most densely populated with stars. In the metal-poor old disk or the halo, the doppelganger rate may be much lower <cit.>, as abundances show greater diversity, and therefore abundance space is more sparsely populated.
It will be interesting to explore whether the inclusion of extensive sets of s- and r-process elements (e.g. from Galah) will change this picture qualitatively.
It is also worth noting that the χ^2 statistic that we use here, while extremely straightforward, is unlikely to be the optimal way to discriminate cluster pairs from field pairs. In particular, there are elements (foremost [Fe/H] and the α-elements) in which the abundance variation among field pairs is large compared to our measurement precision. Conversely, there may be elements (when viewed as [X/Fe]) where even random field pairs are unlikely to differ by more than the measurement uncertainties (see Figure <ref>). Those elements are then rather uninformative when it comes to discriminating the two hypotheses, and will mostly add variance to the χ^2 sum. This suggests that a sum of the individual element differences, without weighting and covariance terms, is likely sub-optimal. However, tests on our current set of elements, taking only a subset of the most informative elements for our χ^2 calculation, indicate that there is no gain expected from removing the least informative elements. A rigorous development of a near-optimal statistic to discriminate birth siblings from field pairs (i.e. minimizing the doppelganger rate) requires a full characterization of the (error-deconvolved) 20-dimensional abundance distribution of the field <cit.>. This is beyond the scope of this paper. However, simple experiments restricting the χ^2 sum of the abundance differences to subsets of the most discriminating elements showed that the basic picture drawn up here is unlikely to change; the existence of an important doppelganger population in APOGEE-like data is not a consequence of the sub-optimal χ^2 statistic.
To summarize the assessment of abundance (χ^2) similarities or differences,
with 20 abundances at a typical 0.03 dex precision level at hand:
most - but not all – pairs of stars in clusters are chemically alike; most – but not all – pairs of field stars are chemically different.
In conclusion, we can now discuss what these findings mean for “chemical tagging”. Such studies of chemical similarity
among stars in the context of Milky Way evolution have pursued two main goals. One is an exploration of the
Galactic history of chemically homogeneous birth sites of stars, i.e. the history of the (now disrupted) cluster mass function <cit.>. This “strict chemical tagging” would require attributing a common birth origin to, say, pairs of stars with considerable confidence, on the basis of abundances alone. Even in optimistic scenarios for the main stellar disk of the Milky Way, only a fraction of 10^-4.5 of random star pairs would be such birth siblings (whereas our doppelganger measurement is equivalent to 10^-2).
The other goal is the empirical derivation of a successively more detailed chemo-orbital distribution of stars in the Galaxy: which stars, with which abundances, are on which orbits. Even considering only [Fe/H] and [α/Fe], it is well known that the Galactic disk structure varies distinctly with these abundances <cit.>. Therefore, more intricate patterns, involving more abundances and ages, could and should provide better constraints on radial migration or heating.
Such “broad chemical tagging" (or chemical labeling) studies take detailed abundances foremost as a lifelong label of stars,
independent of orbital phase, without making explicit reference to a common birth site among pairs or groups of stars.
Our results presented here, in particular the significant incidence of doppelganger stars in the field imply that
strict chemical tagging is presumably not possible, certainly not for the Milky Way's main stellar disk components, even with the data quality level presented here. When considering
whether a pair of stars was likely born in the same site at the same time, one must consider not only the
data likelihoods presented above, but also the prior expectation for “sibling” or doppelganger, p_sibl
and p_dg, respectively. Then we have
p(sibl | data)/p(dg | data) = p(data | sibl)/p(data | dg)×p_prior(sibl)/p_prior(dg),
where
p(data | sibl)/p(data | dg)≈ p^cluster_pair(data)/p^field_pair(data)
from Figure <ref>,
and the ratio of priors is p_prior(sibl)/p_prior(dg) = 10^-4 to 10^-6. As Figure <ref> shows, p^cluster_pair(data) and p^field_pair(data) have largely the same support; given that the ratio of priors is so small, it seems hard to imagine obtaining p(sibl | data)/p(dg | data) ≥ 1.
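To make the magnitude of this argument concrete, a toy evaluation of the posterior odds (the likelihood ratio of 100 and prior odds of 10^-5 are illustrative values, not measurements):

def posterior_odds(likelihood_ratio, prior_odds=1e-5):
    # posterior odds = data-likelihood ratio x prior odds
    return likelihood_ratio * prior_odds

# even a generous likelihood ratio in favor of "sibling" cannot
# overcome the prior:
print(posterior_odds(100.0))   # -> 0.001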
While this analysis may dampen the prospects for strict chemical tagging, it is good news for detailed chemical labeling: the analysis shows that the Galactic stellar disk population can be “sliced” in much more detailed, abundance-based ways, which will provide new, and probably powerful, constraints on the formation of the Milky Way. The C- and N-based age estimates are just one recent example of this approach. Furthermore, combining velocity information (and eventually full dynamics) with the abundance analysis demonstrates promising prospects for the identification of cluster members.
§ ACKNOWLEDGEMENTS
The authors thank Andy Gould for useful discussions. The authors thank Baitan Tang for a review of the manuscript.
M. Ness and H.-W. Rix acknowledge funding from the European Research Council under the
European Union's Seventh Framework Programme (FP 7) ERC Advanced Grant Agreement n. [321035]. H.-W. Rix acknowledges support of the Miller Institute at UC Berkeley through a visiting professorship during the completion of this work.
DWH was partially supported by the NSF (grants IIS-1124794 and AST-1517237), NASA (grant NNX12AI50G), and the Moore-Sloan Data Science Environment at NYU.
D.G. and D.M. gratefully acknowledge support from the BASAL Centro de Astrofísica y Tecnologías Afines (CATA) grant PFB-06/2007. D.M. is also supported by the Ministry for the Economy, Development and Tourism, Programa Iniciativa Científica Milenio grant IC120009, awarded to the Millennium Institute of Astrophysics (MAS), and by FONDECYT No. 1130196.
PMF gratefully acknowledges support from the National Science Foundation through award AST-1311835.
Funding for the Sloan Digital Sky Survey IV has been provided by the
Alfred P. Sloan Foundation, the U.S. Department of Energy Office of
Science, and the Participating Institutions. SDSS acknowledges
support and resources from the Center for High-Performance Computing at
the University of Utah. The SDSS web site is www.sdss.org.
SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration including the Brazilian Participation Group, the Carnegie Institution for Science, Carnegie Mellon University, the Chilean Participation Group, the French Participation Group, Harvard-Smithsonian Center for Astrophysics, Instituto de Astrofisica de Canarias, The Johns Hopkins University, Kavli Institute for the Physics and Mathematics of the Universe (IPMU) / University of Tokyo, Lawrence Berkeley National Laboratory, Leibniz Institut für Astrophysik Potsdam (AIP), Max-Planck-Institut fur Astronomie (MPIA Heidelberg), Max-Planck-Institut fur Astrophysik (MPA Garching), Max-Planck-Institut fur Extraterrestrische Physik (MPE), National Astronomical Observatories of China, New Mexico State University, New York University, University of Notre Dame, Observatório Nacional / MCTI, The Ohio State University, Pennsylvania State University, Shanghai Astronomical Observatory, United Kingdom Participation Group, Universidad Nacional Autonoma de México, University of Arizona, University of Colorado Boulder, University of Oxford, University of Portsmouth, University of Utah, University of Virginia, University of Washington, University of Wisconsin, Vanderbilt University, and Yale University.
§ DISTANCES IN HIGH-DIMENSIONAL SPACES
In the context of chemical tagging it is tempting to assign a scalar or vector measure of abundance-space distance, D (or D⃗) to pairs of stars.
Such a measure of distance would have to be derived from abundance measurements with uncertainties. In this Appendix we show why the interpretation of such distance measures becomes problematic in high-dimensional spaces, as soon as the true or presumed distances become comparable to the measurement uncertainties. This is why in the analysis presented in this paper, we restrict ourselves to asking how likely the abundance data on a pair of stars are (in the χ^2-sense) if their abundances were identical.
Let us first consider how high-dimensional the space of abundances is; already here there is no simple unique answer. Several measures of dimensionality may play a role: in principle, the dimensionality of abundance space is the length of the periodic system (and then there are isotopes); in practice, measurements only provide constraints on a subset of between 1 and 35 elements; finally, there is the question of how many astrophysically non-degenerate dimensions abundance space has.
In principle, it is straightforward to assign a metric to abundance space, e.g. some single Cartesian measure of distance between two points in L-dimensional abundance space: D_nn'^2 ≡ ∑_l=1^L D_l^2, with D_l ≡ [X_n,l/H] - [X_n',l/H].
However, in practice we would need to make a probabilistic statement about the abundance-space distance between two stars, given a set of abundance measurements for those two stars. To get a pdf for any measure of distance, one would have to specify a prior expectation for that distance
measure, which then gets modified by the data likelihood.
But if one wants to have a pdf for a simple scalar/1D distance measure in light
of independent measurements in a high-dimensional abundance space, it appears difficult to avoid that the resulting p_pdf(D_scalar) depends very strongly on the priors, if the intrinsic distance becomes comparable to the measurement uncertainties; this is the pertinent regime for chemical tagging.
To illustrate this, we spell out the case of a one-dimensional abundance measurement specifically, and then discuss how to generalize it.
For a pair of stars we have measurements of their abundances, m_1 and m_2, with measurement uncertainties σ_1 and σ_2. We presume that their true abundances can be characterized by a mean μ and a separation D, as μ - D/2 and μ + D/2, with D = 2d.
We then get

p_pdf(D, μ | m_1, m_2, σ_1, σ_2) ∝ p_L(m_1, m_2 | D, μ, σ_1, σ_2) · p_prior(D, μ).

We can now spell out the data likelihood

p_L(m_1, m_2 | D, μ, σ_1, σ_2) = 1/(2 · 2π σ_1 σ_2) ( exp -[ (m_1 - μ + D/2)^2/(2σ_1^2) + (m_2 - μ - D/2)^2/(2σ_2^2) ] + exp -[ (m_1 - μ - D/2)^2/(2σ_1^2) + (m_2 - μ + D/2)^2/(2σ_2^2) ] );

when one integrates over μ (with a flat prior in μ) and simplifies, this becomes

p_L(m_1 - m_2 | D, σ_1, σ_2) = 1/√(2π (σ_1^2 + σ_2^2)) exp ( - (D^2 + (m_1 - m_2)^2)/(2(σ_1^2 + σ_2^2)) ) × cosh ( D(m_1 - m_2)/(σ_1^2 + σ_2^2) ).

For the hypothesis that the abundances are identical, i.e. D ≡ 0, this becomes

p_L,D≡0(m_1 - m_2 | σ_1, σ_2) ∝ 1/√(2π (σ_1^2 + σ_2^2)) exp ( - (m_1 - m_2)^2/(2(σ_1^2 + σ_2^2)) ),

corresponding to the simple χ^2-expression from Eq. 2, describing the likelihood of the data under the hypothesis that D = d ≡ 0.
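A quick numerical check of this marginalization, verifying that the cosh form above is identical to the equal-weight two-Gaussian mixture it derives from (symbols as above; a sketch, not analysis code):

import numpy as np

def p_delta_given_D(delta, D, sig1, sig2):
    # marginal likelihood of delta = m_1 - m_2 given true separation D
    s2 = sig1 ** 2 + sig2 ** 2
    return (np.exp(-(D ** 2 + delta ** 2) / (2 * s2))
            * np.cosh(D * delta / s2) / np.sqrt(2 * np.pi * s2))

def mixture(delta, D, sig1, sig2):
    # the two true-separation assignments, each with probability 1/2
    s2 = sig1 ** 2 + sig2 ** 2
    g = lambda mu: (np.exp(-(delta - mu) ** 2 / (2 * s2))
                    / np.sqrt(2 * np.pi * s2))
    return 0.5 * (g(D) + g(-D))

assert np.isclose(p_delta_given_D(0.03, 0.02, 0.02, 0.02),
                  mixture(0.03, 0.02, 0.02, 0.02))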
An extension of Eq. <ref> to the L-dimensional case, say L = 10, brings us to a well-established conundrum. If we stick to the seemingly natural prior p_prior(D_l) = ℋ(D_l) for all l, then this implies as a prior for the L-distance D_L: p_prior(D_L) ∝ (D_L)^(L-1). The same problem arises for any prior p_prior(D_l) that is flat for small |D_l|. As for a χ^2 distribution, the inferred most likely (scalar) distance D^2 would then always tend towards the sum of the squared measurement uncertainties.
§ NEW CLUSTER MEMBERS
Table 3: Newly identified NGC7789 and M67 Cluster Members from the Field

No.  2MASS ID             V_helio (kms^-1)  Distance from cluster centre (deg)  Teff (K)  logg (dex)  PMRA (masyr^-1)  PMDEC (masyr^-1)
1    2M23564304+5650477   -53.63            0.1                                 4890      2.7         -4.7 ± 5.3       -1.6 ± 5.3
     2M23570895+5648504   -54.90            0.17                                4927      2.7         0.1 ± 3.9        2.1 ± 3.9
2    2M08510018+1154321   34.2              0.15                                5202      3.7         -7.4 ± 1.0       -5.4 ± 1.0
     2M08511877+1151186   34.1              0.06                                5161      3.7         -6.9 ± 1.1       -6.2 ± 1.1

Identified additional (pair) members of the clusters NGC7789 (pair 1) and M67 (pair 2). The proper motions are sourced from the PPMXL catalogue <cit.>. The proper motion of M67 is (PMRA, PMDEC) = (-7.64 ± 0.07, -5.98 ± 0.07) masyr^-1 <cit.> and that of NGC7789 is (PMRA, PMDEC) = (-2.2 ± 0.22, -1.1 ± 0.22) masyr^-1.
§ MIXING OF CARBON AND NITROGEN ALONG THE GIANT BRANCH
Figure <ref> shows both the known cluster stars and our new members in the Teff and logg plane for M67. For the pair comparisons, the stars are restricted to being near in Teff and logg. However, for the comparison shown in Figure <ref>, while the pairs are themselves similar, they are not required to be similar to the cluster stars from which the mean abundance values are determined. The newly identified members at the hot end of the giant branch have [N/Fe] measurements that fall outside the 1-σ mean cluster measurements (calculated from stars at intermediate Teff and logg values along the giant branch). There are also small discrepancies in a few of the elements in M67 for the hot stars at the base of the giant branch compared to the mean abundance of M67 stars; this is also likely due to small systematic temperature dependencies of the stellar-model abundance determinations propagated by The Cannon (see the caption of Figure <ref>).
natexlab#1#1
[Bovy(2016)]Bovy2016
Bovy, J. 2016, , 817, 49
[Bovy et al.(2012)Bovy, Rix, Liu, Hogg, Beers, &
Lee]Bovy2012
Bovy, J., Rix, H.-W., Liu, C., et al. 2012, , 753, 148
[Casey et al.(2016a)Casey, Hogg, Ness, Rix,
Ho, & Gilmore]Casey2016
Casey, A. R., Hogg, D. W., Ness, M., et al. 2016a, ArXiv
e-prints, arXiv:1603.03040
[Casey et al.(2016b)Casey, Hawkins, Hogg,
Ness, Walter-Rix, Kordopatis, Kunder, Steinmetz, Koposov, Enke,
Sanders, Gilmore, Zwitter, Freeman, Casagrande, Matijevič,
Seabroke, Bienaymé, Bland-Hawthorn, Gibson, Grebel, Helmi,
Munari, Navarro, Reid, Siebert, & Wyse]Casey2016b
Casey, A. R., Hawkins, K., Hogg, D. W., et al. 2016b,
ArXiv e-prints, arXiv:1609.02914
[Cirasuolo et al.(2012)Cirasuolo, Afonso, Bender,
Bonifacio, Evans, Kaper, Oliva, Vanzi, Abreu, Atad-Ettedgui,
Babusiaux, Bauer, Best, Bezawada, Bryson, Cabral, Caputi,
Centrone, Chemla, Cimatti, Cioni, Clementini, Coelho, Daddi,
Dunlop, Feltzing, Ferguson, Flores, Fontana, Fynbo, Garilli,
Glauser, Guinouard, Hammer, Hastings, Hess, Ivison, Jagourel,
Jarvis, Kauffman, Lawrence, Lee, Li Causi, Lilly, Lorenzetti,
Maiolino, Mannucci, McLure, Minniti, Montgomery, Muschielok,
Nandra, Navarro, Norberg, Origlia, Padilla, Peacock, Pedicini,
Pentericci, Pragt, Puech, Randich, Renzini, Ryde, Rodrigues,
Royer, Saglia, Sánchez, Schnetler, Sobral, Speziali, Todd,
Tolstoy, Torres, Venema, Vitali, Wegner, Wells, Wild, &
Wright]C2012
Cirasuolo, M., Afonso, J., Bender, R., et al. 2012, in , Vol.
8446, Ground-based and Airborne Instrumentation for Astronomy IV, 84460S
[Cunha et al.(2015)Cunha, Smith, Johnson, Bergemann,
Mészáros, Shetrone, Souto, Allende Prieto, Schiavon,
Frinchaboy, Zasowski, Bizyaev, Holtzman, García Pérez,
Majewski, Nidever, Beers, Carrera, Geisler, Gunn, Hearty,
Ivans, Martell, Pinsonneault, Schneider, Sobeck, Stello,
Stassun, Skrutskie, & Wilson]Cuhna2015
Cunha, K., Smith, V. V., Johnson, J. A., et al. 2015, , 798, L41
[Dalton et al.(2014)Dalton, Trager, Abrams, Bonifacio,
López Aguerri, Middleton, Benn, Dee, Sayède, Lewis,
Pragt, Pico, Walton, Rey, Allende Prieto, Peñate, Lhome,
Agócs, Alonso, Terrett, Brock, Gilbert, Ridings, Guinouard,
Verheijen, Tosh, Rogers, Steele, Stuik, Tromp, Jasko, Kragt,
Lesman, Mottram, Bates, Gribbin, Rodriguez, Delgado, Martin,
Cano, Navarro, Irwin, Lewis, Gonzalez Solares, O'Mahony,
Bianco, Zurita, ter Horst, Molinari, Lodi, Guerra, Vallenari,
& Baruffolo]D2012
Dalton, G., Trager, S., Abrams, D. C., et al. 2014, in , Vol.
9147, Ground-based and Airborne Instrumentation for Astronomy V, 91470L
[de Jong & Consortium(2015)]deJong2015
de Jong, R. S., & Consortium, . 2015, IAU General Assembly, 22, 2255843
[De Silva et al.(2007)De Silva, Freeman, Asplund,
Bland-Hawthorn, Bessell, & Collet]deSilva2007
De Silva, G. M., Freeman, K. C., Asplund, M., et al. 2007, , 133,
1161
[de Silva et al.(2009)de Silva, Gibson, Lattanzio, &
Asplund]deSilva2009
de Silva, G. M., Gibson, B. K., Lattanzio, J., & Asplund, M. 2009,
, 500, L25
[De Silva et al.(2015)De Silva, Freeman, Bland-Hawthorn,
Martell, de Boer, Asplund, Keller, Sharma, Zucker, Zwitter,
Anguiano, Bacigalupo, Bayliss, Beavis, Bergemann, Campbell,
Cannon, Carollo, Casagrande, Casey, Da Costa, D'Orazi, Dotter,
Duong, Heger, Ireland, Kafle, Kos, Lattanzio, Lewis, Lin,
Lind, Munari, Nataf, O'Toole, Parker, Reid, Schlesinger,
Sheinis, Simpson, Stello, Ting, Traven, Watson, Wittenmyer,
Yong, & Žerjal]deSilva2015
De Silva, G. M., Freeman, K. C., Bland-Hawthorn, J., et al. 2015,
, 449, 2604
[Freeman & Bland-Hawthorn(2002)]freeman2002
Freeman, K., & Bland-Hawthorn, J. 2002, , 40, 487
[Freeman(2012)]Freeman2012
Freeman, K. C. 2012, in Astronomical Society of the Pacific Conference
Series, Vol. 458, Galactic Archaeology: Near-Field Cosmology and the
Formation of the Milky Way, ed. W. Aoki, M. Ishigaki, T. Suda,
T. Tsujimoto, & N. Arimoto, 393
[Frinchaboy et al.(2013)Frinchaboy, Thompson, Jackson,
O'Connell, Meyer, Zasowski, Majewski, Chojnowksi, Johnson,
Allende Prieto, Beers, Bizyaev, Brewington, Cunha, Ebelke,
García Pérez, Hearty, Holtzman, Kinemuchi, Malanushenko,
Malanushenko, Marchante, Mészáros, Muna, Nidever,
Oravetz, Pan, Schiavon, Schneider, Shetrone, Simmons, Snedden,
Smith, & Wilson]F2013
Frinchaboy, P. M., Thompson, B., Jackson, K. M., et al. 2013, ,
777, L1
[Gao(2016)]Gao2016
Gao, X.-H. 2016, Research in Astronomy and Astrophysics, 16, 184
[García Pérez et al.(2016)García Pérez,
Allende Prieto, Holtzman, Shetrone, Mészáros, Bizyaev,
Carrera, Cunha, García-Hernández, Johnson, Majewski,
Nidever, Schiavon, Shane, Smith, Sobeck, Troup, Zamora,
Weinberg, Bovy, Eisenstein, Feuillet, Frinchaboy, Hayden,
Hearty, Nguyen, O'Connell, Pinsonneault, Wilson, &
Zasowski]GP2016
García Pérez, A. E., Allende Prieto, C., Holtzman, J. A.,
et al. 2016, , 151, 144
[Geller et al.(2015)Geller, Latham, & Mathieu]Geller2015
Geller, A. M., Latham, D. W., & Mathieu, R. D. 2015, , 150, 97
[Gilmore et al.(2012)Gilmore, Randich, Asplund, Binney,
Bonifacio, Drew, Feltzing, Ferguson, Jeffries, Micela,
arXiv:1701.08064v2 [physics.optics], 27 Jan 2017
Directional Emission from Dielectric Leaky-Wave Nanoantennas

M. Peter,1 A. Hildebrandt,2 C. Schlickriede,3 K. Gharib,1 T. Zentgraf,3 J. Förstner,2 and S. Linden1,*
1University of Bonn, Physikalisches Institut, D-53115, Bonn, Germany
2University of Paderborn, Department of Electrical Engineering, D-33098 Paderborn, Germany
3University of Paderborn, Department of Physics, D-33098, Paderborn, Germany
*linden@physik.uni-bonn.de

An important source of innovation in nanophotonics is the idea to scale down known radio-wave technologies to the optical regime. One thoroughly investigated example of this approach is metallic nanoantennas, which employ plasmonic resonances to couple localized emitters to selected far-field modes. While metals can be treated as perfect conductors in the microwave regime, their response becomes Drude-like at optical frequencies. Thus, plasmonic nanoantennas are inherently lossy. Moreover, their resonant nature requires precise control of the antenna geometry. A promising way to circumvent these problems is the use of broadband nanoantennas made from low-loss dielectric materials. Here, we report on highly directional emission from hybrid dielectric leaky-wave nanoantennas made of hafnium dioxide nanostructures deposited on a glass substrate. Colloidal semiconductor quantum dots deposited in the nanoantenna feed gap serve as a local light source. The emission patterns of hybrid nanoantennas with different sizes are measured by Fourier imaging. We find for all antenna sizes a highly directional emission, underlining the broadband operation of our design.
Nanoantennas have become valuable elements of the photonics toolbox to control and manipulate light on the nanoscale <cit.>. They allow for an efficient interconversion of localized excitations and propagating electromagnetic waves <cit.>. In receiving mode, nanoantennas can locally increase the light intensity by several orders of magnitude <cit.>. This property can be used for the efficient excitation of quantum emitters <cit.> and to boost nonlinear effects <cit.>.
In transmitting mode, coupling of quantum emitters to nanoantennas allows for the control of the emission properties<cit.>. For instance, Curto et al reported on a highly directional plasmonic Yagi-Uda antenna<cit.> and Lee et al. demonstrated a planar dielectric antenna with near unity collection efficiency<cit.>.
Like their microwave counterparts, nanoantennas can be categorized based on their functional principle into two large groups: (i) resonant antennas and (ii) nonresonant traveling wave antennas. So far, most research has focused on resonant nanoantennas based either on plasmonic resonances in metals <cit.> or on Mie resonances in high-refractive index dielectrics<cit.>. The latter offer the prospect of reducing dissipative losses while still providing large resonant enhancements of the electromagnetic near field. A recent review on optically resonant dielectric nanoantennas can be found in reference <cit.>. Moreover, dielectric antennas have been used in dielectric gradient metasurfaces as scattering elements <cit.>. In contrast to this, traveling wave antennas operating at optical frequencies have been studied considerably less. However, there is a growing interest in transferring the traveling wave concept to higher operating frequencies in order to achieve non-resonant broadband operation <cit.>.
Leaky-wave antennas are a subset of traveling wave antennas that emit radiation over the whole length of a non-resonant guiding structure supporting the traveling wave <cit.>. In the case of a leaky-wave antenna with uniform cross-section, the phase velocity of the guided wave has to be larger than the velocity of light in the medium, into which the wave is radiated. The beam direction θ _beam measured from the optical axis can be estimated (see Fig. <ref>a) by sin( θ _beam)=β/k_g, where β is the propagation constant of the leaky mode for the given cross-section and k_g the wave number in the medium.
The propagation constant β, and, hence, the beam direction can be controlled by designing the cross-section of the guiding structure. The finite length of the wave guide as well as the radiation losses give rise to side lobes. The complete radiation pattern in the far field can be obtained by solving the Fraunhofer diffraction integral of the aperture distribution <cit.>.
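As a quick plausibility check of this relation, the beam angle can be evaluated numerically. The following sketch is illustrative only: the effective mode index n_eff = β/k_0 is a hypothetical value chosen for demonstration, not a quantity taken from this work.

```python
import numpy as np

n_glass = 1.52   # refractive index of the substrate
n_eff = 1.43     # hypothetical effective index of the leaky mode (beta / k0)

# sin(theta_beam) = beta / k_g = n_eff / n_glass
sin_theta = n_eff / n_glass
if sin_theta < 1.0:          # leaky emission requires beta < k_g
    theta_beam = np.degrees(np.arcsin(sin_theta))
    print(f"beam angle in the glass: {theta_beam:.1f} deg")
else:
    print("beta >= k_g: mode is guided, no leaky-wave emission")
```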
In this letter, we report on a hybrid, dielectric, leaky-wave wave antenna for optical frequencies with high directivity. Our antenna design<cit.> consists of only two simple dielectric building blocks deposited on a glass substrate. The total length of the two dielectric building blocks is approximately three times the free space operation wavelength. The design can be easily adapted to various low-loss dielectric materials. Moreover, its non-resonant nature makes our antenna design inherently robust against fabrication imperfections and guarantees broad-band operation.
The leaky-wave antennas consist of Hafnium dioxide (HfO_2) nanostructures deposited on a microscope cover slip and use colloidal semiconductor quantum dots (CdSeTe) as fluorescent elements. HfO_2 has been chosen as the dielectric material for the antennas because it combines a relatively large refractive index <cit.> (n=1.9) with very small absorption losses<cit.> at the emission wavelength of the quantum dots.
The fabrication scheme is shown in Fig. <ref> and starts by performing electron beam lithography (EBL) on a standard microscope cover glass with refractive index n_g=1.52:
The sample substrate is a microscope cover glass coated with an 8-nm thick layer of Indium Tin oxide (ITO) to provide sufficient conductivity for the EBL process. As a lithography resist, we use a double layer of Poly(methyl methacrylate) (PMMA) spin-coated onto the substrate. The sublayers consist of 260 nm PMMA with 600 k molar mass and 200 nm PMMA with 950 k molar mass. The geometries of the antennas are written with a standard EBL system. A 180 nm thick film of HfO_2 is evaporated by electron-beam evaporation. During the deposition, the temperature of the sample is kept below 100 °C. The PMMA template and the residual HfO_2 are removed in a lift-off process, where the sample is submerged in 60 °C warm N-Methyl-2-pyrrolidone (NMP) for 3.5 h.
An electron micrograph of one of our dielectric antennas is shown in Fig. <ref>b. It consists of two 180 nm thick HfO_2 elements: the reflector has a footprint of 180 nm × 785 nm and the director of 2200 nm × 600 nm. They are separated by a 260 nm wide feed gap.
Colloidal semiconductor quantum dots (CdSeTe quantum dots, ZnS shell, Qdot 800 Carboxyl Quantum Dots, Thermo Fisher Scientific) with an emission wavelength of λ_QD = 780 nm are used as the feed element of the hybrid dielectric nanoantenna.
The quantum dots (QDs) are coated with a polymer providing carboxyl surface groups and come in a pH-buffered aqueous solution.
They are precisely deposited from an aqueous solution into the feed gap of the antenna with the help of a second lithography step:
The antenna sample is coated with a new PMMA layer and a second EBL step is applied.
The 150 nm × 150 nm large area centered in the feed gap of each antenna, where QDs shall be deposited, is defined by exposure with the electron beam.
After development, the PMMA film with the holes serves as a template for the subsequent surface functionalization. For this purpose, the sample is placed for an hour in a solution of 10 % (3-Aminopropyl)triethoxysilane (APTES) in isopropyl alcohol to silanize the ITO layer in the exposed patches. Next, 1-Ethyl-3-(3-dimethylaminopropyl) carbodiimide (EDC) is added to the QD solution and the substrate is immersed for two hours with constant stirring in this solution. EDC acts as an activating agent that mediates the link between the carboxyl surface groups of the QDs and the silanized substrate. After rinsing the substrate with deionised water, the PMMA mask is finally removed in a second lift-off process and the QDs stick to the modified surface areas in the feed gaps of the antennas.
The operating principle of the hybrid nanoantenna is shown in Fig. <ref>a and can be qualitatively understood as follows: The fluorescence of the quantum dots excites a leaky mode in the director by end-fire coupling. Light propagating along the director is continuously coupled to radiating modes in the substrate and emitted into the glass under an angle sin( θ _beam)=β/k_g relative to the substrate normal, i.e., the optical axis.
Hence, when designing the antenna for a specific emission angle θ _beam, one has to consider both the geometry and the refractive index of the director as well as the refractive index of the substrate.
Obviously, the condition β < k_g must be met, since otherwise the director acts as a simple ridge waveguide and no leaky-wave emission occurs.
Emission into the air is suppressed because the phase velocity of the guided wave is smaller than that of light in air. To increase the gain of the antenna, the reflector redirects fluorescence emitted in the backward direction.
This qualitative explanation can serve as a starting point for designing an antenna with a specific emission angle θ _beam.
In the first step, the director cross section is chosen such that the leaky mode in the director has the appropriate propagation constant β. For this purpose, it is sufficient to perform a modal analysis with a 2D eigenvalue solver.
Next, 3D numerical calculations are used to iteratively improve the directivity of the antenna by varying the other geometry parameters, i.e., the length of the director, the size and position of the reflector.
In the optical experiments, a blue pump laser (λ = 450 nm) is focused by a high-numerical-aperture objective (100× magnification, NA = 1.49) through the substrate onto a single antenna to excite the quantum dots in the feed gap.
The fluorescence emitted by this hybrid antenna is collected with the same objective and separated from reflected pump light by a dichroic mirror and a series of optical filters.
For our aplanatic objective lens, the spatial intensity distribution in the back-focal plane of the objective is related to the angular distribution of the collected light by the sine condition. A lens creates a real image of the back-focal plane on a scientific complementary metal-oxide-semiconductor (sCMOS) camera.
Thus, an emission angle θ, measured with respect to the optical axis, corresponds to a distance ρ=√(x_cam^2+y_cam^2) from the optical image center:
ρ=κsin(θ) ,
where κ is the conversion factor.
The largest angle θ_NA that can still be collected with the objective, and hence be observed on the camera, corresponds to the radius of a ring with ρ_NA=κ NA/n_g.
This relation is used with the known NA to determine the conversion factor κ.
Antenna data is commonly represented in spherical coordinates (θ, φ), where θ is the polar and φ the azimuthal angle.
In our analysis, we choose the orientation such that the antenna axis points in the (θ=90^∘, φ=0^∘) direction and the optical axis corresponds to the (θ=0^∘) direction. The data is presented in this letter with a linear θ axis and not in the pseudo momentum space of the camera chip.
So, the transformation from the x_cam and y_cam coordinates of the camera chip to spherical coordinates reads:
θ = arcsin(√(x_cam^2+y_cam^2)/κ),
φ = arctan( y_cam/x_cam) .
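For completeness, the mapping from camera coordinates to emission angles can be written as a short routine. This is a minimal sketch of the relations above; the value of ρ_NA used to calibrate κ is a hypothetical pixel radius, not a measured one.

```python
import numpy as np

def camera_to_angles(x_cam, y_cam, kappa):
    """Map centered back-focal-plane coordinates to emission angles.

    theta = arcsin(rho / kappa), phi = atan2(y, x), both in degrees.
    """
    rho = np.hypot(x_cam, y_cam)
    ratio = np.clip(rho / kappa, 0.0, 1.0)   # guard against points outside the NA ring
    theta = np.degrees(np.arcsin(ratio))
    phi = np.degrees(np.arctan2(y_cam, x_cam))
    return theta, phi

# calibration: the NA ring at rho_NA corresponds to sin(theta_NA) = NA / n_glass,
# so kappa = rho_NA * n_glass / NA; rho_NA below is a hypothetical pixel value.
NA, n_glass, rho_NA = 1.49, 1.52, 980.0
kappa = rho_NA * n_glass / NA
print(camera_to_angles(500.0, 120.0, kappa))
```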
To investigate the polarization of the antenna signal, we place a linear polarizer as an analyzer in front of the camera.
Figure <ref>a depicts the normalized angular intensity distribution emitted by the hybrid dielectric nanoantenna shown above. Here, the analyzer axis is set perpendicular to the antenna axis, i.e., we record the emission of a TE-polarized leaky mode (see inset).
The hybrid antenna shows a highly directional emission with a strong main lobe at (θ_max=70^∘, φ_max=0^∘).
This lobe has a full width at half maximum of Δθ_max=(9 ± 2)^∘ and Δφ_max=(24 ± 4)^∘. About 6 % of the total collected intensity is confined in the main lobe. Additional concentric side lobes around the main lobe are visible.
A reference measurement (not shown) with a bare quantum dot sample indicates that the weak circular feature at θ≈θ_c=41.1^∘ can be attributed to uncoupled quantum dots, which preferentially emit at the critical angle between air and glass <cit.>.
The directivity D of an antenna is defined as the ratio of the peak intensity and the intensity averaged over all directions as observed in the far field <cit.>.
The collection angle in our experiment is limited by the NA of the microscope objective, i.e., light emitted by an angle larger than θ_NA=79^∘ is not detected.
As a result, a part of the intensity distribution is cut off.
With this restriction in mind, the directivity of the antenna over the measured part of the distribution can be estimated to be D = 12.5 dB.
We additionally use the front-to-back ratio (F/B), defined <cit.> as the intensity ratio between the maximum at (θ_max, φ_max) and the opposing point (θ_max, φ_max+180^∘), to quantify the directional performance of our hybrid antenna.
The F/B value of the dielectric nanoantenna measured here is 12 dB.
The angular intensity distribution for the analyzer axis parallel to the antenna is shown in Fig. <ref>b. It is normalized to the same value as the data discussed above. The peak intensity as well as the directivity (D = 9 dB) are in this case smaller than those recorded for the perpendicular analyzer setting (compare Fig. <ref>a and b).
A plausible explanation for these observations is that the coupling of the quantum dots to the TM-polarized leaky mode is less efficient. This interpretation is consistent with numerical calculations.
To support our experimental findings, numerical calculations based on finite integration technique (FIT) using CST Microwave Studio were performed <cit.>.
A single dipole in the feed gap served as the fluorescent element and three different perpendicular dipole orientations along the coordinate axes (see Fig.<ref>a) were assumed in successive calculations.
For each dipole orientation, the far-field intensities for the two analyzer settings used in the experiments were evaluated separately.
Finally, the intensities of the three dipole-orientations are summed for each analyzer setting.
With this procedure, we simulate the ensemble of QDs with random dipole orientations as used in the experiment.
The calculated intensity distributions for both analyzer settings are shown in Fig. <ref>c and <ref>d.
They feature the same main and side lobes as the experimental data.
The corresponding directivities for the analyzer along the y- and x-directions are D = 14.05 dB and D = 8.347 dB, respectively.
A detailed analysis of the different dipole orientations shows that there are two main contributions to the main lobes: (i) the dipole oriented along the y-direction couples primarily to the TE leaky mode and (ii) the dipole oriented along the z-direction predominately excites the TM leaky mode.
A comparison of these two cases shows that the coupling efficiency to the TM mode is smaller, resulting in a lower directivity for light polarized along the antenna axis.
To assess the overall performance of the dielectric antenna, we have performed a more detailed analysis of the calculated angular intensity distribution. The overall emitted intensity, I_total, is obtained by integrating the far-field intensity over the full 4π solid angle. The intensity collected by the microscope objective, I_NA, is calculated by integrating the intensity within the collection angle of the objective, i.e., θ≤θ_NA. For the dipoles oriented along the x-, the y- and the z-direction, we find that the ratio I_NA/I_total is 73 %, 80 % and 91 %, respectively. These values suggest that a large fraction of the overall intensity is indeed observed in the experiment. A corresponding analysis shows that the main lobe contains 12 % of the intensity collected by the microscope objective, I_NA (averaged over the three dipole orientations). This value is approximately twice as large as the measured value. We attribute this to the fact that in the experiment not all QDs couple ideally to the antenna mode due to a displacement from the optimized position assumed in the calculations. Moreover, the electron micrograph of the antenna (see Fig. <ref>b) reveals that the sidewalls of the director are not perfectly smooth. Hence, scattering from this surface roughness constitutes another loss channel for our antennas.
From the calculated field distributions, we determined<cit.> the Purcell factor to be F_P=1.02.
A Purcell factor of order unity is not unexpected for the considered antenna design. First, the antenna is broadband (see below), i.e., it has a small quality factor. Secondly, the electromagnetic near field is not confined to a deep sub-wavelength volume. Moreover, the directional emission requires destructive interference for one-half space which tends to reduce the Purcell factor<cit.>.
A main advantage of non-resonant antennas is their high bandwidth and robustness against fabrication imperfections.
Hence, we anticipate that the variation of the antenna dimensions will result in a different beam direction but not a total loss of the antenna's functionality.
We measured the angular intensity distribution of antennas whose lateral dimensions (footprints of the director and the reflector as well as the gap size) were scaled by factors of 0.8 and 1.4, respectively, relative to the original design. The height was not scaled. As anticipated, both antennas still show directional emission (see Fig. <ref>a and b).
This behavior agrees well with the numerical calculations, which predict a plateau of high directivities for a broad range of the widths and lengths around the original design <cit.>.
For instance, a reduction of the gap size by 200 nm decreases the directivity by only 2 dB. Moreover, the directivity does not critically depend on the exact position of the dipole. Numerical calculations show that the directivity stays above 10 dB if the dipole is located within a 150 nm × 150 nm large rectangle which includes the optimal dipole position. This rectangle corresponds to the patch defined in the second lithography step in our sample fabrication process.
To further substantiate the claim of broadband operation, additional numerical calculations were performed in which we varied the operation frequency of the exciting dipole. All other parameters were kept fixed. The resulting directivity as a function of excitation wavelength is shown in Fig. <ref>a.
These calculations clearly support our claim: the directivity is larger than 10 dB in the wavelength range from 500 nm to 1200 nm.
Our design is not only robust against deviations of the fabricated geometry from the specifications but also tolerates variations of the refractive index of the reflector and the director without reoptimization of the geometry. Figure <ref>b depicts the calculated directivity vs refractive index of the two elements for a y-oriented dipole. The geometry parameters have not been changed. The directivity takes its maximum value for the refractive index n=1.9, i.e, the refractive index for which the antenna has been designed. Notably, the directivity stays above 10 dB even for considerable variations of n without accompanying optimization of the geometry.
It is instructive to compare the performance of our antenna with prominent previous work.
As stated above, the front-to-back ratio F/B of our dielectric nanoantenna is 12 dB. This value is quite competitive in comparison with highly directive plasmonic nanoantennas. For instance, in the seminal work on plasmonic Yagi-Uda antennas an F/B value of 6 dB has been reported<cit.>.
Another important parameter is the photon collection efficiency, which is defined as the fraction of the total emitted power that is captured by the used far-field collection optics.
Lee et al. <cit.> reported on a 96 % collection efficiency for a planar dielectric antenna and a NA=1.65 microscope objective. From the numerical calculations, we determine a collection efficiency of 81 % for our system (dielectric antenna and NA=1.49 microscope objective).
When comparing these values one should keep in mind that the two antennas give rise to quite different angular emission patterns. The emission of the planar dielectric antenna is evenly distributed over a ring centered around the optical axis while our hybrid dielectric antenna features a single pronounced main lobe. Moreover, our antenna selectively couples to emitters in the feed gap while in the case of the planar dielectric antenna the lateral position of the emitter is non-critical. Whether this is an advantage or a disadvantage depends on the respective experiment.
Another promising approach to achieve efficient broadband emission into a free-space beam is to directly embed the emitters either into a microcavity <cit.> or into a high dielectric taper structure <cit.>. Unfortunately, this approach is not compatible with all types of quantum emitters, e.g., it is not straightforward to incorporate dye molecules or nanocrystals in these epitaxially grown structures. For such emitters, our dielectric antenna might be a promising alternative.
In conclusion, we have fabricated and characterized hybrid dielectric nanoantennas for the optical regime.
The antennas exhibit highly directional emission.
Experiments with different antenna sizes indicate the broadband operation of our nanoantenna design.
These characteristics make the hybrid antenna a promising candidate for future applications. We envision that the dielectric antenna in combination with a single quantum emitter may be used as a highly directional single-photon source without inherent losses. By placing the dielectric antenna into a liquid crystal cell, the beam direction can potentially be tuned electrically.
The authors declare no competing financial interest.
§ ACKNOWLEDGEMENTS
S.L. and M.P. acknowledge financial support through DFG TRR 185 and by the German Federal Ministry of Education and Research through the funding program Photonics Research Germany (project 13N14150). A.H., J.F., and T.Z. acknowledge financial support through DFG TRR 142 and DFG GRK 1464.
arXiv:1701.07864v2 [math.NT], 26 Jan 2017
Congruences for modular forms mod 2 and quaternionic S-ideal classes
Kimball Martin
arXiv:1701.07475v2 [math.OC], 25 Jan 2017

Projected Primal-Dual Gradient Flow of Augmented Lagrangian with Application to Distributed Maximization of the Algebraic Connectivity of a Network

Han Zhang (hanzhang@kth.se), Jieqiang Wei (jieqiang@kth.se), Peng Yi (peng.yi@utoronto.ca), Xiaoming Hu (hu@kth.se)

Department of Mathematics, KTH Royal Institute of Technology, SE-100 44, Stockholm, Sweden
Department of Automatic Control, KTH Royal Institute of Technology, SE-100 44, Stockholm, Sweden
Department of Electrical and Systems Engineering, Washington University in St. Louis, USA

This work is supported by China Scholarship Council.

Keywords: Projected Dynamical Systems, semi-definite programming, distributed optimization
In this paper, a projected primal-dual gradient flow of the augmented Lagrangian is presented to solve convex optimization problems that are not necessarily strictly convex.
The optimization variables are restricted by a convex set with a computable projection operation on its tangent cone, as well as by equality constraints.
As a supplement to the analysis in <cit.>, we show that the projected dynamical system converges to one of the saddle points and hence finds an optimal solution.
Moreover, the problem of distributedly maximizing the algebraic connectivity of an undirected network by optimizing the port gains of each node (base station) is considered. The original semi-definite programming (SDP) problem is relaxed into a nonlinear programming (NP) problem, which is then solved by the aforementioned projected dynamical system. Numerical examples show the convergence of the algorithm to one of the optimal solutions. The effect of the relaxation is illustrated empirically with numerical examples. A methodology is presented so that the number of iterations needed to reach the equilibrium is reduced. The complexity per iteration of the algorithm is illustrated with numerical examples.
§ INTRODUCTION
Recently, owing to rising interest in distributed optimization, there have been a number of advances in the field of optimization that involve using continuous-time dynamical systems to solve smooth convex problems, such as <cit.>, <cit.>, <cit.>, etc., as well as nonsmooth convex problems <cit.>.
Though it seems to be quite different from the classical iterative optimization methods, it has been found that classical methods such as steepest descent and proximal point method are nothing but forward Euler and backward Euler discretization of the same saddle point dynamics <cit.>. On the other hand, continuous-time algorithms enable many powerful mathematical tools to analyze the convergence of the algorithms. Hence studying continuous-time algorithms provides us with more insights about the theory behind some iterative algorithms as well as more ideas of designing them.
When solving a convex minimization problem with strong duality, it is well-known that the optimal solution is the saddle point of the Lagrangian. Hence
it is natural to consider the gradient flow of Lagrangians (also known as saddle point dynamics) where the primal variable follows the negative gradient flow while the dual variable follows the gradient flow.
Gradient flow of Lagrangians is first studied by <cit.>, <cit.> and has been revisited by <cit.>.
<cit.> studies the case of strictly convex problems and provides methodologies to transform non-strictly convex problems to strictly convex problems to fit the framework. The convergence is shown by employing the invariance principle for hybrid automata. <cit.> studies the same strictly convex problem from the perspective of projected dynamical systems and is able to show the convergence by a LaSalle-like invariant principle for Carathéodory solutions. Instead of considering discontinuous dynamics, <cit.> proposes a smooth vector field for seeking the saddle points of strictly convex problems. <cit.> considers a strictly convex problem with equality constraints and with inequality constraints respectively. Saddle point dynamics is also used therein, however, it is worth noticing that their problem is still strictly convex. When they consider the problem with inequality constraints, logarithmic barrier function is used. Though considering nonsmooth problems, <cit.> uses the projected saddle point dynamics of augmented Lagrangian whose equality constraint is the variable consensus constraint, and can be viewed as a special case of our problem.
Instead of using the continuous-time saddle point dynamics, an iterative distributed augmented Lagrangian method is developed in <cit.>. In a recent work <cit.> and its conference version <cit.>, the authors consider the nonsmooth case of projected saddle point dynamics and the dynamics are the same as the ones in the current paper when the objective function is smooth.
In this paper, we will focus on maximizing network algebraic connectivity distributedly. In <cit.>, the authors maximize the algebraic connectivity of a mobile robot network distributedly. The authors use first-order Taylor expansion to approximate the original non-convex problem and get a convex problem. A more general linear dynamics are considered and a two-step algorithm is proposed to solve the problem distributedly. It is shown in <cit.> that the algebraic connectivity is monotonically increasing with the algorithm, while the convergence to one optimal solution is not explicitly given. <cit.>,<cit.> and <cit.> focus on assuring the connectivity distributedly, while the algebraic connectivity maximization is not considered.
The main contribution of this paper is as follows. As a supplement to <cit.> and its conference version <cit.>, we propose a novel line of analysis regarding the convergence of the dynamical system and reach comparable results.
Moreover, the problem of distributedly maximizing the algebraic connectivity of an undirected network by adjusting the "port gains" of each node (base station) is considered. It is worth noticing that the problem is motivated by a physical system, and the goal is to enable each base station to compute its own optimal port gains using only its neighbours'
information, the total number of nodes N, and the information belonging to itself; one cannot "design" the communication network according to the structure of the problem or the algorithm (see, for example, <cit.>).
We solve the original problem, which is an SDP, by relaxing it into an NP problem. The NP problem is not strictly convex; hence we adapt the projected saddle point dynamics proposed in this work to solve the aforementioned NP problem. Numerical examples show that the algorithm converges to one of the optimal solutions.
§ PRELIMINARIES AND NOTATIONS
We denote 1=11^T as an N-dimensional all-one matrix, where 1 is an N-dimensional all-one vector. The element located in the ith row and jth column of a matrix A is denoted as [A]_ij. If the matrix A_1-A_2 is positive semi-definite, this is denoted as A_1≽ A_2. We use ‖·‖ to denote the 2-norm of vectors. |S| denotes the cardinality of the set S, and any notation with the superscript * denotes an optimal solution to the corresponding optimization problem. tr(·) denotes the trace of a matrix. ⟨·,·⟩_2 denotes the inner product in Euclidean space and ⟨ A_1,A_2⟩_M=tr(A_1A_2) denotes the inner product in 𝒮^n, the Hilbert space of n× n symmetric matrices.
Assume K⊂ℝ^n is a closed and convex set. The projection of a point x onto the set K is defined as P_K(x)=argmin_y∈ K‖ x-y‖. For x∈ K, v∈ℝ^n, the projection of the vector v at x with respect to K is defined as (see <cit.>,<cit.>) Π_K(x,v)=lim_δ→ 0(P_K(x+δ v)-x)/δ=P_T_K(x)(v), where T_K(x) denotes the tangent cone of K at x. The interior, the boundary and the closure of K are denoted as int(K), ∂ K and cl(K), respectively.
The set of inward normals of K at x is defined as n(x)={γ | ‖γ‖=1, ⟨γ,x-y⟩_2≤ 0, ∀ y∈ K }, and Π_K(x,v) fulfills the following lemma:
If x∈ int(K), then Π_K(x,v)=v; if x∈∂ K, then Π_K(x,v)=v+β(x)n^*(x), where n^*(x)=argmax_n∈ n(x)⟨ v,-n⟩_2 and β(x)=max{0,⟨ v,-n^*(x)⟩_2}.
Let F be a vector field with F:K↦ℝ^n; then the projected dynamical system is given by ẋ=Π_K(x,F(x)).
Note that the right hand side of above dynamics can be discontinuous on the ∂ K. Hence given an initial value x_0∈ K, the system does not necessarily have a classical solution. However, if F(x) is Lipschitz continuous, then it has a unique Carathéodory solution that continuously depends on the initial value <cit.>.
Now we introduce some basic knowledge about SDP. An SDP problem in standard form can be expressed as
maximize ∑_i=1^m c_ix_i
subject to ∑_i=1^m x_iA_i≼ S,
where A_i,S∈𝒮^N.
Introducing the Lagrangian multiplier Φ≽ 0, the Lagrangian function of (<ref>) can be written as
ℒ(x,Φ)=-∑_i=1^m c_ix_i+tr(Φ(∑_i=1^m x_iA_i-S))
Taking the gradient of the Lagrangian function, together with the constraint and the complementary slackness condition, we get its KKT conditions
⟨ A_i,Φ⟩_M=c_i, ∑_i=1^mx_iA_i≼ S,
Φ≽ 0, ⟨∑_i=1^mx_iA_i-S,Φ⟩_M=0.
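As a side note, a standard-form instance of this SDP can be cross-checked with an off-the-shelf solver. The sketch below is illustrative only and assumes the cvxpy package; the data A_i, S and c are arbitrary choices (with c_1 > 0 so that this particular instance stays bounded), and the dual variable of the PSD constraint plays the role of Φ in the KKT conditions above.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
n, m = 4, 3
A = [np.eye(n)]
for _ in range(m - 1):
    M = rng.standard_normal((n, n))
    A.append((M + M.T) / 2)            # symmetric A_i
S = 5.0 * np.eye(n)
c = np.array([1.0, 0.1, -0.2])         # c_1 > 0 keeps this instance bounded

x = cp.Variable(m)
psd = (S - sum(x[i] * A[i] for i in range(m)) >> 0)   # sum_i x_i A_i <= S
prob = cp.Problem(cp.Maximize(c @ x), [psd])
prob.solve()

Phi = psd.dual_value                   # candidate multiplier, Phi >= 0
print("optimal value:", prob.value)
print("stationarity residuals tr(A_i Phi) - c_i:",
      [np.trace(Phi @ A[i]) - c[i] for i in range(m)])
```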
§ PROBLEM FORMULATION AND PROJECTED SADDLE POINT DYNAMICS
In this section, we consider the following optimization problem defined on ℝ^n:
minimize_x∈ K f(x)
subject to Ax-b=0,
where f:ℝ^n↦ℝ and A∈ℝ^m× n. K is a convex set such that calculating the projection onto its tangent cone is computationally cheap. f(x) is a convex function, but not necessarily strictly convex. It is also assumed that the gradient of f(x) is locally Lipschitz continuous and that Slater's condition holds for (<ref>). Hence strong duality holds for (<ref>).
The Lagrangian ℒ:K×ℝ^m↦ℝ for the problem (<ref>) is given by
ℒ(x,v)=f(x)+v^T(Ax-b),
where v∈ℝ^m is the Lagrangian multiplier of the constraint Ax-b=0. Since strong duality holds for (<ref>), then (x^*,v^*) is a saddle point of ℒ(x,v) if and only if x^* is an optimal solution to (<ref>) and v^* is optimal solution to its dual problem.
The augmented Lagrangian ℒ_𝒜:K×ℝ^m↦ℝ for (<ref>) is given by ℒ_𝒜(x,v)=f(x)+v^T(Ax-b)+ρ/2(Ax-b)^T(Ax-b), where ρ>0 is the damping parameter that will help to suppress the oscillation of x during optimization algorithms. Without loss of generality, we choose ρ=1.
We propose to find the saddle point of (<ref>) via the saddle point dynamics projected on the set K, i.e.,
ẋ =Π_K(x,-∇ f(x)-A^Tv-A^T(Ax-b))
=Π_K(x,-∂ℒ_𝒜(x,v)/∂ x),
v̇ =Ax-b=∂ℒ_𝒜(x,v)/∂ v.
Note that it is assumed that ∇ f(x) is locally Lipschitz continuous, therefore there is a unique Carathéodory solution for the dynamics (<ref>).
Indeed, one could choose to project the negative gradient flow of the objective function onto the entire feasible set. However, this increases the computational complexity, since it adds more constraints when computing the projection. Recall that we assume calculating the projection onto T_K(x) is computationally cheap; hence the primal-dual gradient flow is used.
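As an illustration of how (<ref>) can be integrated numerically, the following minimal sketch applies a forward-Euler discretization to a toy instance where K is the nonnegative orthant, for which the projection onto the tangent cone is componentwise: the i-th component of the vector field is zeroed whenever x_i = 0 and the field points outward. The objective, constraint data and step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy data: f(x) = 0.5*||Cx - d||^2 is convex but not strictly convex
# whenever C is rank-deficient (here C is 3x4).
rng = np.random.default_rng(0)
C = rng.standard_normal((3, 4)); d = rng.standard_normal(3)
A = np.array([[1.0, 1.0, 1.0, 1.0]]); b = np.array([1.0])   # Ax = b

def grad_f(x):
    return C.T @ (C @ x - d)

def proj_tangent_nonneg(x, v, tol=1e-12):
    # Projection of v onto the tangent cone of the nonnegative orthant at x:
    # components with x_i ~ 0 and v_i < 0 are clipped to zero.
    w = v.copy()
    w[(x <= tol) & (v < 0.0)] = 0.0
    return w

x = np.full(4, 0.25); v = np.zeros(1); h = 1e-2   # step size
for _ in range(20000):
    r = A @ x - b
    dx = proj_tangent_nonneg(x, -grad_f(x) - A.T @ v - A.T @ r)
    dv = r
    x = np.maximum(x + h * dx, 0.0)   # clip tiny negatives so the iterate stays in K
    v = v + h * dv

print("x* ~", np.round(x, 4), "  ||Ax - b|| =", np.linalg.norm(A @ x - b))
```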
§ CONVERGENCE ANALYSIS
In this section, we analyse the convergence of (<ref>), starting with the analysis of the equilibrium points of (<ref>). <cit.> consider the nonsmooth case of projected saddle point dynamics, and their dynamics coincide with the ones in the current paper when the objective function is smooth. As a supplement, we propose a novel line of analysis regarding the stability of the dynamical system and reach comparable results.
(x^*,v^*) is a saddle point to (<ref>) if and only if it is an equilibrium of (<ref>).
Since strong duality holds for (<ref>), the optimality conditions become necessary and sufficient conditions. The optimality condition for (<ref>) is given by -∇ f(x^*)-A^Tv^*∈ N_K(x^*), Ax^*-b=0, <cit.>, which implies -∇ f(x^*)-A^Tv^*+A^T(Ax^*-b)∈ N_K(x^*), where N_K(x^*) denotes the normal cone of K at x^*.
This implies Π_K(x^*,-∇ f(x^*)-A^Tv^*-A^T(Ax^*-b))=0, therefore, (x^*,v^*) is an equilibrium point of (<ref>). On the other hand, if (x^*,v^*) is an equilibrium point of (<ref>), it must have -∇ f(x^*)-A^Tv^*+A^T(Ax^*-b)∈ N_K(x^*) and Ax^*-b=0, which implies the optimality condition.
Assume γ is a solution to ẋ=f(x), where f:ℝ^n↦ℝ^n. The omega-limit set is defined as
Ω(γ)={y∈ℝ^n | ∃{t_k}_k=1^∞⊂[0,∞), lim_k→∞t_k=∞, lim_k→∞γ(t_k)=y}.
(LaSalle invariance principle for Carathéodory solutions) Let 𝒮⊂ℝ^n be compact and invariant. Assume for each x_0∈𝒮, there exists a unique solution of ẋ=f(x) starting at x_0 and its omega-limit set is invariant. Let V:ℝ^n↦ℝ be a continuously differentiable map such that ℒ_fV(x)≤ 0, where ℒ_fV(x)=∂ V/∂ xf(x) denotes the Lie derivative along the vector field f(x). Then any solution of ẋ=f(x) starting in 𝒮 converges to the largest invariant set in cl({x∈𝒮|ℒ_fV(x)=0}).
Given an initial value (x(0),v(0)), where x(0)∈ K, the trajectory of the projected dynamical system (<ref>) asymptotically converges to one of the saddle points of (<ref>).
We use LaSalle invariance principle for Carathéodory solutions <cit.> to prove the proposition.
Suppose (x^*,v^*) is a saddle point of the Lagrangian (<ref>), namely, x^* is the optimal solution of (<ref>) and v^* is the optimal solution of its dual problem. Construct the following Lyapunov function
d(x,v)=1/2(‖ x-x^*‖^2+‖ v-v^*‖^2).
Note that d(x,v) is continuously differentiable and denote the right hand side of the dynamics (<ref>) as vector field F. The Lie derivative along the vector field f(x) of a function V(x) is defined as ℒ_fV(x)=∂ V(x)/∂ xf(x).
By the definition of saddle points, ℒ(x^*,v)≤ℒ(x^*,v^*)≤ℒ(x,v^*). The Lie derivative of d(x,v) along the vector field F is given by ℒ_Fd(x,v)=(x-x^*)^TΠ_K(x,-∇ f(x)-A^Tv-A^T(Ax-b))+(v-v^*)^T(Ax-b).
By Lemma <ref>, it holds that Π_K(x,-∇ f(x)-A^Tv-A^T(Ax-b))=-∇ f(x)-A^Tv-A^T(Ax-b)+β(x)n^*(x), where β(x)≥ 0 and ⟨ n^*(x),x-y⟩≤ 0, ∀ y∈ K. Since Ax^*-b=0, hence it follows that ℒ_Fd(x,v)=(x-x^*)^T(-∇ f(x)-A^Tv-A^T(Ax-b)+β(x)n^*(x))+(v-v^*)^T(Ax-b)≤ (x-x^*)^T(-∇ f(x)-A^Tv-A^T(Ax-b))+(v-v^*)^T(Ax-b)=-(x-x^*)^T∂ℒ(x,v)/∂ x-(Ax-b)^T(Ax-b)+(v-v^*)^T ∂ℒ(x,v)/∂ v.
Since ℒ(x,v) is convex with respect to x and concave with respect to v, it follows from the first order property of convex and concave function <cit.> that ℒ_Fd(x,v)≤ℒ(x^*,v)-ℒ(x,v^*)-(Ax-b)^T(Ax-b).
Since ℒ(x^*,v)≤ℒ(x,v^*), we have ℒ_Fd(x,v)≤ -(Ax-b)^T(Ax-b)≤ 0. Note that d(x,v) is convex and differentiable. By Proposition <ref> and using the result in <cit.>, we can conclude that any saddle point is Lyapunov stable. Recall that f(x) has a locally Lipschitz continuous gradient, hence the uniqueness of the Carathéodory solution to (<ref>) can be guaranteed.
Since d(x,v) is differentiable and by the definition of Carathéodory solution, x(t),v(t) are absolute continuous, d(x(t),v(t)) is differentiable almost everywhere with respect to t and d/dtd(x(t),v(t))=ℒ_Fd(x,v) holds almost everywhere on t≥ 0. Therefore d(x(t),v(t)) is continuous and non-increasing with respect to time.
Note that d(x,v) is radially unbounded, hence the set 𝒮̂={(x,v)∈ℝ^n×ℝ^m|d(x,v)≤ d(x(0),v(0))} is a compact invariant set for the system (<ref>).
The invariance of the ω-limit set <cit.> can be proved by using the same methodology as Lemma 4.1 in <cit.> which is based on the continuity and the uniqueness of the solution.
Now by LaSalle invariance principle for Carathéodory solutions <cit.>, the trajectory of (<ref>) converges to the largest invariant set in cl(Ω), where Ω={(x,v)∈𝒮̂|ℒ_Fd(x,v)=0}.
For (x^',v^')∈Ω, we have Ax^'-b=0 and ℒ(x^*,v^')=ℒ(x^',v^*)⇔ f(x^*)=f(x^')+v^*T(Ax^'-b)⇔ f(x^*)=f(x^').
Since x^* is an optimal solution to (<ref>), x^' is also an optimal solution to (<ref>).
Denote 𝒟={(x,v)∈𝒮̂ | x is an optimal solution to (<ref>)} and note that Ω⊂𝒟. Hence cl(Ω)⊂ cl(𝒟).
On the other hand, since the set of optimal solutions for a convex optimization problem is closed, 𝒟 is also closed and hence cl(Ω)⊂ cl(𝒟)=𝒟.
Denote ℳ as the largest invariant set in cl(Ω).
Recall that (<ref>) is not necessarily a strictly convex problem; in the non-strictly convex case, since there might be an infinite number of optimizers and they are not isolated, the trajectory of x(t) might still evolve within the optimal solution set. To show that the trajectory converges to a point, we need to show that if a trajectory starts in ℳ, it remains constant for all times; since the Carathéodory solution to the projected dynamics is unique and depends continuously on the initial value, it can be concluded from this fact that the trajectory asymptotically converges to a point.
Assume the initial value (x̂(0),v̅(0))∈ℳ⊂ cl(Ω); then x̂(0) is also an optimal solution to (<ref>). Hence there exists some v̂ such that (x̂(0),v̂) is a saddle point of (<ref>). Then the Lyapunov function can be constructed similarly as d̂(x,v)=1/2(‖ x-x̂(0)‖^2+‖ v-v̂‖^2).
We have shown that for any arbitrarily chosen saddle point (x^*,v^*), d(x(t),v(t)) is non-increasing with respect to time; the same statement also holds for d̂(x(t),v(t)).
Furthermore, since (x̂(0),v̅(0))∈ℳ and ℳ is invariant, (x̂(t),v̅(t))∈ℳ, ∀ t≥ 0. This implies that x̂(t) is an optimizer for all t≥ 0, and hence d/dt v̅(t)=Ax̂(t)-b=0, ∀ t≥ 0. This in turn implies v̅(t)=v̅(0), ∀ t≥ 0, and hence d̂(x̂(t),v̅(t))=1/2‖x̂(t)-x̂(0)‖^2+1/2‖v̅(0)-v̂‖^2≤d̂(x̂(0),v̅(0))=1/2‖v̅(0)-v̂‖^2. This implies ‖x̂(t)-x̂(0)‖^2=0 and hence x̂(t)=x̂(0), ∀ t≥ 0. Therefore, any trajectory that starts in ℳ remains constant for all times, i.e., any point in ℳ is an equilibrium point. By Proposition <ref>, we can conclude that these points are saddle points of (<ref>).
We have shown that given an initial value (x(0),v(0)), where x(0)∈ K, the trajectory of (<ref>) asymptotically converges to a set ℳ whose elements are saddle points of (<ref>). Now we show the trajectory asymptotically converges to a point in ℳ by contradiction. To abbreviate the notation, we denote η=(x^T,v^T)^T.
Suppose η(t) does not converge to a point in ℳ, namely, the trajectory's ω-limit set Γ(η(t)) is not a singleton. This means we can choose η̅_1,η̅_2∈Γ(η(t))⊂ℳ such that ‖η̅_1-η̅_2‖=ζ, ζ>0. Since η̅_1,η̅_2∈ℳ, they are saddle points, and we have shown that all saddle points are Lyapunov stable.
This means that there exists δ(ζ/2) such that if ‖η(T)-η̅_1‖<δ(ζ/2), then ‖η(t)-η̅_1‖<ζ/2, ∀ t≥ T. Since η̅_1 is an ω-limit point in Γ(η(t)), there exists such a T so that ‖η(T)-η̅_1‖<δ(ζ/2), and hence the trajectory can never leave the ζ/2-neighbourhood of η̅_1 after the time instant T. But η̅_2 is also an ω-limit point, so there must exist a sequence of points on the trajectory that tends to it. Hence we have a contradiction, and (x(t),v(t)) converges to a saddle point in ℳ.
§ DISTRIBUTED ALGEBRAIC CONNECTIVITY MAXIMIZATION
In this section, we apply the aforementioned algorithm to maximize the algebraic connectivity of a network in a distributed manner. The problem is first formulated as a Semi-definite Programming (SDP) problem. With an equivalent formulation of the original SDP problem, the problem is relaxed into a Nonlinear Programming (NP) problem in order to apply the aforementioned projected saddle point dynamics.
§.§ Motivation and Modeling
The problem motivates from a physical communication network. Consider an undirected communication network 𝒢(𝒱,ℰ,𝒲) whose nodes i∈𝒱={1,2,⋯,N} are homogeneous base stations and can control their communication port gains w_k^(i)∈𝒲. (To abbreviate the notation, the edges are labelled with numbers.) The set of neighbours of node i is denoted as 𝒩(i). The set of edges (communication channel) adjacent to node i is denoted as ℰ(i) and ℰ=⋃_i∈𝒱ℰ(i).
As illustrated in Fig. <ref>, the communication gain (strength) on each link k∈ℰ is the sum of the port gains w_k^(i) and w_k^(j), (i,j)=k∈ℰ contributed by the two end nodes connected by the edge. It is assumed that each agent can only get access to the information of its neighbours as well as the information of itself. Our goal is to develop a method so that each base station can adjust its own port gains only according to its neighbours' information, the number of nodes N and the information belonging to itself, so that the algebraic connectivity of the total communication network is maximized.
The graph 𝒢(𝒱,ℰ,𝒲) we consider is undirected, and hence the weighted Laplacian matrix L_w is symmetric and can be expressed as
[L_w]_ij =
∑_l w_il, if i=j and (i,l)∈ℰ,
-w_ij, if i≠ j and (i,j)∈ℰ,
0, otherwise,
and L_w=∑_k∈ℰw_kE_k, where k is the label of the edges, and 0≤ w_k∈𝒲, ∀ k∈ℰ are the edge-weights. If node i and j are connected via edge k, then [E_k]_ii=[E_k]_jj=1, [E_k]_ij=[E_k]_ji=-1, and the other elements of E_k are zero. If the graph is connected, then the eigenvalues of L_w satisfy: 0=λ_1<λ_2≤⋯≤λ_N and λ_2 is the algebraic connectivity of 𝒢(𝒱,ℰ,𝒲). Let L=∑_k∈ℰE_k be the unweighted Laplacian matrix of the graph. We suppose L has only one zero eigenvalue, namely, the unweighted graph 𝒢(𝒱,ℰ) is connected.
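As an illustration of this construction, the following sketch assembles L_w=∑_k∈ℰ w_kE_k from an edge list and evaluates the algebraic connectivity λ_2 numerically; the graph and the weights are arbitrary illustrative choices.

```python
import numpy as np

def edge_matrix(N, i, j):
    """E_k for edge k = (i, j): [E]_ii = [E]_jj = 1, [E]_ij = [E]_ji = -1."""
    E = np.zeros((N, N))
    E[i, i] = E[j, j] = 1.0
    E[i, j] = E[j, i] = -1.0
    return E

N = 4
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]    # a 4-cycle, for illustration
w = np.array([0.5, 1.0, 0.5, 1.0])           # nonnegative edge weights

L_w = sum(w_k * edge_matrix(N, i, j) for w_k, (i, j) in zip(w, edges))
eigvals = np.linalg.eigvalsh(L_w)            # sorted ascending; the first is ~0
print("algebraic connectivity lambda_2 =", eigvals[1])
```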
It is assumed that the total amount of port gain that each base station can provide is fixed. Without loss of generality, we assume ∑_k∈ℰ(i)w_k^(i)=1. Note that this differs from the formulation in <cit.>, where ∑_k∈ℰ w_k=1 is assumed, which, written in the form of our model, reads ∑_i∈𝒱∑_k∈ℰ(i)w_k^(i)=1. In other words, it implies that the budget of the port gains in the entire network is a constant and the power can be allocated differently to each node; hence each node is no longer homogeneous. Therefore, the algebraic connectivity maximization of an edge-weighted Laplacian matrix can be formulated as the following SDP problem:
maximize_{λ_2, μ, {w_k^(i)}} λ_2
subject to λ_2I-μ1≼∑_i∈𝒱∑_k∈ℰ(i)w_k^(i)E_k,
∑_k∈ℰ(i)w_k^(i)=1,
w_k^(i)≥ 0, ∀ k∈ℰ(i), ∀ i∈𝒱. (PC)
In (<ref>), the variable μ is used to shift the zero eigenvalue of L_w with its eigenvector 1. When the optimal value is reached, λ_2^* would be the smallest eigenvalue of ∑_i∈𝒱∑_k∈ℰ w_k^(i)*E_k+μ^*1. Since for any positive semi-definite matrix G, it holds that ξ I≼ G, where ξ is the smallest eigenvalue of G, we get the above constraints.
Moreover, since λ_2^* is the second smallest eigenvalue of ∑_i∈𝒱∑_k∈ℰ w_k^(i)*E_k, it is a continuous function with respect to {w_k^(i)*}; and {w_k^(i)*} lives in a compact set, hence the optimal value of (<ref>) can be attained. On the other hand, we can choose λ_2 small enough and μ large enough to make the first matrix inequality constraint hold strictly; hence strong duality holds for (<ref>).
§.§ Problem Equivalence
In order to solve (<ref>) distributedly, we consider the following problem
maximize_{ {λ_2^(i)}, {μ^(i)}, {w_k^(i)}, {Z^(i)} } ∑_i∈𝒱λ_2^(i)
subject to λ_2^(i)I-μ^(i)1-∑_k∈ℰ(i)w_k^(i)E_k+∑_j∈𝒩(i)(Z^(i)-Z^(j))≼ 0,
∑_k∈ℰ(i)w_k^(i)=1,
w_k^(i)≥ 0, ∀ k∈ℰ(i), ∀ i∈𝒱, (PD)
where Z^(i), ∀ i∈𝒱, are symmetric matrices. They can be written as Z^(i)=∑_l=1^N(N+1)/2z^(i)_lB_l, where z^(i)_l∈ℝ is the matrix entry and B_l is the basis matrix for 𝒮^N, both labeled by l. To be more precise, if z_l^(i) is the entry located in the pth row and qth column of Z^(i), then [B_l]_pq=[B_l]_qp=1 and the other entries of B_l remain zero. The purpose of the introduction of Z^(i) is to derive the consensus condition (<ref>) in the KKT conditions.
The next proposition describes the relationship between (<ref>) and (<ref>).
If {λ_2^(i)*, μ^(i)*,{w_k^(i)*}, Z^(i)*}, i∈𝒱 solves (<ref>), then λ_2^*=∑_i∈𝒱λ_2^(i)*, μ^*=∑_i∈𝒱μ^(i)*, {w_k^(i)*} solves (<ref>).
On the other hand, if there exists an optimal solution for (<ref>), then there also exists an optimal solution for (<ref>).
The KKT conditions of (<ref>) for all i∈𝒱 are:
tr(Φ^*)=1, Φ^*≽ 0, φ^(i)*_k≥ 0, tr(1Φ^*)=0,
tr(E_kΦ^*)-v^(i)*+φ_k^(i)*=0, w_k^(i)*≥ 0,
φ_k^(i)*w_k^(i)*=0, ∀ k∈ℰ(i),
λ_2^*-∑_i∈𝒱∑_k∈ℰ(i)w_k^(i)*tr(E_kΦ^*)=0,
λ_2^*I-μ^*1- ∑_i∈𝒱∑_k∈ℰ(i)w_k^(i)*E_k≼ 0,
∑_k∈ℰ(i)w_k^(i)*=1,
while the KKT conditions of (<ref>) for all i∈𝒱 reads
tr(Φ^(i)*)=1,Φ^(i)*≽ 0,φ^(i)*_k≥ 0,tr(1Φ^(i)*)=0,
tr(E_kΦ^(i)*)-v^(i)*+φ_k^(i)*=0, w_k^(i)*≥ 0,
φ_k^(i)*w_k^(i)*=0, ∀ k∈ℰ(i),
λ_2^(i)*-∑_k∈ℰ(i)w_k^(i)*tr(E_kΦ^(i)*)
+∑_j∈𝒩(i)tr[(Z^(i)-Z^(j))Φ^(i)*]=0,
λ_2^(i)*I-μ^(i)*1- ∑_k∈ℰ(i)w_k^(i)*E_k
+∑_j∈𝒩(i)(Z^(i)*-Z^(j)*)≼ 0,
∑_k∈ℰ(i)w_k^(i)*=1, ∑_j∈𝒩(i)Φ^(i)*-Φ^(j)*=0,
where Φ^*, φ_k^(i)* and v^(i)* are the Lagrange multipliers corresponding to the matrix inequality constraint, inequality constraints and the equality constraint of (<ref>) respectively. Similarly, Φ^(i)*, φ_k^(i)* and v^(i)* are the Lagrange multipliers of (<ref>).
Since (<ref>) is convex, the KKT conditions (<ref>) become necessary and sufficient conditions for optimality. Hence {λ_2^(i)*, μ^(i)*,{w_k^(i)*}, Z^(i)*}, i∈𝒱, solves (<ref>) if and only if there exist Lagrange multipliers {Φ^(i)*}, {v^(i)*} and {φ^(i)*_k} such that (<ref>) holds.
Meanwhile, recall that {B_l} are basis matrices for 𝒮^N and since Φ^(i)*∈𝒮^N,∀ i∈𝒱, Φ^(i)* can be written as ∑_lϕ_l^(i)*B_l. Denote ϕ_l^*=[ϕ_l^(1)*,⋯,ϕ_l^(N)*]^T and (<ref>) can be written as Lϕ_l^*=0,∀ l, where L=∑_k∈ℰE_k is the unweighted Laplacian matrix. Since we assume that the graph is connected, then ϕ_l^*∈ ker(L)= span(1),∀ l and hence implies Φ^(i)*=Φ^(j)* for all i,j∈𝒱.
This means that (<ref>)-(<ref>) are the same as (<ref>)-(<ref>).
Further, by adding (<ref>) and (<ref>) over each node i∈𝒱, the terms tr[(Z^(i)-Z^(j))Φ^(i)] and Z^(i)*-Z^(j)* cancel. Denoting λ_2^*=∑_i∈𝒱λ_2^(i)*, μ^*=∑_i∈𝒱μ^(i)*, we get (<ref>). Since (<ref>) is convex, the KKT conditions are necessary and sufficient conditions for optimality. Hence the first part of the statement follows.
Now we show the second part of the statement. Suppose λ_2^*,μ^*,{w_k^(i)*} is an optimal solution to (<ref>); then there must exist Lagrange multipliers Φ^*, {v^(i)*} and {φ^(i)*_k} such that (<ref>) holds. Now choose {λ̂_2^(i)*,μ̂^(i)*,{ŵ_k^(i)*},Ẑ^(i)*} and Lagrange multipliers {Φ̂^(i)*,v̂^(i)*,{φ̂_k^(i)*}} such that ∑_i∈𝒱λ̂_2^(i)*=λ_2^*, ∑_i∈𝒱μ̂^(i)*=μ^*, ŵ_k^(i)*=w_k^(i)* and Φ̂^(i)*=Φ^*, v̂^(i)*=v^(i)*, φ̂_k^(i)*=φ_k^(i)*, ∀ k∈ℰ(i), ∀ i∈𝒱. The KKT conditions (<ref>)-(<ref>) and (<ref>) are trivially satisfied by the above construction. What remains to show is that there exists such {Ẑ^(i)*} so that (<ref>) and (<ref>) are satisfied.
We first show there exists {Ẑ^(i)*} such that (<ref>) is satisfied. Denote Â^(i)=λ̂_2^(i)*I-μ̂^(i)*1-∑_k∈ℰ(i)ŵ_k^(i)*E_k. Since λ_2^*,μ^*,{w_k^(i)*} satisfy (<ref>) and λ_2^*=∑_i∈𝒱λ̂_2^(i)*, μ^*=∑_i∈𝒱μ̂^(i)*, ŵ_k^(i)*=w_k^(i)*, we know that ∑_i∈𝒱Â^(i)≼ 0. By choosing P^(i)=-Â^(i) for i=1,⋯,N-1 and P^(N)=∑_i=1^N-1Â^(i), we have Â^(i)+P^(i)= 0 for i=1,⋯ N-1 and Â^(N)+P^(N)=∑_i=1^NÂ^(i)≼ 0. Therefore, we have Â^(i)+P^(i)≼ 0 for all i∈𝒱 and ∑_i∈𝒱P^(i)=0. What remains to show is that there exists {Ẑ^(i)*}, such that P^(i)=∑_j∈𝒩(i)(Ẑ^(i)*-Ẑ^(j)*) for all i∈𝒱. Recall that P^(i)=∑_lp^(i)_lB_l, Ẑ^(i)*=∑_lẑ^(i)*_lB_l and denote p_l=[p_l^(1),⋯,p_l^(N)]^T, ẑ_l^*=[ẑ_l^(1)*,⋯,ẑ_l^(N)*]^T. Since ∑_i∈𝒱P^(i)=0, it follows that 1^Tp_l=0 for all l. Therefore, p_l∈ ker(1^T)=Im(L), where L=∑_k∈ℰE_k. This implies p_l can be expressed as Lẑ_l^* for some ẑ_l^*, namely, there exists {Ẑ^̂(̂î)̂} such that P^(i)=∑_j∈𝒩(j)(Ẑ^(i)*-Ẑ^(j)*) for all i∈𝒱 and hence (<ref>) is satisfied.
Now we show that {λ̂_2^(i)*,μ̂^(i)*,Ẑ^(i)*} and {Φ̂^(i)*} chosen above satisfy (<ref>). Since Φ̂^(i)*=Φ^*, ŵ_k^(i)*=w_k^(i)* and P^(i)=∑_j∈𝒩(i)(Ẑ^(i)*-Ẑ^(j)*), the left hand side of (<ref>) can be written as λ̂_2^(i)*-∑_k∈ℰ(i)w_k^(i)*tr(E_kΦ^(i)*)+tr(P^(i)Φ^*).
Recall that P^(i)=-Â^(i)=-λ̂_2^(i)*I+μ̂^(i)*1+∑_k∈ℰ(i)ŵ_k^(i)*E_k,i=1,⋯ N-1. In view of (<ref>), for i=1,⋯,N-1, we have λ̂_2^(i)*-∑_k∈ℰ(i)w_k^(i)*tr(E_kΦ^*)-λ̂_2^(i)*tr(Φ^*)+μ̂^(i)*tr(1Φ^*)+∑_k∈ℰ(i)w_k^(i)*tr(E_kΦ^*)=0.
For i=N, P^(N)=∑_i=1^N-1Â^(i)=∑_i=1^N-1λ̂_2^(i)*I-∑_i=1^N-1μ̂^(i)*1-∑_i=1^N-1∑_k∈ℰ(i)ŵ_k^(i)*E_k. In view of (<ref>), (<ref>) and since ∑_i∈𝒱λ̂_2^(i)*=λ_2^*,∑_i∈𝒱μ̂^(i)*=μ^*, we have λ̂_2^(N)-∑_k∈ℰ(N)w_k^(N)*tr(E_kΦ^*)+∑_i=1^N-1λ̂_2^(i)*tr(Φ^*)-∑_i=1^N-1μ̂^(i)*tr(1Φ^*)-∑_i=1^N-1∑_k∈ℰ(i)w_k^(i)*tr(E_kΦ^*)=0.
Therefore, by the variable construction above, the second part of the statement follows.
§.§ Relaxing SDP into NP
(<ref>) can be solved distributedly by using a similar method as <cit.> when the graph is regular.
However, here we would like to consider general graphs, not only regular ones.
In order to apply the projected saddle point dynamics to solve (<ref>), the problem first needs to be relaxed into an NP problem. This is because (<ref>) is still an SDP problem, and its matrix inequality constraints would lead to positive semidefinite matrix Lagrange multipliers Φ^(i). This makes it hard to apply the saddle point dynamics in <cit.> to this problem, since by the definition of the projection operator Π_K, it is clear that Π_K:ℝ^n×ℝ^n↦ℝ^n. The projected saddle point dynamics is not defined on the cone of positive semidefinite matrices.
Now we introduce the convex function proposed in <cit.>, which can be used to approximate the largest eigenvalue of a symmetric matrix. Given X∈𝒮^N, the function f_ε:𝒮^N↦ℝ reads f_ε(X)=εln tr(e^X/ε)=εln[∑_i=1^Ne^λ_i(X)/ε], and its derivative with respect to X reads
∇_Xf_ε(X)=[∑_i=1^Ne^λ_i(X)/ε]^-1[∑_i=1^Ne^λ_i(X)/εu_iu_i^T],
where (λ_i(X),u_i) are eigen-pairs of X with u_i=1,∀ i.
It has been proved in <cit.> that
λ_max(X)≤ f_ε(X)≤λ_max(X)+εln N.
Hence when ε is sufficiently small, f_ε(X)≈λ_max(X).
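As a concrete illustration (our own sketch, not code from <cit.>), f_ε and its gradient can be evaluated from an eigendecomposition; subtracting the largest eigenvalue before exponentiating is a standard numerical-stability choice:

import numpy as np

def f_eps(X, eps):
    # f_eps(X) = eps * log(tr(exp(X/eps))), a smooth upper bound on lambda_max(X)
    lam = np.linalg.eigvalsh(X)
    m = lam.max()
    return m + eps * np.log(np.exp((lam - m) / eps).sum())

def grad_f_eps(X, eps):
    # gradient: softmax-weighted sum of the spectral projectors u_i u_i^T
    lam, U = np.linalg.eigh(X)
    w = np.exp((lam - lam.max()) / eps)
    w /= w.sum()
    return (U * w) @ U.T

# sanity check of lambda_max(X) <= f_eps(X) <= lambda_max(X) + eps*ln(N)
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
X = (A + A.T) / 2
lmax = np.linalg.eigvalsh(X).max()
assert lmax <= f_eps(X, 0.05) <= lmax + 0.05 * np.log(6) + 1e-12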
Consider the following NP problem:
{μ^(i)},{w_k^(i)},{Z^(i)}minimize ∑_i∈𝒱f_ε_i(X^(i))
subject to ∑_k∈ℰ(i)w_k^(i)=1
w_k^(i)≥ 0, ∀ k∈ℰ(i), ∀ i∈𝒱 (NP)
where X^(i)=-μ^(i)1-∑_k∈ℰ(i)w_k^(i)E_k+∑_j∈𝒩(i)(Z^(i)-Z^(j)) to abbreviate the notation.
It is clear that (<ref>) is a convex problem; Slater's condition also holds for (<ref>), and hence strong duality holds. Therefore, the KKT conditions become necessary and sufficient conditions for (<ref>).
The next proposition shows how good the approximation is.
Suppose ∑_i∈𝒱λ_2^(i)* is the optimal objective function value of (<ref>), then
-∑_i∈𝒱λ_2^(i)*≤∑_i∈𝒱f_ε_i(X^(i)*)≤-∑_i∈𝒱λ_2^(i)*+∑_i∈𝒱ε_iln N,
where X^(i)*=-μ^(i)*1-∑_k∈ℰ(i)w_k^(i)*E_k+∑_j∈𝒩(i)(Z^(i)*-Z^(j)*).
Moreover, suppose {μ̂^(i)*,{ŵ_k^(i)*},Ẑ^(i)*} is the optimal solution to (<ref>), then
-∑_i∈𝒱λ_2^(i)* ≤∑_i∈𝒱λ_max(X̂^(i)*)≤∑_i∈𝒱f_ε_i(X̂^(i)*)
≤ -∑_i∈𝒱λ_2^(i)*+∑_i∈𝒱ε_iln N,
where X̂^(i)*=-μ̂^(i)*1-∑_k∈ℰ(i)ŵ_k^(i)*E_k+∑_j∈𝒩(i)(Ẑ^(i)*-Ẑ^(j)*).
From the KKT condition (<ref>) of (<ref>), we know that λ_max(X^(i)*)≤ -λ_2^(i)*.
This implies
∑_i∈𝒱λ_max(X^(i)*)≤-∑_i∈𝒱λ_2^(i)*.
On the other hand, by Proposition <ref>, ∑_i∈𝒱X^(i)*=-∑_i∈𝒱μ^(i)*1-∑_i∈𝒱∑_k∈ℰ(i)w_k^(i)*E_k=-μ^*1-∑_i∈𝒱∑_k∈ℰ(i)w_k^(i)*E_k, where λ_2^*, μ^* and {w_k^(i)*} is the optimal solution to (<ref>).
We know from (<ref>) that -λ_2^*=λ_max(-μ^*1-∑_i∈𝒱∑_k∈ℰ(i)w_k^(i)*E_k)=λ_max(∑_i∈𝒱X^(i)*) <cit.>. Hence by Proposition <ref>, we have
-∑_i∈𝒱λ_2^(i)*=-λ_2^*=λ_max(∑_i∈𝒱X^(i)*).
Then it follows from (<ref>) that ∑_i∈𝒱λ_max(X^(i)*)≤λ_max(∑_i∈𝒱X^(i)*). On the other hand, by eigenvalue inequality, we know that ∑_i∈𝒱λ_max(X^(i)*)≥λ_max(∑_i∈𝒱X^(i)*), and hence
∑_i∈𝒱λ_max(X^(i)*)=λ_max(∑_i∈𝒱X^(i)*).
Since λ_max(X^(i)*)≤ f_ε_i(X^(i)*)≤λ_max(X^(i)*)+ε_iln N, we have ∑_i∈𝒱λ_max(X^(i)*)≤∑_i∈𝒱f_ε_i(X^(i)*)≤∑_i∈𝒱λ_max(X^(i)*)+∑_i∈𝒱ε_iln N. Hence it follows from (<ref>) and (<ref>) that (<ref>) holds.
On the other hand, by (<ref>), it follows that λ_max(X̂^(i)*)≤ f_ε_i(X̂^(i)*) and hence ∑_i∈𝒱λ_max(X̂^(i)*)≤∑_i∈𝒱f_ε_i(X̂^(i)*).
Note that {-λ_max(X̂^(i)*),μ̂^(i)*,{ŵ_k^(i)*},Ẑ^(i)*} is also a feasible solution to (<ref>), then this implies -∑_i∈𝒱λ_2^(i)*≤∑_i∈𝒱λ_max(X̂^(i)*). In addition, since {μ̂^(i)*,{ŵ_k^(i)*},Ẑ^(i)*} is an optimal solution to (<ref>), it follows that ∑_i∈𝒱f_ε_i(X̂^(i)*)≤∑_i∈𝒱f_ε_i(X^(i)*). Moreover, using (<ref>), we have -∑_i∈𝒱λ_2^(i)*≤∑_i∈𝒱λ_max(X̂^(i)*)≤∑_i∈𝒱f_ε_i(X̂^(i)*)≤∑_i∈𝒱f_ε_i(X^(i)*)≤ -∑_i∈𝒱λ_2^(i)*+∑_i∈𝒱ε_iln N, which proves the statement.
Proposition <ref> shows that one can obtain a good approximation of ∑_i∈𝒱λ_2^(i) by choosing ε_i, ∀ i∈𝒱, sufficiently small. Without loss of generality, we choose ε_i=ε, ∀ i∈𝒱. Note that (<ref>) explains why μ is introduced in (<ref>) instead of writing the constraint as λ_2(I-1/N1)≼∑_i∈𝒱∑_k∈ℰ(i)w_k^(i)E_k as in <cit.>: the λ_2I term is needed for the relaxation. It is worth noticing that μ^* does not necessarily equal λ_2^*/N in (<ref>); in fact, any (λ_2^*,μ^*,{w_k^(i)*}) with μ^*≥λ_2^*/N is an optimal solution of (<ref>). Also note that f_ε(X) is convex but not strictly convex. Indeed, since f_ε(cI)=εln(Ne^c/ε)=εln N+c, for all 0≤α≤ 1 we have α f_ε(I)+(1-α)f_ε(2I)=εln N+(2-α)=f_ε((2-α)I)=f_ε(α I+(1-α)2I).
Hence f_ε(X) is not strictly convex.
§.§ Projected Dynamics and Numerical Examples
Now we apply the projected dynamical system to solve (<ref>). To abbreviate the notation, denote x=[x^(1)T,⋯,x^(N)T]^T, where x^(i)=[μ^(i),{w^(i)_k},{z^(i)_l}]^T and v=[v^(1),⋯,v^(N)]^T.
The projected dynamics for each agent i is given by
μ̇^(i) =⟨∇_X^(i)f_ε(X^(i)),1⟩_M=-∂ℒ_𝒜(x,v)/∂μ^(i),
ẇ_k^(i) =Π_ℝ_+(w_k^(i),⟨∇_X^(i)f_ε(X^(i)),E_k⟩_M-v^(i)
-(∑_p∈ℰ(i)w_p^(i)-1))
=Π_ℝ_+(w_k^(i),-∂ℒ_𝒜(x,v)/∂ w_k^(i)), ∀ k∈ℰ(i),
ż^(i)_l =-∑_j∈𝒩(i)⟨∇_X^(i)f_ε(X^(i))-∇_X^(j)f_ε(X^(j)),B_l⟩_M
=-∂ℒ_𝒜(x,v)/∂ z_l^(i),
v̇^(i) =∑_k∈ℰ(i)w_k^(i)-1=∂ℒ_𝒜(x,v)/∂ v^(i),
where ℒ_𝒜(x,v)=∑_i∈𝒱{f_ε(X^(i))+v^(i)(∑_k∈ℰ(i)w_k^(i)-1)+1/2(∑_k∈ℰ(i)w_k^(i)-1)^2} and ∇_X^(i)f_ε(X^(i)) is given by (<ref>).
Note that in (<ref>), (<ref>), every agent only uses information belonging to its neighbours and to itself. The information exchanged between agents is the gradient ∇_X^(i)f_ε(X^(i)). We remark that, although at each time step each agent has to communicate a vector of size N(N+1)/2 to its neighbours, this is the price to pay in order to solve the problem distributedly. This is because of the “dense” structure of the problem (we make no special assumptions on the graph topology) and the constraint that the communication network is the physical network itself; these features make it hard to decompose the problem into smaller pieces.
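For concreteness, the following is a schematic, centralized simulation of the per-agent updates above (a sketch of ours, not the authors' reference implementation). It assumes ⟨A,B⟩_M = tr(AB) and stores each Z^(i) directly as a symmetric matrix, which matches the coordinate dynamics when {B_l} is an orthonormal basis; the max(0,·) step is the usual Euler discretization of the projection Π_ℝ_+:

import numpy as np

def grad_f_eps(X, eps):
    lam, U = np.linalg.eigh(X)
    w = np.exp((lam - lam.max()) / eps); w /= w.sum()
    return (U * w) @ U.T

def simulate(edges, N, eps=0.01, dt=0.01, steps=30000):
    ones = np.ones((N, N))
    E = []
    for (i, j) in edges:                       # E_k = (e_i - e_j)(e_i - e_j)^T
        d = np.zeros(N); d[i], d[j] = 1.0, -1.0
        E.append(np.outer(d, d))
    inc = [[k for k, e in enumerate(edges) if a in e] for a in range(N)]
    nbr = [[j if i == a else i for (i, j) in edges if a in (i, j)] for a in range(N)]
    mu, v = np.zeros(N), np.zeros(N)
    w = {(a, k): 1.0 / len(inc[a]) for a in range(N) for k in inc[a]}
    Z = [np.zeros((N, N)) for _ in range(N)]
    for _ in range(steps):
        X = [-mu[a] * ones - sum(w[a, k] * E[k] for k in inc[a])
             + sum(Z[a] - Z[j] for j in nbr[a]) for a in range(N)]
        G = [grad_f_eps(X[a], eps) for a in range(N)]   # gradients the agents exchange
        for a in range(N):
            slack = sum(w[a, k] for k in inc[a]) - 1.0
            mu[a] += dt * np.trace(G[a] @ ones)
            for k in inc[a]:
                g = np.trace(G[a] @ E[k]) - v[a] - slack
                w[a, k] = max(0.0, w[a, k] + dt * g)    # Euler step + projection onto R_+
            Z[a] = Z[a] - dt * sum(G[a] - G[j] for j in nbr[a])
            v[a] += dt * slack
    L = sum(w[a, k] * E[k] for a in range(N) for k in inc[a])
    return np.sort(np.linalg.eigvalsh(L))[1]            # algebraic connectivity lambda_2(L_w)

print(simulate([(0, 1), (1, 2)], N=3))

For the three-node path graph of the example below, the returned value should approach λ_2^* = 1.5 up to the relaxation error of order εNln N.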
The system (<ref>), (<ref>) is well-defined and the trajectory asymptotically converges to one of the saddle points of (<ref>) for all initial values μ^(i)(0),z_l^(i)(0),v^(i)(0) ∈ℝ, w_k^(i)(0)∈ℝ_+.
f_ε(X) has a Lipschitz continuous gradient with respect to x given that X=∑_ix_iA_i, where all A_i are symmetric matrices <cit.>. By Theorem 2.5 in <cit.>, for any initial value μ^(i)(0)∈ℝ, w_k^(i)(0)∈ℝ_+, v^(i)(0)∈ℝ and z_l^(i)(0)∈ℝ, there exists a unique Carathéodory solution which continuously depends on the initial value. Therefore the system (<ref>), (<ref>) is well-defined and by Proposition <ref>, the system (<ref>), (<ref>) asymptotically converges to one of the saddle points of (<ref>).
It seems that when simulating the projected dynamics (<ref>), (<ref>), one has to perform an eigenvalue decomposition of X^(i) to compute ∇_X^(i) f(X^(i)) at each time step. However, since the factors e^λ_i(X^(i))/ε decrease very rapidly as λ_i decreases, the gradient numerically depends only on the few largest eigenvalues and the corresponding eigenvectors <cit.>.
Extreme eigenvalues converge first in iterative methods such as the Arnoldi scheme; hence one does not have to perform the entire eigenvalue decomposition, and the numerical complexity is reduced.
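A sketch of this shortcut (ours), assuming X^(i) is stored as a dense symmetric NumPy array; scipy's eigsh wraps the ARPACK Lanczos/Arnoldi scheme and returns only the requested extreme eigenpairs (it requires r < N):

import numpy as np
from scipy.sparse.linalg import eigsh

def grad_f_eps_topr(X, eps, r=3):
    # keep only the r algebraically largest eigenpairs; the discarded terms
    # carry softmax weights of order exp(-gap/eps) and are negligible for small eps
    lam, U = eigsh(X, k=r, which='LA')
    w = np.exp((lam - lam.max()) / eps)
    w /= w.sum()
    return (U * w) @ U.T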
By Proposition <ref>, one can first choose an ε and obtain an “optimal” algebraic connectivity under the current choice of ε.
If the approximation error εNln N relative to this “optimal” algebraic connectivity is not satisfactory (for example, if εNln N is approximately 10% of it), one can decrease ε until the desired relative error is achieved.
We run a simple numerical example on the graph illustrated in Fig. <ref> to show that the variables indeed converge to the optimal solution. Using CVX, we obtain the optimal solution of (<ref>): w_1^(1)*=w_2^(3)*=1, w_1^(2)*=w_2^(2)*=0.5 and λ_2^*=1.5. The forward Euler method is used to discretize (<ref>), (<ref>), and we choose ε=0.01 and time step size Δt=0.01.
Fig. <ref> shows that the edge weights converge to the optimal solution.
Now we consider a more complicated graph with ten nodes.
The forward Euler method is again used to discretize (<ref>), (<ref>), and we choose different values of ε and of the time step size Δt to illustrate the effect of ε on Δt. According to <cit.>, the choice of ε affects the Lipschitz constant of ∇_X f_ε(X) as well as the Hessian of f_ε(X): the smaller ε is, the larger the Lipschitz constant of ∇_X f_ε(X) becomes. Intuitively, a larger Lipschitz constant of the gradient calls for a smaller step size, to avoid stepping back and forth around the optimum without converging.
Using CVX, we find that the optimal value of (<ref>) is 1.141. We run the simulation for t∈[0,50] with different ε and time step sizes. In the end, λ_2(L_w) equals 1.085, 1.091, and 1.128 when choosing ε=10^-2, Δt=10^-3; ε=10^-3, Δt=10^-3; and ε=10^-3, Δt=10^-4, respectively. As illustrated in Fig. <ref>, the algebraic connectivity does not converge to the optimal value of the unrelaxed, “centralized” problem (P_c). However, as we decrease ε, the limiting algebraic connectivity gets closer to the optimal value of (P_c), which illustrates the relaxation effect. In addition, the evolution of λ_2(L_w) exhibits large oscillations when ε=10^-3 and Δt=10^-3, while it behaves much more smoothly when Δt=10^-4. This empirically confirms that a smaller ε requires a smaller time step.
Next we run a numerical example on a graph with 50 nodes. The optimal value of (<ref>) is computed by CVX. The evolution of λ_2(L_w) is illustrated in Fig. <ref>. We choose ε=10^-4 and Δt=10^-2. In this case, the approximation error εNln N≈ 0.0196, and we can see from Fig. <ref> that the gap between the limiting algebraic connectivity and the optimal value computed by CVX stays below 0.0196.
Fig. <ref> illustrates that a smaller ε requires a smaller step size when discretizing the system (<ref>), (<ref>). This means that as the number of nodes N grows, obtaining a good approximation of (<ref>) requires a very small ε, and hence a very small step size. The system states then evolve slowly per iteration, so a large number of iterations is needed to reach the equilibrium.
One practical remedy for the issue above is the following. We modify the constraint ∑_k∈ℰ(i)w_k^(i)=1 in (<ref>) to ∑_k∈ℰ(i)w_k^(i)=a, where a>0, and call the modified optimization problem and its relaxed nonlinear programming problem (P_M) and (NP_M), respectively. By checking the optimality conditions (<ref>), we conclude that {λ_2^(i)*,μ^(i)*,{w_k^(i)*},Z^(i)*} is the optimal solution to (<ref>) if and only if {aλ_2^(i)*,aμ^(i)*,{aw_k^(i)*},aZ^(i)*} is the optimal solution to (P_M) (since all the optimality conditions are linear). Using Proposition <ref>, we conclude that -∑_i∈𝒱λ_2^(i)*≤1/a∑_i∈𝒱λ_max(X̂^(i)*)≤1/a∑_i∈𝒱f_ε_i(X̂^(i)*)≤ -∑_i∈𝒱λ_2^(i)*+1/aε Nln N, provided that {μ̂^(i)*,{ŵ_k^(i)*},Ẑ^(i)*} is the optimal solution to (NP_M). Therefore, apart from choosing ε small, we can choose a sufficiently large, solve (NP_M), and divide the optimal weights obtained from (NP_M) by a to suppress the approximation error. In other words, ε does not need to be extremely small, so the time step size does not need to be extremely small either, and the number of iterations needed to reach the equilibrium is kept moderate as N grows.
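The identity behind this rescaling is λ_2(aL) = aλ_2(L) for any weighted Laplacian L; a two-line numerical check (ours):

import numpy as np
L = np.array([[1., -1., 0.], [-1., 2., -1.], [0., -1., 1.]])  # path-graph Laplacian
a = 10.0
assert np.isclose(np.sort(np.linalg.eigvalsh(a * L))[1],
                  a * np.sort(np.linalg.eigvalsh(L))[1])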
Consider a graph with 30 nodes. If we do not use the methodology above, namely a=1, then ε needs to be at most 2.1416×10^-5 for the relative approximation error of the optimal algebraic connectivity, εNln N/λ_2(L_w^*), to be within 5%. For comparison, we instead fix ε=0.5 and choose a such that the relative approximation error is within 5%. Multiple time steps have been tried, and the largest ones for which the discretized systems converge are illustrated in Fig. <ref>. The same initial values and forward Euler discretization are used. With the methodology above, the algebraic connectivity converges to the optimal value in far fewer iterations.
Consider the same graph used in Example <ref>. We use forward Euler for discretization, with the same initial values, time step sizes, and ε (ε=0.5, Δt=0.02). Fig. <ref> shows how a affects the approximation error.
The complexity per iteration for agent i is 𝒪(|ℰ(i)|·N^2). Since the right-hand sides of (<ref>), (<ref>) involve only structured matrices such as 1, E_k, and B_l, no general matrix multiplications are needed, and hence the complexity is greatly reduced.
We test the algorithm on larger-scale networks and plot the ratio between the running time and |ℰ(i)|·N^2 versus N. Forward Euler is used for discretization. To eliminate the influence of the network topology and the number of edges on the convergence, we fix the family of graphs and let N vary, considering ring graphs and complete graphs. We set ε=0.5, Δt=0.01, and choose a such that the relative approximation error of the objective function is within 5%. The iterations are terminated when the infinity norm of the right-hand side of (<ref>), (<ref>) is smaller than 10^-3. The result is shown in Fig. <ref>: the ratio between the running time and |ℰ(i)|·N^2 is approximately constant as N changes.
§ CONCLUSION
In this paper, a projected saddle point dynamics of the augmented Lagrangian is presented to solve convex (not necessarily strictly convex) optimization problems. As a supplement to the analysis in <cit.>, we show that the projected saddle point dynamics converges to one of the saddle points.
Moreover, we consider the problem of distributedly maximizing the algebraic connectivity of an undirected communication network by optimizing the port gains of the nodes (base stations). The original SDP is relaxed into an NP, and the aforementioned projected dynamical system is applied to solve it. Numerical examples illustrate (1) the convergence of the edge weights to one of the optimal solutions, (2) the effect of ε on the choice of time step size, and (3) the per-iteration complexity of the algorithm. A methodology is presented that keeps the number of iterations needed to reach the equilibrium moderate.
http://arxiv.org/abs/1701.07688v2 | 20170126132726 | Nonclassical distance in multimode bosonic systems | [ "Ranjith Nair" ] | quant-ph | [ "quant-ph" ] |
elernair@nus.edu.sg
Department of Electrical and Computer Engineering,
National University of Singapore, 4 Engineering Drive 3, 117583 Singapore
We revisit the notion of nonclassical distance of states of bosonic quantum systems introduced in [http://journals.aps.org/pra/abstract/10.1103/PhysRevA.35.725M. Hillery, Phys. Rev. A 35, 725 (1987)] in a general multimode setting. After reviewing its definition, we establish some of its general properties. We obtain new upper and lower bounds on the nonclassical distance in terms of the supremum of the Husimi function of the state. Considering several examples, we elucidate the cases for which our lower bound is tight, which include the multimode number states and a class of multimode N00N states. The latter provide examples of states of definite photon number n ≥ 2 whose nonclassical distance can be made arbitrarily close to the upper limit of 1 by increasing the number of modes. We show that the nonclassical distance of the even and odd Schrödinger cat states is bounded away from unity regardless of how macroscopic the superpositions are, and that the nonclassical distance is not necessarily monotonically increasing with respect to macroscopicity.
Nonclassical distance in multimode bosonic systems
Ranjith Nair
December 30, 2023
==================================================
§ INTRODUCTION
Consider M independent bosonic
systems such as electromagnetic modes or nanomechanical oscillators and let H be the Hilbert space describing the entire system. Let S = S(H) be the system's state space, i.e., the set of all density operators on H. A distinguished subset of the state space S is the set S_cl of classical states. For α = (α_1, …, α_M) ∈ ℂ^M, let α = α_1α_2⋯α_M denote the corresponding M-mode coherent state[For clarity, in this paper we use curved bras and kets α, β, etc. to indicate coherent states, and the usual bra and ket symbols ⟨ϕ|, |ψ⟩, etc. for general states. For n = 0, 1, 2, …, |n⟩ denotes the number state of n photons.]. S_cl consists of the states σ that have a non-negative diagonal-coherent-state or P representation <cit.>:
σ = ∫_ℂ^M^2Mα Pα αα,
where Pα≥ 0 is a probability distribution, possibly with delta-function singularities. We will use the language of quantum optics in this paper, but many of the ideas have counterparts in other continuous-variable systems with effective harmonic-oscillator Hamiltonians. Classical states are important in quantum optics as models for the radiation from commonly-occurring natural sources. Their photodetection statistics can be described in semiclassical terms <cit.>, and they are readily generated in the laboratory from laser sources.
Many characteristic quantum-optical phenomena such as sub-Poissonian photon statistics, antibunching, quadrature squeezing, and entanglement are displayed only by nonclassical states, i.e., states ρ∈ S_ncl = S \ S_cl. Several nonclassical states have been introduced theoretically and generated experimentally for various applications, well-known examples being the number states of one or more photons, the single-mode and two-mode squeezed coherent states, the Schrödinger cat states, and the N00N states – see, e.g., refs. <cit.> and references therein for these and many other examples. Since the absence of any one of the above characteristic quantum features is not a guarantee that a given state is classical, much attention has been focused on operational necessary criteria for nonclassicality – see, e.g., refs. <cit.>.
Applications like quantum cryptography, quantum computation, and quantum metrology rely on nonclassical states to gain a quantum advantage not achievable using classical states. In view of the fact that classical states are much easier to generate than tailored nonclassical ones, and that the latter are very sensitive to decoherence, a quantitative measure of the nonclassicality of a given quantum state is very useful.
Perhaps the earliest such measure proposed is the nonclassical distance introduced by Hillery in ref. <cit.>. It is defined as the minimum trace-norm distance between the given state and states in the set S_ cl, and provides an upper bound on the Kolmogorov (or l_1-) distance between the probability distributions obtained on measuring the given state and an arbitrary classical state using any quantum measurement. As such, it provides a generic bound on the advantage that the given state can provide over classical ones in any task of interest. Unfortunately, calculating the nonclassical distance is a difficult problem in general, and to the best of our knowledge, no exact results have appeared in the literature. However, several upper and lower bounds on it have been given <cit.>.
Many other measures of nonclassicality have since been defined in the literature. In ref. <cit.>, Lee introduced the notion of nonclassical depth of a single-mode state, defined as the minimum number of additive Gaussian noise photons required to render the state classical. Techniques for calculating it were also given <cit.>. The same concept was independently defined by Lütkenhaus and Barnett <cit.>. While the nonclassical depth is an informative measure of nonclassicality <cit.> for Gaussian states <cit.>, it was shown in <cit.> that it has the maximal value of 1 noise photon for any pure non-Gaussian nonclassical state, rendering it of limited value as a nonclassicality measure for such states. Very recently, Sabapathy has generalized the nonclassical depth to multimode states and also to quantum channels <cit.>.
Partly motivated by the difficulties of calculating the trace-norm distance (and hence the nonclassical distance), several other distance-based measures of nonclassicality have been proposed. Dodonov et al. <cit.> introduced the Hilbert-Schmidt distance between a given state and the set of pure coherent states as a measure of nonclassicality. Marian et al. <cit.> studied the minimum Bures distance <cit.> (closely related to the quantum fidelity <cit.>) between single-mode Gaussian states and the set of classical Gaussian states. Malbouisson and Baseia <cit.> studied the Bures distance and the Hilbert-Schmidt distance of more general states relative to the set of pure coherent states. Marian et al. <cit.> studied the nonclassicality of single-mode Gaussian states using the minimum relative entropy to the set of classical Gaussian states as a `distance' measure. A measure defined using the Wehrl entropy has recently been explored by Bose <cit.>. Unfortunately, in these works, the chosen distance measure has been minimized over only a subset of S_ cl chosen in a more or less ad hoc manner.
In ref. <cit.>, Asbóth et al. exploited the close connection between input nonclassicality and generation of entanglement at the output of a passive linear optics network to define and estimate new nonclassicality measures. Vogel and co-workers have used the minimum number of terms in an expansion of the given state as a superposition of coherent states to define an algebraic nonclassicality measure <cit.>. Nonclassicality and entanglement are notoriously fragile under the action of decohering channels, the attenuator, additive Gaussian noise, and amplifier channels <cit.> being particularly ubiquitous in applications. The degradation of nonclassicality and entanglement under loss and additive noise has also been extensively studied – see, e.g., refs. <cit.>.
In this paper, we revisit the nonclassical distance of ref. <cit.> in a general multimode setting. In Section <ref>, we motivate and review its definition. In Section <ref>, we establish a number of its general properties. In Section <ref>, we establish improved upper and lower bounds on it in terms of the Husimi function of the given state. In Section <ref>, we consider several examples illustrating our results. In particular, we show that our lower bound is saturated for multimode number states and a class of multimode N00N states, and very nearly saturated for even and odd Schrödinger cat states in the “macroscopic” regime. We conclude by discussing several possible directions for future work in Section <ref>.
§ NONCLASSICAL DISTANCE
Given an arbitrary state ρ∈ S, its nonclassical distance δ(ρ) is defined as
δ(ρ) := inf_σ∈ S_cl D(ρ, σ) = 1/2 inf_σ∈ S_cl ‖ρ - σ‖_1,
where D(ρ, σ) is the trace distance between ρ and σ, with the latter of the form of Eq. (<ref>). ‖X‖_1 := tr√(X^† X) is the trace norm of the trace-class operator X. Note that our definition differs from that in <cit.> by a factor of 1/2. Following a convention often used in quantum information <cit.>, we have used the trace distance to measure the separation between ρ and S_cl – the nonclassical distance then satisfies 0 ≤δ(ρ) ≤ 1. A classical state evidently has zero nonclassical distance. We show later that δ(ρ) < 1 for all ρ∈ S (see Sec. <ref>). In Sec. <ref>, we show that δ(ρ) > 0 for all ρ∈ S_ncl, so that δ(ρ) > 0 is a necessary and sufficient condition for nonclassicality.
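In a finite-dimensional truncation, the trace distance between two given states is straightforward to evaluate; a minimal helper of ours, using the fact that for the Hermitian operator ρ - σ the trace norm is the sum of the absolute values of its eigenvalues:

import numpy as np

def trace_distance(rho, sigma):
    # D(rho, sigma) = (1/2) tr|rho - sigma| for Hermitian matrices
    return 0.5 * np.abs(np.linalg.eigvalsh(rho - sigma)).sum()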
To appreciate the utility of the definition (<ref>), consider an arbitrary positive operator-valued measure (POVM) <cit.> {Π(x)}_x ∈X describing a quantum measurement yielding an outcome x in an arbitrary measurable space X and let σ∈ S_ cl. The Kolmogorov distance between the classical probability densities P_ρ(x) = ρΠ(x) and P_σ(x) = σΠ(x) resulting from measuring the given POVM on ρ and σ is
K(P_ρ, P_σ) := 1/2∫_X dx |P_ρ(x) - P_σ(x)|
= 1/2∫_X dx |tr[(ρ - σ)Π(x)]|
≤ 1/2∫_X dx tr[|ρ - σ|Π(x)]
= 1/2 ‖ρ - σ‖_1,
where the inequality follows from the fact that |tr(XΠ)| ≤ tr(|X|Π) for Hermitian trace-class X (with |X| = √(X^† X)) and Π(x) ≥ 0.
inf_σ∈ S_ clmax_{Π_x} KP_ρ, P_σ = inf_σ∈ S_ cl D(ρ, σ) = δ(ρ).
The nonclassical distance thus provides a measurement-independent (and hence, application-independent) quantification of the minimum distinguishability between the probability distributions generated by the given state and any classical state. In other words, it generically quantifies the possible advantage that can be gained from the given nonclassical state (which is an “expensive” resource in being typically hard to generate and preserve) over classical states (which are “cheap” resources as they are easy to generate). Unfortunately, the nonclassical distance is difficult to calculate in general, owing both to the difficulty of calculating the trace distance in Eq. (<ref>) (which requires diagonalizing ρ - σ) and to having to minimize it over all of S_ cl. Both these difficulties are compounded by the infinite dimensionality of H.
§ GENERAL PROPERTIES OF Δ(Ρ)
We now establish a number of useful properties of δ(ρ), many of which are used in the examples to follow. Most of the properties are intuitive, but (unless indicated) have not been formally stated in the literature to the best of our knowledge. Some of the results – such as the positivity of δ(ρ) for all ρ∈ S_ ncl – are not obvious.
§.§ Positivity for ρ∈ S_ ncl
S_ cl has an interesting topological structure within S. Using an ingenious argument, it was shown in <cit.> that there are nonclassical states arbitrarily close in trace distance to any single-mode classical state. This argument is readily adapted to the multimode case, showing that S_ ncl is dense in S with respect to the trace distance.
On the other hand, it was also shown in <cit.> that the non-vacuum number states (which are nonclassical[From the definition (<ref>), we see that a non-vacuum classical state must have a positive probability of counting any number of photons in some mode. Thus, the non-vacuum multimode number states are nonclassical.]), are interior points of S_ ncl with positive nonclassical distance. In order to generalize this statement to all ρ∈ S_ ncl, we note that the convex set S_ cl is closed with respect to the weak topology on S <cit.> and is therefore also closed with respect to trace distance (<cit.>, Lemma 11.1). Equivalently, S_ ncl is open in S so that the nonclassical distance is strictly positive for all ρ∈ S_ ncl. Positivity of δ(ρ) is thus a necessary and sufficient condition for ρ to be nonclassical.
This juxtaposition of S_cl and S_ncl is analogous to that of the set of separable states and the set of non-separable states of a bipartite system of which at least one subsystem is infinite-dimensional. For such systems, it was shown in <cit.> that the set of nonseparable states is open and dense in the set of all states with respect to trace distance.
§.§ Non-increase under classicality-preserving channels
Let T: S → S' be a quantum channel (a completely positive trace-preserving map) <cit.> between the (not necessarily identical) state spaces S and S' that takes classical states to classical states, i.e., T S_ cl⊂ S'_ cl. Such channels are exactly the ones defined as “classical channels” in <cit.> and include the more restrictive nonclassicality-breaking channels <cit.>. We have
δ(Tρ) = inf_σ^'∈ S'_ cl D(Tρ, σ')
≤inf_σ∈ S_ cl D(Tρ, Tσ)
≤inf_σ∈ S_ cl D(ρ, σ)
= δ(ρ),
where the first inequality follows from the assumption that T is classicality-preserving and the second follows from the non-increase of trace distance under the action of quantum channels.
§.§ Invariance to adjoining a classical state
Suppose we have two subsystems A and B each consisting of one or more modes. Let ρ∈ S^A be a state of A and σ_0 ∈ S_ cl^B be a classical state of B. We have
δ(ρ⊗σ_0) = δ(ρ).
To see this, first note that
δ(ρ⊗σ_0) ≥δ(ρ)
from Sec. <ref> since taking the partial trace over B is a classicality-preserving quantum channel. On the other hand, if σ^A ∈ S_ cl^A, σ^A ⊗σ_0 ∈ S_ cl^AB, and we have
ρ⊗σ_0 - σ^A ⊗σ_0 _1 = ρ - σ^A _1,
so that
δ(ρ⊗σ_0) ≤1/2inf_σ^A ∈ S_ cl^Aρ⊗σ_0 - σ^A ⊗σ_0 _1
= δ(ρ).
Thus, adjoining a classical state does not change the nonclassical distance, as may be expected intuitively.
§.§ Invariance under affine optics transformations
Let a = (a_1,…, a_M) ^ be the vector of annihilation operators corresponding to the M modes of the system. Consider the Heisenberg-picture unitary transformation
b = Ua + γ,
where b = (b_1,…, b_M)^ is the output vector of annihilation operators, U is an M × M unitary matrix, and γ = (γ_1, …, γ_M)^ is an arbitrary vector in ℂ^M. Such a transformation corresponds to the most general unitary transformation that can be performed on the input modes using passive linear optics, augmented by displacements in each mode by amounts given by γ (hence the terminology “affine optics transformation”). The corresponding quantum channel T_U,γ maps (in the Schrödinger picture) product coherent states into product coherent states according to
α↦ Uα + γ.
We thus have T_U,γ S_ cl = S_ cl so that for any ρ∈ S,
δ(T_U,γρ) = inf_σ∈ S_ cl D(T_U,γ ρ, σ)
=inf_σ∈ S_ cl D(T_U,γ ρ, T_U,γ σ)
=inf_σ∈ S_ cl D(ρ, σ)
= δ(ρ),
where we have used the unitary invariance of the trace distance.
The above result may be viewed as a quantitative generalization to M-mode affine optics transformations of the well-known fact that 2-port beamsplitters cannot generate nonclassical states from classical ones <cit.>. A similar invariance result for the Bures distance has been stated by Marian et al. <cit.>, and the special case of the above result for pure dispacements was shown in ref. <cit.>. The preceding two properties imply that nonlinear processes of second order or higher are required to increase the nonclassical distance of a given state augmented by auxiliary modes in classical states.
§.§ Convexity
The nonclassical distance is convex in ρ, as shown in ref. <cit.>. Indeed, let ρ_1,ρ_2 ∈ S, σ_1,σ_2 ∈ S_ cl, and ρ = (1-ϵ) ρ_1 + ϵρ_2 for any ϵ with 0 ≤ϵ≤ 1. The state σ = (1-ϵ) σ_1 + ϵσ_2 ∈ S_ cl and we have
ρ - σ_1 ≤ (1-ϵ) ρ_1 - σ_1_1 + ϵρ_2 - σ_2_1
by convexity of the trace norm, so that
inf_σ' ∈ S_ clρ - σ'_1
≤ (1-ϵ) inf_σ_1 ∈ S_ clρ_1 - σ_1_1 + ϵinf_σ_2 ∈ S_ clρ_2 - σ_2_1,
and so
δ(ρ) ≤ (1-ϵ) δ(ρ_1) + ϵ δ(ρ_2).
§ UPPER AND LOWER BOUNDS ON THE NONCLASSICAL DISTANCE
The Husimi Q function is a quasiprobability distribution that plays a major role in theoretical quantum optics <cit.> and is also experimentally accessible via heterodyne detection <cit.>. For given ρ∈ S and α∈ℂ^M, it is defined as
Q_ρ(α) = αρα/π^M
and is a normalized true probability density. Some of the bounds on the nonclassical distance
derived in refs. <cit.> involve the supremum of the Husimi function of the state. For a general state ρ∈ S, let us define
Q_ρα := αρα = π^M Q_ρ(α);
m(ρ) := sup_α∈ℂ^MQ_ρ(α).
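For a single-mode pure state specified by (truncated) Fock-basis coefficients, m(|ψ⟩) can be estimated by brute force from (α|ψ⟩ = e^{-|α|²/2} ∑_n c_n (α^*)^n/√(n!); a crude sketch of ours, in which the grid range and resolution are arbitrary choices:

import numpy as np
from math import factorial

def m_estimate(c, half_width=4.0, pts=161):
    # m(|psi>) ~= max over a grid of alpha of |<alpha|psi>|^2
    c = np.asarray(c, dtype=complex)
    n = np.arange(len(c))
    sq = np.sqrt(np.array([factorial(k) for k in n], dtype=float))
    xs = np.linspace(-half_width, half_width, pts)
    best = 0.0
    for x in xs:
        for y in xs:
            a = x + 1j * y
            amp = np.exp(-abs(a) ** 2 / 2) * np.sum(c * np.conj(a) ** n / sq)
            best = max(best, abs(amp) ** 2)
    return best

print(m_estimate([0, 1]))   # single photon: sup is gamma_1 = e^{-1} ~ 0.3679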
In this Section, we make use of relations between the quantum fidelity and trace distance <cit.> in order to obtain stronger upper and lower bounds on δ(ρ) in terms of mρ than those in ref. <cit.>. These relations also allow for more direct derivations and make the cases for which the bounds are saturated more transparent.
§.§ Lower bounds
The fidelity between any two states ρ and σ is given by F(ρ, σ) = tr√(√(ρ)σ√(ρ)) <cit.>. The trace distance and fidelity obey the inequality <cit.>:
1-F(ρ, σ) ≤ D(ρ,σ).
If ρ∈ S is given and σ∈ S_cl is a classical state of the form (<ref>), this gives the lower bound
1 - sup_σ∈ S_cl F(ρ, σ) ≤δ(ρ)
on the nonclassical distance of ρ. Since fidelity is a concave function of its arguments <cit.>, this expression does not readily simplify on substituting Eq. (<ref>) for σ.
For pure states ρ = |ψ⟩⟨ψ|, a stronger inequality than (<ref>) holds <cit.>[We use expressions like δ(|ψ⟩), F(|ψ⟩, σ), D(|ψ⟩, σ), etc. with the obvious meanings when pure states are involved.]:
1-F^2(|ψ⟩, σ) = 1 -⟨ψ|σ|ψ⟩≤ D(|ψ⟩,σ).
For σ of the form of Eq. (<ref>), this gives the lower bound
δ(|ψ⟩) ≥ 1- sup_Pα∫_ℂ^M^2Mα Pα αψ⟩^2
= 1- m|ψ⟩,
since P(α) ≥ 0 and integrates to one. This strengthens (by a factor of 2) the bound (4.2) of <cit.> specialized to pure states.
Since m(⊗_m=1^M|ψ_m⟩) = ∏_m=1^M m(|ψ_m⟩), we have that
δ(⊗_m=1^M|ψ_m⟩) ≥ 1 - ∏_m=1^M m(|ψ_m⟩).
In particular, the nonclassical distance of a product of identical pure nonclassical states approaches 1 at least exponentially fast in the number of copies.
§.§ Upper bounds
We can also obtain upper bounds on δ(ρ) via the fidelity. Using the inequality <cit.>
D^2(ρ,σ) ≤ 1 - F^2(ρ,σ),
we have for any classical state σ of the form of Eq. (<ref>),
D^2(ρ,σ)
≤ 1 - Fρ, ∫_ℂ^M^2Mα Pα αα^2
≤ 1- ∫_ℂ^M^2Mα Pα Fρ, αα^2,
using the concavity of fidelity. It follows that
δ(ρ) = inf_P(α) D(ρ, σ)
≤{1 - sup_P(α)∫_ℂ^M^2Mα P(α) F(ρ, |α)(α|)^2}^1/2
= [1 - m(ρ)]^1/2.
This result is a generalization of the bound (4.14) of <cit.> to mixed states.
Since m(ρ) > 0 (otherwise Q_ρ(α) ≡ 0, which is impossible since ∫_ℂ^M^2Mα Q_ρ(α) = 1), the upper bound shows that δ(ρ) < 1 for all states.
If ρ = |ψ⟩⟨ψ| is pure and the closest classical state (assuming one exists) to |ψ⟩ is a pure coherent state, the above upper bound is an equality because all the inequalities from (<ref>) to (<ref>) are saturated in this case. A coherent state α_⋆ that satisfies m|ψ⟩ = α_⋆ψ^2 indeed achieves the greatest possible fidelity among σ∈ S_ cl (and hence the smallest possible Bures distance) with a given |ψ⟩∈ S_ ncl, as the case of saturation of (<ref>) shows[Thus, for pure states |ψ⟩, the lower bound (<ref>) in fact determines the minimum Bures distance with respect to S_ cl, making the restriction to the pure states of S_ cl imposed in <cit.> unnecessary.]. However, it is not the case in general that a pure coherent state is the closest classical state in trace distance to a given pure nonclassical state, as we will see in Section <ref>.
Overall, for pure states |ψ⟩, we thus have the two-sided Q-function-based bounds
1 - m(|ψ⟩) ≤ δ(|ψ⟩) ≤ [1 - m(|ψ⟩)]^1/2.
Using the triangle inequality for the trace distance, we can relate the nonclassical distances of two states ρ and ρ' as follows. For any σ∈ S_ cl, we have
D(ρ',σ) ≤ D(ρ,ρ') + D(ρ,σ),
so that
|δ(ρ) - δ(ρ')| ≤ D(ρ,ρ'),
giving an upper (lower) bound on the larger (smaller) of δ(ρ) and δ(ρ').
§ EXAMPLES
We now illustrate our general results of Secs. <ref>-<ref> with a few examples, and also obtain the exact value of the nonclassical distance for some states.
§.§ Multimode number states
First consider the single-mode number states |n⟩, n=0,1,2,… Following ref. <cit.>, let us define the numbers
γ_n := sup_α∈ℂ |(α|n⟩|^2
= sup_x ≥ 0 e^-x x^n/n!
= 1 for n = 0, and γ_n = e^-n n^n/n! for n ≥ 1.
γ_n is thus the probability that a Poisson random variable of mean n takes the value n. It can be verified that {γ_n} is a decreasing sequence and that it satisfies
e^-n≤γ_n ≤1/√(2π n),
where the upper bound follows from Stirling's approximation for n! and is the asymptote of γ_n for large n. The lower bound (<ref>) for the number state |n⟩ is then
δ(|n⟩) ≥ 1 -γ_n.
Consider the classical state
∘σ_n = 1/2π∫_0^2π dθ |√(n) e^iθ)(√(n) e^iθ|
= e^-n∑_m=0^∞ n^m/m! |m⟩⟨m|,
which (as the overcircle on σ suggests) is the uniformly phase-randomized coherent state of mean photon number n. The states ∘σ_n and |n⟩⟨n| commute and the trace distance between them is thus
D(∘σ_n, |n⟩) = 1 - γ_n,
saturating the lower bound (<ref>).
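This saturation is easy to verify numerically in a truncated Fock basis, where ∘σ_n and |n⟩⟨n| are both diagonal and the trace distance reduces to a classical l_1 distance (a sketch of ours; the truncation dimension need only make the Poisson tail negligible):

import numpy as np
from math import factorial

def gamma_n(n):
    return 1.0 if n == 0 else np.exp(-n) * n ** n / factorial(n)

def distance_to_phase_randomized(n, dim=80):
    # sigma_n has Poisson weights p_m; |n><n| is a point mass at m = n
    p = np.array([np.exp(-n) * float(n) ** m / factorial(m) for m in range(dim)])
    e = np.zeros(dim); e[n] = 1.0
    return 0.5 * np.abs(p - e).sum()

print(distance_to_phase_randomized(3), 1 - gamma_n(3))   # agree up to truncation error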
Now let |n⟩ =|n_1⟩|n_2⟩⋯|n_M⟩ be a product M-mode number state. We have
m(|n⟩) = sup_α∈ℂ^M |(α|n⟩|^2 = ∏_m=1^M γ_n_m,
so that the lower bound
δ(|n⟩) ≥ 1 - ∏_m=1^M γ_n_m
holds. As for the single-mode case, |n⟩ is an eigenstate of the classical state
∘σ_n := ∘σ_n_1⊗⋯⊗∘σ_n_M
with eigenvalue ∏_m=1^M γ_n_m, so that
D(∘σ_n, |n⟩) = 1 - ∏_m=1^M γ_n_m = δ(|n⟩).
Since γ_n decreases with increasing n, we see that increasing the photon number in any mode increases the nonclassical distance, as may be expected. More interestingly, consider the case where the total photon number n = ∑_m=1^M n_m is fixed but the number of modes may be varied. We then have
δ(|n⟩) = 1 - ∏_m=1^M γ_n_m≤ 1 - ∏_m=1^M e^-n_m = 1 -e^-n,
with the maximum achieved by a product state with one photon in each mode. Thus, spreading out the available energy over many modes increases the nonclassical distance of a multimode number state.
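For small total photon number the claim can be checked by enumerating a few partitions, e.g., for n = 4 (a sketch of ours):

import numpy as np
from math import factorial

def gamma_n(n):
    return 1.0 if n == 0 else np.exp(-n) * n ** n / factorial(n)

for part in ([4], [3, 1], [2, 2], [2, 1, 1], [1, 1, 1, 1]):
    print(part, 1 - np.prod([gamma_n(k) for k in part]))
# the maximum, 1 - e^{-4}, is attained by one photon in each of four modes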
§.§ Superposition states of definite total photon number
Consider now an M-mode single-photon state
|ψ⟩ = ∑_m=1^M c_m |0⟩⋯|0⟩_(m-1) modes|1⟩|0⟩⋯|0⟩_(M-m) modes,
with arbitrary coefficients {c_m}_1^M; ∑_m=1^M c_m^2 =1. We have, for α_2 = ∑_m=1^M α_m^2^1/2 the Euclidean norm on ℂ^M,
αψ^2 = e^-α_2^2∑_m=1^M α_m^* c_m ^2
≤ e^-α_2^2α_2^2 ∑_m=1^M c_m^2
≤γ_1 = e^-1,
which is achieved for α_m = c_m, so that 1- e^-1≤δ(|ψ⟩). In fact, |ψ⟩ can be obtained from the state |1⟩|0⟩⋯|0⟩ with nonclassical distance (1 -e^-1) by using a linear passive transformation of the form (<ref>) with γ =0 and any unitary U whose first column consists of the coefficients {c_m}_m=1^M. Therefore, using the properties of nonclassical distance from Sec. <ref>, any state of the form of Eq. (<ref>) has nonclassical distance (1 -e^-1) independent of the coefficients and the number of modes.
The situation for superposition states of two or more photons is very different. While the general case appears to be complicated, consider states of the form
|ψ⟩ =∑_m=1^M c_m|0⟩⋯|0⟩_(m-1) modes|n⟩|0⟩⋯|0⟩_(M-m) modes≡∑_m=1^M c_m |ψ_m⟩,
where n ≥ 2, and the coefficients {c_m} are arbitrary. For α∈ℂ^M, we have
αψ^2 = e^-α_2^2/n!∑_m=1^M α_m^*^n c_m^2
≤e^-α_2^2/n!∑_m=1^M α_m^n^2 max_m c_m^2
= e^-α_2^2/n!α_n^2nmax_m c_m^2
≤e^-α_2^2/n!α_2^2nmax_m c_m^2.
Here, α_p = ∑_m=1^M α_m^p^1/p is the l_p-norm on ℂ^M and we have used the inequality α_p ≥α_q for p < q <cit.>. The inequalities above are saturated by choosing α = (0,…, 0, √(n) e^-iθ_m_*/n, 0,…, 0), with the nonzero entry in mode m_* = arg max_m c_m and θ_m = ∠ c_m, so that
m(|ψ⟩) = γ_nmax_m c_m^2.
The lower bound (<ref>) is then maximized by choosing c_m = 1/√(M) for all m, in which case
δ(|ψ⟩) ≥ 1- γ_n/M.
Thus, in contrast to the single-photon case, for any n≥2, the nonclassical distance of the state (<ref>) with c_m = e^iθ_m/√(M) can be made arbitrarily close to 1 by increasing the number of modes. We call such states multimode N00N states, the usual N00N states <cit.> being recovered for M=2 and c_m =1/√(2).
We can show that the Husimi-function lower bound is achieved for multimode N00N states. For m=1,…, M, let
σ^(m) = |0⟩⟨0|^⊗(m-1) ⊗ ∘σ_n ⊗ |0⟩⟨0|^⊗(M-m),
so that for each term in the superposition (<ref>),
σ^(m)|ψ_m'⟩ = γ_n δ_mm'|ψ_m'⟩.
The classical state
σ = 1/M∑_m=1^M σ^(m)
satisfies
σ|ψ⟩ = γ_n/M|ψ⟩,
so that
D(σ, |ψ⟩) = 1 -γ_n/M = δ|ψ⟩.
In particular, for the N00N state |χ_n⟩ = (|n⟩|0⟩ + e^iθ|0⟩|n⟩)/√(2) with n photons, we have
δ(|χ_n⟩) = 1 - γ_1 for n = 1, and δ(|χ_n⟩) = 1 - γ_n/2 for n ≥ 2,
which for n ≥ 2 is greater than that of the single-mode number state |n⟩ and may be seen as a consequence of the entanglement in the former.
Observe that |χ_2⟩ has the nonclassical distance 1 - e^-2 = δ(|1⟩|1⟩), as dictated by linear optics invariance: a 50:50 beam splitter maps |1⟩|1⟩ to |χ_2⟩ (the Hong–Ou–Mandel effect).
§.§ Attaining the Q-function lower bound
The above examples prompt the general question: “Which pure states |ψ⟩ saturate the lower bound (<ref>)?” The derivation of (<ref>) indicates that for a classical state σ to saturate it, two conditions must hold:– (a) We must have equality in (<ref>) and (b) we must have F^2|ψ⟩,σ = m|ψ⟩. The derivation of (<ref>) in turn shows that condition (a) holds only if |ψ⟩ is an eigenvector of σ <cit.>. Condition (b) holds only if σ is a mixture of coherent states that all (other than a set of probability zero) have the maximum possible overlap m|ψ⟩ with |ψ⟩. It can be verified that the examples considered so far satisfy these conditions.
An important class of states for which the lower bound cannot be saturated are the multimode Gaussian pure states <cit.>. Indeed, these have Gaussian Q functions that are maximized at exactly one phase-space point – the mean vector of the given state. Thus, condition (b) above cannot be satisfied for any pure nonclassical Gaussian state, and hence also the lower bound (<ref>). It is an open question if the upper bound in (<ref>) is saturated for these states.
§.§ Even and Odd Schrödinger cat states
Consider the even and odd superpositions of the single-mode coherent states ±β:
|ψ_±⟩ = (|β) ± |-β))/√(2N_±),
i.e., the even and odd coherent states introduced in ref. <cit.> (see also <cit.>). Linear optics invariance implies that in order to examine the nonclassical distance of these states, we can set the amplitude β >0 without loss of generality. The normalization constants are
N_± = 1 ± e^-2β^2.
Such states have been generated in microwave cavities <cit.>, in the motional degree of freedom of trapped ions <cit.>, in traveling optical beams <cit.> and other systems <cit.>. For large values of β, they are an example of the “Schrödinger cat” states <cit.> involving superpositions of macroscopic states, and are of great interest in studies of the quantum-classical divide <cit.>. The nonclassicality of the states (<ref>) has been studied from the point of view of Bures distance in <cit.>.
The Q functions of the states (<ref>) are:
Q_|ψ_+⟩(α) = sech(β^2) e^-|α|^2 |cosh(βα)|^2,
Q_|ψ_-⟩(α) = csch(β^2) e^-|α|^2 |sinh(βα)|^2.
Evaluating the bounds (<ref>) requires obtaining the maximum of these functions over α∈ℂ.
In finding the maximum of the Q function for the even coherent state, two cases arise. If β ≤ 1, the Q function has a single maximum at the origin, while if β > 1, there are two maxima of equal height at ±α_*, where 0 < α_* < β is the nonzero solution of β tanh(βα_*) = α_* (see inset to Fig. 1). For any value of β > 0, it can be shown that the maxima of the Q function of the odd coherent state occur on the real axis at ±α_* satisfying β coth(βα_*) = α_* (see inset to Fig. 2). In the limit β → 0, |ψ_-⟩ → |1⟩, the one-photon state, so that the Q function is maximized at any coherent state of the form |e^iθ) in that limit.
Consider the classical incoherent mixture
σ_β = 1/2 |β)(β| + 1/2 |-β)(-β|
of the states ±β. For any value of β, |ψ_±⟩ are eigenvectors of σ_β:
σ_β|ψ_±⟩ = N_±/2|ψ_±⟩,
so that we have the upper bounds
δ(|ψ_±⟩) ≤ D(|ψ_±⟩, σ_β) = (1 ∓ e^-2β^2)/2.
Similarly, the trace distance to the classical state
σ_α_* = 1/2 |α_*)(α_*| + 1/2 |-α_*)(-α_*|
yields an upper bound on the nonclassical distance of |ψ_±⟩.
For the even coherent state, the Q function-based bounds, together with the upper bounds from Eqs. (<ref>)-(<ref>) are shown in Fig. 1. The Q-function upper bound (<ref>) is tighter than the upper bound corresponding to σ_β for β≲ 0.7, while the latter coincides with the lower bound (<ref>) for all practical purposes if β≳ 1.2. Indeed, the maximizer α_* of the Q-function equals zero for β≤ 1, so that the lower bound cannot be tight in this regime, as explained in Section <ref>. For β >1, the two maximizers at ±α_* have α_* < β but approach ±β very rapidly (see inset to Fig. 1 – this was also noted in ref. <cit.>), so that the conditions in Section <ref> are very nearly satisfied. The upper bound (<ref>) shows that no matter how “macroscopic” the amplitude β gets, the nonclassical distance of |ψ_+⟩ approaches a maximum of 1/2. In comparison, the single-photon state has the greater nonclassical distance 1-e^-1≃ 0.6321.
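The maximizers and the even-cat bounds are straightforward to reproduce numerically; a sketch of ours using brentq for the transcendental conditions βtanh(βα_*) = α_* (even) and βcoth(βα_*) = α_* (odd):

import numpy as np
from scipy.optimize import brentq

def alpha_star_even(beta):
    # nonzero maximizer for beta > 1: solves beta*tanh(beta*a) = a on (0, beta)
    if beta <= 1.0:
        return 0.0
    return brentq(lambda a: beta * np.tanh(beta * a) - a, 1e-9, beta)

def alpha_star_odd(beta):
    # solves beta*coth(beta*a) = a; the root lies above beta
    return brentq(lambda a: beta / np.tanh(beta * a) - a, 1e-6, max(2.0 * beta, 5.0))

def m_even(beta):
    # sup of the even-cat Q function: sech(beta^2) e^{-a^2} cosh^2(beta*a) at a = alpha_*
    a = alpha_star_even(beta)
    return np.exp(-a ** 2) * np.cosh(beta * a) ** 2 / np.cosh(beta ** 2)

beta = 2.0
print(1 - m_even(beta),                     # Q-function lower bound on delta(psi_+)
      (1 - np.exp(-2 * beta ** 2)) / 2)     # upper bound from sigma_beta

For β = 2 the two numbers nearly coincide, consistent with the near-saturation visible in Fig. 1.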
For the odd coherent state, the Q function-based lower bound, together with the upper bounds from Eqs. (<ref>)-(<ref>) as well as the distance to the uniformly phase-randomized state ∘σ_α_*^2 are shown in Fig. 2. The Q-function upper bound (not shown) is looser than one of the bounds shown at all values of β. In the limit of β→ 0, the lower bound coincides with the distance to ∘σ_α_*^2, as it should since the state approaches the one-photon number state with nonclassical distance 1-e^-1. For β≳ 0.5, |ψ_-⟩ is closer to σ_α_* than to ∘σ_α_*^2. The upper bounds from σ_α_* and σ_β reach the asymptotic value 1/2 of the lower bound for all practical purposes for β≳ 1.5. As for the even coherent state, this is because the maximizer α_* →β for large β, leading to the conditions for saturation of the Q-function lower bound being very nearly satisfied. From the bounds presented here, we see that the nonclassical distance of the odd coherent state is bounded above by ∼ 0.65 for any value of β and appears to be a decreasing function of β.
Mixing the states (<ref>) with the vacuum at an η: 1-η beam splitter gives rise to a two-mode entangled coherent state <cit.>
|ψ'_±⟩ = (|√(η) β)|√(1-η) β) ± |-√(η) β)|-√(1-η) β))/√(2N_±).
If √(η) β≪ 1 and √(1-η) β≫ 1, the state exhibits micro-macro entanglement in the spirit of the Schrödinger-cat thought experiment and has also been realized experimentally <cit.>. Linear optics invariance dictates that the state (<ref>) has the same nonclassical distance as the original single-mode cat state regardless of the value of η. The degree of entanglement of entangled coherent states has been studied by several authors <cit.>. Since entanglement between the output modes of a beam splitter is closely related to nonclassicality at its input <cit.>, the quantitative relations between nonclassical distance of the input state and the entanglement entropy at the output merit further investigation.
§.§ Mixture of vacuum and number state
As a final example, consider the state
ρ = (1-η)|0⟩⟨0| + η|n⟩⟨n|
that mixes a vacuum component with a number-state component with n ≥ 1, and is nonclassical for any η>0 because it has zero probability of counting m photons for m ≠ 0,n. The nonclassicality of the n=1 case was extensively studied in <cit.>, while the nonclassical depth of this state was considered in <cit.>. Convexity of the nonclassical distance gives the upper bound
δ(ρ) ≤η δ(|n⟩) = η (1-γ_n).
The lower bound (<ref>) is hard to compute, but since D(ρ, |n⟩) = 1 -η, we can use (<ref>) to get
η - γ_n ≤δ(ρ),
which is useful if η > γ_n. These upper and lower bounds are shown in Fig. 3.
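Tabulating the two bounds is immediate; a sketch of ours for n = 3:

import numpy as np
from math import factorial

n = 3
gamma = np.exp(-n) * n ** n / factorial(n)
for eta in (0.25, 0.5, 0.75, 1.0):
    lower = max(0.0, eta - gamma)   # from the triangle inequality, useful for eta > gamma_n
    upper = eta * (1 - gamma)       # from convexity
    print(eta, lower, upper)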
Using an argument similar to that in Sec. <ref>, we can say something about the form of a classical state σ that satisfies δ(ρ) = D(ρ,σ). Consider the quantum channel M:S → S corresponding to making an ideal measurement in the number basis that maps a state τ into
Mτ := ∑_n=0^∞|n⟩⟨n|τ|n⟩⟨n|
= 1/2π∫_0^2π dθ e^-iθ a^† a τ e^iθ a^† a.
The last equation shows that M can be implemented as a randomized phase shift over [0,2π] and is hence classicality-preserving. We also have Mρ =ρ. Therefore, for any σ∈ S_ cl,
D(ρ,σ) ≥ D(Mρ,Mσ) = D(ρ, Mσ) with Mσ∈ S_ cl. Therefore, it suffices to restrict the optimization to all classical states diagonal in the number basis, i.e., those with a circularly symmetric P function. However, this latter optimization appears to be non-trivial and may require a numerical approach.
§ DISCUSSION AND OUTLOOK
We have revisited the nonclassical distance defined in <cit.> in a multimode setting, studied its properties, developed new bounds on it, and elucidated the cases for which our Husimi-function lower bound is tight for pure states. The number states, multimode single-photon states, and multimode N00N states constitute, to the best of our knowledge, the first examples for which the nonclassical distance has been calculated exactly. Further work is needed to verify if our lower or upper bounds are tight for other important nonclassical states. Our Husimi-function lower bound (<ref>) can be used to show that the nonclassical distance of the one-mode (two-mode) squeezed vacuum states can be made arbitrarily close to unity by increasing the degree of squeezing (entanglement). However, as mentioned in Sec. <ref>, the lower bound cannot be tight for pure multimode Gaussian states. In view of the practical importance of these states, it would be useful to get good estimates of their nonclassical distance.
In ref. <cit.>, upper bounds on the nonclassical distance of a state in terms of its total noise <cit.> or average energy were derived. In view of the interplay between the photon number and the number of modes in determining the nonclassical distance for some of our examples, it would be interesting to seek upper bounds on the nonclassical distance of a state in terms of its total average energy and the number of modes M.
The generation of large-amplitude optical Schrödinger-cat states is of great interest from the viewpoints of both fundamental physics and applications such as optical quantum computation, with states of increasing amplitudes being generated in recent years <cit.>. Since the generation of large-amplitude coherent-state superpositions appears to be challenging, it is somewhat surprising that the nonclassical distance of the cat states is bounded away from unity regardless of the superposition amplitude. Further study is required to see if this indicates that alternative preparation strategies for large-amplitude cat states exist.
The attenuator and additive Gaussian noise channels preserve classicality and degrade nonclassicality <cit.>. In view of their ubiquity, it would be very useful to study quantitatively how the nonclassical distance degrades at the output of such channels.
§ ACKNOWLEDGEMENTS
I thank Mark Hillery, Krishna Kumar Sabapathy, and an anonymous referee for useful comments. This work is supported by the Singapore National Research Foundation under NRF Grant No. NRF-NRFF2011-07 and the Singapore Ministry of Education Academic Research Fund Tier 1 Project R-263-000-C06-112.
[Sud63] E. C. G. Sudarshan, Equivalence of semiclassical and quantum mechanical descriptions of statistical light beams, Phys. Rev. Lett. 10, 277–279 (1963). doi:10.1103/PhysRevLett.10.277
[Gla63] R. J. Glauber, Coherent and incoherent states of the radiation field, Phys. Rev. 131, 2766–2788 (1963). doi:10.1103/PhysRev.131.2766
[MW95] L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University Press, Cambridge, 1995).
[Dod02] V. V. Dodonov, `Nonclassical' states in quantum optics: A `squeezed' review of the first 75 years, J. Opt. B: Quantum Semiclass. Opt. 4, R1 (2002). doi:10.1088/1464-4266/4/1/201
[Agr12Quantum] G. S. Agarwal, Quantum Optics (Cambridge University Press, 2012).
[RV02] T. Richter and W. Vogel, Nonclassicality of quantum states: A hierarchy of observable conditions, Phys. Rev. Lett. 89, 283601 (2002). doi:10.1103/PhysRevLett.89.283601
[MBW+10] A. Miranowicz, M. Bartkowiak, X. Wang, Y.-x. Liu, and F. Nori, Testing nonclassicality in multimode fields: A unified derivation of classical inequalities, Phys. Rev. A 82, 013824 (2010). doi:10.1103/PhysRevA.82.013824
[SCE+11] J. Solomon Ivan, S. Chaturvedi, E. Ercolessi, G. Marmo, G. Morandi, N. Mukunda, and R. Simon, Entanglement and nonclassicality for multimode radiation-field states, Phys. Rev. A 83, 032118 (2011). doi:10.1103/PhysRevA.83.032118
[RSA+15] S. Ryl, J. Sperling, E. Agudelo, M. Mraz, S. Köhnke, B. Hage, and W. Vogel, Unified nonclassicality criteria, Phys. Rev. A 92, 011801 (2015). doi:10.1103/PhysRevA.92.011801
[Hil87] M. Hillery, Nonclassical distance in quantum optics, Phys. Rev. A 35, 725–732 (1987). doi:10.1103/PhysRevA.35.725
[Hil89] M. Hillery, Total noise and nonclassical states, Phys. Rev. A 39, 2994–3002 (1989). doi:10.1103/PhysRevA.39.2994
[Lee91] C. T. Lee, Measure of the nonclassicality of nonclassical states, Phys. Rev. A 44, R2775–R2778 (1991). doi:10.1103/PhysRevA.44.R2775
[Lee92] C. T. Lee, Moments of P functions and nonclassical depths of quantum states, Phys. Rev. A 45, 6586–6595 (1992). doi:10.1103/PhysRevA.45.6586
[LB95] N. Lütkenhaus and S. M. Barnett, Nonclassical effects in phase space, Phys. Rev. A 51, 3340–3342 (1995). doi:10.1103/PhysRevA.51.3340
[WPG+12] C. Weedbrook, S. Pirandola, R. García-Patrón, N. J. Cerf, T. C. Ralph, J. H. Shapiro, and S. Lloyd, Gaussian quantum information, Rev. Mod. Phys. 84, 621–669 (2012). doi:10.1103/RevModPhys.84.621
[Sab16] K. K. Sabapathy, Process output nonclassicality and nonclassicality depth of quantum-optical channels, Phys. Rev. A 93, 042103 (2016). doi:10.1103/PhysRevA.93.042103
[DMM+00] V. V. Dodonov, O. V. Man'ko, V. I. Man'ko, and A. Wünsche, Hilbert-Schmidt distance and non-classicality of states in quantum optics, J. Mod. Opt. 47, 633–654 (2000). doi:10.1080/09500340008233385
[MMS02] P. Marian, T. A. Marian, and H. Scutaru, Quantifying nonclassicality of one-mode Gaussian states of the radiation field, Phys. Rev. Lett. 88, 153601 (2002). doi:10.1103/PhysRevLett.88.153601
[Bur69] D. Bures, An extension of Kakutani's theorem on infinite product measures to the tensor product of semifinite w*-algebras, Trans. Am. Math. Soc. 135, 199–212 (1969). doi:10.2307/1995012
[NC00] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, 2000).
[MB03] J. M. C. Malbouisson and B. Baseia, On the measure of nonclassicality of field states, Phys. Scr. 67, 93 (2003).
[MMS04] P. Marian, T. A. Marian, and H. Scutaru, Distinguishability and nonclassicality of one-mode Gaussian states, Phys. Rev. A 69, 022104 (2004). doi:10.1103/PhysRevA.69.022104
[Bos17] S. Bose, Unified quantification of nonclassicality and non-Gaussianity: An entropic approach, arXiv:1701.00064 [quant-ph] (2017).
[ACR05] J. K. Asbóth, J. Calsamiglia, and H. Ritsch, Computable measure of nonclassicality for light, Phys. Rev. Lett. 94, 173602 (2005). doi:10.1103/PhysRevLett.94.173602
[GSV12] C. Gehrke, J. Sperling, and W. Vogel, Quantification of nonclassicality, Phys. Rev. A 86, 052118 (2012). doi:10.1103/PhysRevA.86.052118
[VS14] W. Vogel and J. Sperling, Unified quantification of nonclassicality and entanglement, Phys. Rev. A 89, 052302 (2014). doi:10.1103/PhysRevA.89.052302
[RSV17] S. Ryl, J. Sperling, and W. Vogel, Quantifying nonclassicality by characteristic functions, Phys. Rev. A 95, 053825 (2017). doi:10.1103/PhysRevA.95.053825
[Hol12] A. S. Holevo, Quantum Systems, Channels, Information (De Gruyter, Berlin, 2012).
[SVL06] A. A. Semenov, D. Yu. Vasylyev, and B. I. Lev, Nonclassicality of noisy quantum states, J. Phys. B: At. Mol. Opt. Phys. 39, 905 (2006).
[YN09] H. P. Yuen and R. Nair, Classicalization of nonclassical quantum states in loss and noise: Some no-go theorems, Phys. Rev. A 80, 023816 (2009). doi:10.1103/PhysRevA.80.023816
[SIS11] K. K. Sabapathy, J. Solomon Ivan, and R. Simon, Robustness of non-Gaussian entanglement against noisy amplifier and attenuator environments, Phys. Rev. Lett. 107, 130501 (2011). doi:10.1103/PhysRevLett.107.130501
[Hel76] C. W. Helstrom, Quantum Detection and Estimation Theory (Academic Press, New York, 1976).
[Hol73] A. S. Holevo, Statistical decision theory for quantum systems, J. Multivariate Anal. 3, 337–394 (1973). doi:10.1016/0047-259X(73)90028-6
[BL86] A. Bach and U. Lüxmann-Ellinghaus, The simplex structure of the classical states of the quantum harmonic oscillator, Commun. Math. Phys. 107, 553–560 (1986). doi:10.1007/BF01205485
[CH99] R. Clifton and H. Halvorson, Bipartite-mixed-states of infinite-dimensional systems are generically nonseparable, Phys. Rev. A 61, 012108 (1999). doi:10.1103/PhysRevA.61.012108
[R-KKV+13] S. Rahimi-Keshari, T. Kiesel, W. Vogel, S. Grandi, A. Zavatta, and M. Bellini, Quantum process nonclassicality, Phys. Rev. Lett. 110, 160401 (2013). doi:10.1103/PhysRevLett.110.160401
[AM99] Arvind and N. Mukunda, Bell's inequalities, multiphoton states and phase space distributions, Phys. Lett. A 259, 421–426 (1999). doi:10.1016/S0375-9601(99)00471-5
[Wan02] X.-B. Wang, Theorem for the beam-splitter entangler, Phys. Rev. A 66, 024303 (2002). doi:10.1103/PhysRevA.66.024303
[Hus40] K. Husimi, Some formal properties of the density matrix, Proc. Phys. Math. Soc. Jpn. 22, 264 (1940). doi:10.11429/ppmsj1919.22.4_264
[Kan65] Y. Kano, A new phase-space distribution function in the statistical theory of the electromagnetic field, J. Math. Phys. 6, 1913–1915 (1965). doi:10.1063/1.1704739
[MS65] C. L. Mehta and E. C. G. Sudarshan, Relation between quantum and semiclassical description of optical coherence, Phys. Rev. 138, B274–B280 (1965). doi:10.1103/PhysRev.138.B274
[CG69b] K. E. Cahill and R. J. Glauber, Density operators and quasiprobability distributions, Phys. Rev. 177, 1882–1902 (1969). doi:10.1103/PhysRev.177.1882
[Leo97measuring] U. Leonhardt, Measuring the Quantum State of Light (Cambridge University Press, Cambridge, 1997).
[FvdG99] C. A. Fuchs and J. van de Graaf, Cryptographic distinguishability measures for quantum-mechanical states, IEEE Trans. Inf. Theory 45, 1216–1227 (1999). doi:10.1109/18.761271
Transactions on Information Theory volume 45, pages 1216 –1227 (year 1999)NoStop
[Audenaert and Mosonyi(2014)]AM14
author author Koenraad M. R. Audenaert and author Milán Mosonyi, title title
Upper bounds on the error probabilities and asymptotic error exponents in
quantum multiple state discrimination, http://scitation.aip.org/content/aip/journal/jmp/55/10/10.1063/1.4898559
journal journal Journal of Mathematical Physics volume 55, eid 102201 (year
2014)NoStop
[Horn and Johnson(2012)]HJ12Matrix
author author Roger A Horn and author Charles R Johnson, @noop title Matrix Analysis (publisher Cambridge University Press, year
2012)NoStop
[Dowling(2008)]Dow09
author author Jonathan P. Dowling, title title
Quantum optical metrology - the lowdown on high-N00N states, 10.1080/00107510802091298 journal journal Contemporary Physics volume 49, pages 125–14 (year 2008)NoStop
[Dodonov et al.(1974)Dodonov, Malkin, and Man'ko]DDM74
author author V.V. Dodonov, author I.A. Malkin,
and author V.I. Man'ko, title title Even and odd coherent states
and excitations of a singular oscillator, http://dx.doi.org/10.1016/0031-8914(74)90215-8 journal
journal Physica volume 72, pages 597 – 615 (year 1974)NoStop
[Gerry(1993)]Ger93
author author Christopher C. Gerry, title title
Non-classical properties of even and odd coherent states, 10.1080/09500349314551131 journal journal Journal of Modern Optics volume 40, pages 1053–1071 (year 1993)NoStop
[Haroche(2013)]Har13
author author Serge Haroche, title title Nobel lecture:
Controlling photons in a box and exploring the quantum to classical
boundary, 10.1103/RevModPhys.85.1083 journal journal Rev. Mod. Phys. volume
85, pages 1083–1102 (year 2013)NoStop
[Wineland(2013)]Win13
author author David J. Wineland, title title
Nobel lecture: Superposition, entanglement, and raising 0Schrödinger's
cat, 10.1103/RevModPhys.85.1103 journal
journal Rev. Mod. Phys. volume 85, pages 1103–1114 (year 2013)NoStop
[Ourjoumtsev et al.(2007)Ourjoumtsev, Jeong, Tualle-Brouri, and Grangier]OJT+07
author author Alexei Ourjoumtsev, author Hyunseok Jeong, author Rosa Tualle-Brouri, and author Philippe Grangier, title title
Generation of optical Schrödinger cats from photon number states, 10.1038/nature06054 journal journal Nature volume 448, pages
784–786 (year 2007)NoStop
[Lvovsky et al.(2013)Lvovsky, Ghobadi, Chandra, Prasad, and Simon]LGC+13
author author A. I. Lvovsky, author R. Ghobadi,
author A. Chandra, author A. S. Prasad, and author C. Simon, title
title Observation of micro-macro entanglement of
light, 10.1038/NPHYS2682 journal journal Nature Physics volume 9, pages 541–544 (year 2013)NoStop
[Huang et al.(2015)Huang,
Le Jeannic, Ruaudel, Verma,
Shaw, Marsili, Nam,
Wu, Zeng, Jeong,
Filip, Morin, and Laurat]HLJ+15
author author K. Huang, author H. Le Jeannic,
author J. Ruaudel, author V. B. Verma, author
M. D. Shaw, author
F. Marsili, author S. W. Nam, author E Wu, author H. Zeng, author Y.-C. Jeong,
author R. Filip, author O. Morin, and author
J. Laurat, title title Optical synthesis of large-amplitude squeezed
coherent-state superpositions with minimal resources, 10.1103/PhysRevLett.115.023602 journal journal
Phys. Rev. Lett. volume 115, pages
023602 (year 2015)NoStop
[Arndt and Hornberger(2014)]AH14
author author Markus Arndt and author Klaus Hornberger, title title Testing the
limits of quantum mechanical superpositions, doi:10.1038/nphys2863 journal journal Nature
Physics volume 10, pages 271–277
(year 2014)NoStop
[Schrödinger(1935)]Sch35
author author E. Schrödinger, title title Die
gegenwärtige Situation in der Quantenmechanik, 10.1007/BF01491914 journal journal
Naturwissenschaften volume 23, pages
823–828 (year 1935)NoStop
[Leggett(2002)]Leg02
author author A J Leggett, title title Testing the
limits of quantum mechanics: motivation, state of play, prospects, http://stacks.iop.org/0953-8984/14/i=15/a=201 journal
journal Journal of Physics: Condensed Matter volume 14, pages R415 (year
2002)NoStop
[Sanders(2012)]San12
author author Barry C Sanders, title title Review of
entangled coherent states, http://stacks.iop.org/1751-8121/45/i=24/a=244002 journal
journal Journal of Physics A: Mathematical and Theoretical volume 45, pages 244002 (year 2012)NoStop
[Hirota et al.(2001)Hirota, van Enk, Nakamura,
Sohma, and Kato]HvEN+01
author author O. Hirota, author S. J. van
Enk, author K. Nakamura,
author M. Sohma, and author K. Kato, title title Entangled Nonorthogonal States and
Their Decoherence Properties, @noop journal
journal eprint arXiv:quant-ph/0101096 (year
2001), http://arxiv.org/abs/quant-ph/0101096 quant-ph/0101096
NoStop
[van Enk and Hirota(2001)]vEH01
author author S. J. van Enk and author O. Hirota, title title Entangled
coherent states: Teleportation and decoherence, 10.1103/PhysRevA.64.022313 journal journal Phys.
Rev. A volume 64, pages 022313
(year 2001)NoStop
[Horoshko et al.(2016)Horoshko, De Bièvre, Kolobov, and Patera]HDBK+16
author author D. B. Horoshko, author S. De Bièvre, author M. I. Kolobov, and author G. Patera, title title Entanglement of
quantum circular states of light, 10.1103/PhysRevA.93.062323 journal journal Phys.
Rev. A volume 93, pages 062323
(year 2016)NoStop
[Jiang et al.(2013)Jiang,
Lang, and Caves]JLC13
author author Zhang Jiang, author Matthias D. Lang, and author Carlton M. Caves, title title Mixing
nonclassical pure states in a linear-optical network almost always generates
modal entanglement, 10.1103/PhysRevA.88.044301
journal journal Phys. Rev. A volume 88, pages 044301 (year
2013)NoStop
[Lvovsky and Shapiro(2002)]LS02
author author A. I. Lvovsky and author J. H. Shapiro, title title Nonclassical
character of statistical mixtures of the single-photon and vacuum optical
states, 10.1103/PhysRevA.65.033830 journal
journal Phys. Rev. A volume 65, pages 033830 (year 2002)NoStop
[Schumaker(1986)]Sch86
author author Bonny L. Schumaker, title title
Quantum mechanical pure states with Gaussian wave functions, http://dx.doi.org/10.1016/0370-1573(86)90179-1 journal journal Physics Reports volume
135, pages 317 – 408 (year 1986)NoStop
|
http://arxiv.org/abs/1701.07438v1 | 20170125190002 | Characterization and modeling of contamination for Lyman break galaxy samples at high redshift | ["Benedetta Vulcani", "Michele Trenti", "Valentina Calvi", "Rychard Bouwens", "Pascal Oesch", "Massimo Stiavelli", "Marijn Franx"] | astro-ph.GA | ["astro-ph.GA"] |
1 School of Physics, Tin Alley, University of Melbourne, VIC 3010, Australia
2 Space Telescope Science Institute, Baltimore,
MD, 21218, USA
3Leiden Observatory, Leiden University, NL-2300 RA Leiden, Netherlands
4UCO/Lick Observatory, University of California, Santa Cruz, CA 95064, USA
5Yale Center for Astronomy and Astrophysics,
Yale University, New Haven, CT 06511, USA
The selection of high redshift sources from broad-band photometry
using the Lyman-break galaxy (LBG) technique is a well established methodology,
but the characterization of its contamination for the faintest
sources is still incomplete. We use the optical and near-IR
data from four (ultra)deep Hubble Space Telescope legacy fields to
investigate the contamination fraction of LBG samples at
z∼5-8 selected using a colour-colour method. Our approach is based on characterizing the number count
distribution of interloper sources, that is galaxies with colors
similar to those of LBGs, but showing detection at wavelengths
shorter than the spectral break. Without sufficient sensitivity at bluer wavelengths, a subset of interlopers may not be properly classified and will contaminate the LBG selection.
The number counts of interlopers on the sky become steeper with increasing redshift of the LBG selection. Since the
intrinsic number of dropouts decreases significantly with increasing
redshift, this implies increasing contamination from misclassified
interlopers with increasing redshift, primarily by intermediate
redshift sources with unremarkable properties (intermediate ages,
lack of ongoing star formation and low/moderate dust content). Using
Monte Carlo simulations, we estimate that the CANDELS deep data have
contamination induced by photometric scatter increasing from ∼2%
at z∼ 5 to ∼ 6% at z∼ 8 for a typical dropout color
≥ 1 mag, with contamination naturally decreasing for a more stringent
dropout selection. Contaminants are expected to be located
preferentially near the detection limit of surveys,
ranging from 0.1 to 0.4 contaminants per arcmin^2 at H_160=30,
depending on the field considered.
This analysis suggests that the impact of
contamination in future studies of z > 10 galaxies needs to be
carefully considered.
§ INTRODUCTION
The Lyman-break technique, first proposed by <cit.>,
transformed the identification of reliable samples of galaxy
candidates at high-redshift from broad-band imaging, and it is now
routinely used to study galaxy formation and evolution as early as
500 Myr after the Big Bang, at redshift z∼ 10 (e.g., see <cit.>). While one could consider selecting high redshift samples based on the best-fit
photometric redshift or redshift likelihood contours
<cit.>, Lyman break selection
procedures utilising cuts in colour space can be simpler to apply and offer a slight advantage in terms of operational transparency.
This makes such a selection procedure easier to reproduce by both theorists and observers,
as follow-up studies by <cit.>
show.
The idea of the method rests on the
identification of the strong spectral break introduced by neutral
hydrogen atoms along the line of sight at wavelengths shorter than
Lyman-α (1216 Å rest-frame).[Note that there is a
further suppression of the flux in the region across the 912 Å
rest-frame Lyman-continuum discontinuity, but in practice for
galaxies at z≳ 6 the non-detection starts at
λ≤ 1216 Å rest-frame.] Thus, to identify probable
sources at high redshift with high confidence, the Lyman-break
selection typically resorts to three crucial ingredients: (1) color
information from two adjacent passbands to locate the wavelength
location and measure the amplitude of the break, (2) color information
red-ward of the break to characterize the intrinsic color of the
source, and (3) evidence that sources have no flux blueward of the
break.
Many studies have used different color selections, also depending on the availability
of the photometric bands <cit.>,
showing that different choices can still lead to comparable results, which attests to the robustness of the method.
The Lyman-break technique has been applied very successfully to build large
samples of galaxies, especially from Hubble Space Telescope (HST)
imaging (e.g. more than 10000 sources identified at
3.5 ≲ z ≲ 11 from HST legacy fields to date; see <cit.>). Also, substantial spectroscopic follow-up work
has shown that samples are generally reliable, and that contamination from
sources with similar colors but different redshift is generally under
control <cit.>. Nonetheless, photometrically
defined samples are intrinsically affected by contamination. While
this possibility is universally acknowledged in the literature and
specific studies estimate the contamination rate of the samples
presented (e.g. ),
surprisingly few studies have been devoted to a detailed
characterization of the contamination rate and of its dependence on
survey parameters and redshift of the galaxy population. Potential
classes of contaminants that have been identified include stellar sources, low
redshift galaxies with prominent 4000 Å/Balmer breaks and dust,
extreme emission line galaxies, time-variable sources such as Supernovae,
with the first two classes of objects representing the major risks
<cit.>.
Dwarf stars have similar colors to high-redshift galaxies because of
their low surface temperatures, and can thus enter dropout samples,
especially at z≥ 7 in data that lack sufficient angular resolution to
discriminate point sources from extended light profiles
<cit.>.
At these redshifts, very low temperature stars (spectral types M, L, T and Y) are intrinsically faint sources, whose interrupted continua may show large breaks across narrow wavelength ranges, or whose flux peaks in narrow spectral regions.
While deep medium-band observations are efficient in identifying these
stellar contaminants in seeing-limited ground-based observations <cit.>, HST imaging is generally effective in
identifying stellar objects that are detected at signal-to-noise S/N≳ 10
<cit.>. In addition, we note that at z>9, the contamination
from stars is negligible, since there are essentially no observed stars with spectral energy distributions (SEDs) that peak
at >1.4μ m and are undetectable in the optical for typical HST
surveys <cit.>.
The main source of contamination for space-based surveys is thus that
of low/intermediate redshift galaxies that have a deep break around
4000 Å rest-frame. The nature of these contaminants has not been
investigated in detail, but likely they are low-mass, moderate-age,
Balmer break galaxies at z∼ 1-3 <cit.>, possibly
with strong emission lines that contribute, or even dominate, the flux
redward of the spectral break <cit.>. To
effectively discriminate between the high-z Lyman-break and the 4000 Å/Balmer
break, <cit.> recommend using a set of non-overlapping,
but adjacent, filters, so that a clear color cut can be imposed on the
selection. Another key requirement to build a clean sample is the
availability of very deep observations blueward of the spectral break,
to distinguish between a true non-detection for an high-z object, and
a faint continuum for an interloper <cit.>.
The goal of this paper is to focus on this class of intermediate
redshift interlopers, and to empirically quantify their impact on
high-z Lyman-break galaxy (LBG) samples selected via a colour-cut method and characterize how their incidence
varies with depth and adopted selection cut.
For this, we resort to the optical and
near-infrared imaging of the GOODS-South deep and GOODS-North wide fields observed by the CANDELS program <cit.>, and of the XDF <cit.> and HUDF09-2 <cit.> fields. These datasets provide us with high-quality
multi-wavelength observations over different
areas of the sky (from ∼ 4.7 arcmin^2 to ∼ 64.5 arcmin^2). Specifically, we focus on LBG samples from
z∼5 to z∼8, and investigate the population of galaxies that
satisfy the color-color requirements to be included in the LBG
selection based on imaging at wavelengths starting from the spectral
break, but show a clear detection in bluer filters. We define this
class of objects as interlopers, and characterize (1) their surface
density in the sky depending on luminosity and on the redshift of the
dropout selection; (2) the likelihood that fainter counterparts of the
known population of interlopers enter a LBG sample because of lack of
sufficiently deep imaging in the blue. We define this population as contaminants.
The results of our analysis, based on some of the deepest Hubble
observations available, have multiple applications. In particular,
they find applications to the estimation of the contamination rate of
other surveys, which may lack the multi-wavelength, multi-observatory
coverage, such as random pointings
and/or parallel observations (e.g. see <cit.>). Another important application includes the planning and optimization of future observations (e.g., see <cit.> for JWST and WFIRST surveys at high-z).
This paper is organized as
follows: in Section <ref> we introduce our dataset and
construct the samples of dropouts and
interlopers. In Section <ref> we
analyze and discuss the properties of the contaminants and the
expected impact on LBG samples. In Section <ref> we discuss
how results depend on the selection criteria. We summarize and conclude in Section
<ref>. Throughout the paper, we assume
Ω_0 = 0.3, Ω_Λ = 0.7, and
H_0 = 70 km s^-1 Mpc^-1. All magnitudes are in the AB
system <cit.>.
§ DATASET AND SAMPLE SELECTION
We base our analysis on four different samples, in order to test how results change
with the field used for selection. We use the CANDELS/GOODS South deep (GSd) and
CANDELS/GOODS North wide (GNw) imaging
<cit.>, the entire XDF data set <cit.> and the HUDF09-2
<cit.>.
A summary of all the data sets used in the present study is provided in Table <ref>, along with the covered area and the 5σ depths. The latter
are drawn from <cit.> and are based on the median
uncertainties in the total fluxes (after correction to total), as
found for the faintest 20% of sources in the catalog.
As discussed by <cit.>, these depths reflect the actual sensitivity achieved in science images, as established through artificial source recovery simulations <cit.>.
We exploit the data reduction and source catalog derived by
<cit.>. Data were processed using the ACS GTO pipeline
APSIS <cit.> and the WFC3/IR pipeline WFC3RED.PY
<cit.>, with final science imaging drizzled to a
0.03''-pixel scale. The photometric catalog has been
constructed using SourceExtractor <cit.> after
PSF-matching imaging to the F160W filter. Multi-band photometric
information is available in the following optical bands: F435W, F606W, F775W, F814W, F850LP (hereafter B_435, V_606, i_775, I_814, z_850, respectively), as well as in the following near-IR bands: F098M, F105W, F125W, F140W, F160W (hereafter Y_098, Y_105, J_125, JH_140, H_160, respectively). Complete details on data analysis and catalog
construction can be found in <cit.>.
To ensure robust results, we limit our analysis to sources with
detection in the combined J_125+H_160 bands at high signal-to-noise ratio [S/N(JH_det) > 6], defined as:
S/N = FLUX/FLUXERR,
<cit.> where FLUX and FLUXERR are the isophotal flux and its associated
error in the combined detection band, which we indicate as
JH_det.[Note this is distinct from the F140W image,
indicated as JH_140.]
We note that with an even higher S/N limit [S/N(JH_det) > 8] the samples would be even purer, but to the detriment of sample statistics.[Within uncertainties, applying a more stringent S/N cut yields the same results; we thus opted for S/N>6 to include a larger number of objects in the analysis.]
In addition, with the goal of focusing on contamination from
extended sources, we remove star-like sources, that is, all sources with CLASS_STAR>0.85 measured from the detection image [where SourceExtractor assigns CLASS_STAR=0 to (very) extended sources and CLASS_STAR=1 to point sources].
We then proceed to select LBG sources at high redshift (or interlopers with similar colors at low redshift).
We apply a color-cut selection that is as uniform as possible across samples with different median redshifts, to ensure consistency in the analysis. The adopted criteria can be summarized as follows.
For z∼5 candidates
V_606-i_775 >1.0
z_850-H_160 <1.3
V_606-i_775 >0.75(z_850-H_160)+1.0
For z∼6 candidates
i_775-z_850 > 1.0
Y_105-H_160 < 1.0
i_775-z_850 > 0.75(Y_105-H_160) + 1.0
For z∼7 candidates
z_850 - Y_105 > 1.0
J_125 - H_160 < 0.45
z_850 - Y_105 > 0.75(J_125 - H_160) +1.0
For z∼8 candidates
Y_105 - J_125 >1.0
J_125 - H_160 <0.5
Y_105 - J_125 >0.75(J_125 - H_160)+1.0.
The color-color selection criteria listed above are not sufficient to
construct a sample of galaxies that are confidently at z≳ 5
because intermediate redshift galaxies with a prominent spectral break
such as the 4000 Å break may also fall in the color-color selection
regions typical of LBGs at higher redshift.
Following the standard practice, we use the photometry in the bands
bluer than the putative Lyman break to separate high-z sources,
which in the following we indicate as dropouts, from
lower-redshift galaxies, which we label as
interlopers. Specifically,
z∼5 dropouts (named V_606-dropouts) are selected as sources with S/N(B_435)<2, z∼6 dropouts (named i_775-dropouts) with S/N(B_435)<2 and S/N(V_606)<2, and z∼7 dropouts (named z_850-dropouts) and z∼8 dropouts (named Y_105-dropouts) with S/N(x)<2 and χ^2_opt <3, where χ^2_opt is defined as
χ^2_opt=∑_x[sgn(FLUX_x)·(FLUX_x/FLUXERR_x)^2]
In the equation, FLUX_x is the isophotal flux measured in a given band, FLUXERR_x is the uncertainty associated with that flux, and x runs over the B_435, V_606, and i_775 bands for z_850-dropouts and over the B_435, V_606, i_775, and I_814 bands for Y_105-dropouts <cit.>.
In addition, following <cit.>, z_850-dropouts are selected as sources with S/N(I_814)<2, but I_814 is not used for computing χ^2_opt.
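As an illustration of how these cuts combine, a minimal sketch of the z∼8 classification is given below (in Python). The function names and the way fluxes are passed in are our own schematic choices rather than the actual pipeline; real catalogs would supply the SourceExtractor isophotal fluxes.

    import numpy as np

    def chi2_opt(fluxes, errors):
        # Optical chi^2 of the equation above: sum over the blue bands of
        # sgn(FLUX_x) * (FLUX_x / FLUXERR_x)^2.
        fluxes, errors = np.asarray(fluxes), np.asarray(errors)
        return np.sum(np.sign(fluxes) * (fluxes / errors) ** 2)

    def classify_z8(y105, j125, h160, blue_fluxes, blue_errors, sn_det):
        # Classify one source in the z~8 (Y105-dropout) selection.
        # Magnitudes are AB; blue_fluxes/blue_errors are isophotal fluxes
        # and uncertainties in the bands blueward of the break. Returns
        # 'dropout', 'interloper', or None (outside the color box).
        in_box = ((y105 - j125 > 1.0) and (j125 - h160 < 0.5) and
                  (y105 - j125 > 0.75 * (j125 - h160) + 1.0))
        if not in_box or sn_det <= 6:
            return None
        blue_sn = np.asarray(blue_fluxes) / np.asarray(blue_errors)
        if np.all(blue_sn < 2) and chi2_opt(blue_fluxes, blue_errors) < 3:
            return 'dropout'
        return 'interloper'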
Finally, if a dropout satisfies more than one dropout selection, we
assign it to the highest redshift sample. This additional cut removes
only a few percent of the sources (for example, in the GSd dataset,
we identified 38 cases out of 870 dropouts). In contrast, we do not apply this
restriction to interlopers, which thus may enter multiple selections.
On average, at most 2 interlopers appear in two selections, and none appears at
the same time in all the samples.
Finally, we highlight that the dropout sample may in general contain a
residual (small) fraction of low-z galaxies that have not been
identified through the photometric analysis, because of lack of
sufficiently deep imaging in the blue. Hereafter, we call them
contaminants. Interlopers and contaminants are the focus of our
investigation.
Note that the separation between dropouts and
contaminants for sources with a low χ^2_opt is arbitrary to a
certain extent; for example, <cit.> impose a cut at
S/N<2, while the Brightest of Reionizing Galaxies survey (BoRG,
) resorts to a more conservative threshold of
S/N<1.5 in the bluest bands. Obviously, more conservative cuts
entail the exclusion of a higher number of real high-z sources from the selections,
thus different investigators may decide to prioritize differently sample purity versus selection completeness.
§ RESULTS
In this section we present and discuss the results obtained separately for the four fields we analyze (GSd, GNw, XDF, HUDF09-2). However, we only show plots for GSd, to avoid unnecessary repetition.
§.§ Numbers and redshift distribution of dropouts and interlopers
The color-color selection of dropouts and interlopers is shown in
Figure <ref> for samples of V_606-, i_775-, z_850-, and Y_105-dropout
sources drawn from the GSd field, with discrimination between the two classes based on the
S/N and
optical χ^2 (Equation <ref>).
We note that
photometric scatter is likely to play a
significant role in the selection of faint objects. Indeed, more than half of the 1σ error bars for the interlopers
intersect at least one boundary of the color-color selection box. Therefore, to
carry out a more comprehensive analysis, we enlarge the color-color
selection box by 0.2 mag, to check for both candidate high-z LBGs
and interlopers that slightly fail to meet the adopted selection
criteria (see also for an alternative approach based
on assigning a probability that a source belongs to the color-color
selection). In the following, we will call original selection
the one given in <ref> (dotted line in Fig.
<ref>), and enlarged selection the one
introduced in this section (dash-dotted line in Fig.
<ref>).
The most striking feature of Figure <ref> is the
relative weight of interlopers vs. dropouts, which is quantified in
Table <ref> for all the fields considered. We first focus on the GSd field.
At lower redshift
(z∼ 5), dropouts dominate the sample within the original selection,
while the opposite situation is present at higher redshift
(z∼ 8), when the interloper fraction is much higher.
We stress that this percentage does not give the level of contamination in our dropout sample, since the availability of sufficiently deep data at bluer wavelengths allows us to identify the interlopers.
Interestingly, the situation remains qualitatively similar
with our enlarged selection, although as expected the enlarged samples
contain a larger fraction of interlopers. If we adopted
a more conservative S/N in the sample selection (S/N<1.5), percentages
of dropouts would be systematically smaller, but comparable within 2σ
uncertainty.
The observed behaviour is mainly due to the fact that the population of dropouts steadily decreases in number for the higher redshift selections. This is primarily determined by the
evolution of the luminosity function, which decreases significantly
from z∼ 5 to z∼ 8 at all luminosities.
In contrast, the number
of interlopers in the sky remains approximately constant over a wide range of dropout selection windows.
Second order effects in the evolution of the interloper
population with the redshift of the Lyman break selection are
complex to model, and include intrinsic evolution of their luminosity
functions, change in the distance modulus and in the comoving volume
of the selection, with partial offsets among them (e.g., the decrease
in sensitivity because of an increase in the distance modulus is
offset by an increase of the comoving volume).
Similar findings are obtained when we analyze the other fields, even
though the results from the different fields highlight the presence
of sample (or “cosmic”) variance, naturally expected because of
galaxy clustering (see, e.g., <cit.>). In addition, fields
such as the XDF and HUDF09-2 have small areas, resulting in
significant Poisson uncertainty.
Finally, the difference in relative depths reached by the different surveys plays a
role, which we discuss further in Section <ref>.
To further investigate the properties of the interlopers, and test
whether these are indeed 4000 Å break sources, we resort to the
photometric redshift catalogs from the 3D-HST survey
<cit.>, which we matched to our sources based on
coordinates and band magnitudes. The expectation is that given
z_dropouts as the redshift of the Lyman-break selection, the
interlopers should be peaked at z_interlopers given by:
1+ z_interlopers=1216/4000× (z_dropouts+1).
So, for example, for z = 5 selection, the interlopers are predicted to be found at
z∼ 0.7-1.0 corresponding to the Balmer and 4000Å breaks;
for z=8 selection, the interlopers are expected at z∼ 1.6-1.9.
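For quick reference, Equation <ref> can be evaluated with a one-line helper (ours, purely illustrative):

    def z_interloper(z_dropout, break_rest=4000.0, lyman_alpha=1216.0):
        # Redshift at which a 4000 A break mimics the Lyman break of a
        # dropout at z_dropout (Equation above).
        return lyman_alpha / break_rest * (1.0 + z_dropout) - 1.0

    # z_interloper(5) ~ 0.82 and z_interloper(8) ~ 1.74, consistent with
    # the windows z~0.7-1.0 and z~1.6-1.9 quoted above.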
Taking into account uncertainties, this is broadly the case based on the photo-z analysis,
as shown in Figure <ref> for the GSd field. For this field, after the match with the 3D-HST survey, we recover ∼85%
of our sources. From this figure, and from
Equation <ref>, it is clear that as z_dropouts increases,
⟨ z_interlopers⟩ changes relatively little (Δ (z)<1).
The error on the median values narrows as we go from lower to higher redshift, but this is mainly due to the larger sample statistics provided by the interlopers of the higher-redshift selections with respect to those of the lower-redshift ones.
The fact that not all interlopers of a given selection fall exactly in the expected redshift window, and the lower-than-expected median redshift of the Y_105-interlopers, highlight the limitations of both our selection method and of photo-z techniques. Indeed, some real dropouts might be misclassified due to photometric scatter, and/or the photo-z estimates might not be reliable.
Similar trends are obtained also for the other
fields, even though uncertainties are very large.
Thus, extrapolating the
trend to even higher redshift samples of dropouts, such as those
accessible by JWST observations, one expects that the number of
interlopers in the color-color selection will remain relatively
constant, while dropout numbers will decrease very rapidly for
z_dropouts>10 based on theoretical modeling
(see, e.g., <cit.>).
We note that, according to the Madau-Lilly plot
(e.g. <cit.>), the star formation rate peaks at
intermediate redshift (z∼ 2). This means that the number density
of interlopers for z_interlopers≳1.85 (corresponding to
z_dropouts∼9.5 from Equation <ref>) may likely slightly
decrease with increasing redshift, although the decrease of the
interloper density will still be less steep than the one of the
dropouts because the latter have significantly higher redshift.
§.§ Surface densities of dropouts and interlopers
In the previous section we have investigated the incidence
of dropouts and interlopers at the different redshifts. We now aim at characterizing the distribution of
luminosity for these populations, to study how they compare.
Thus, we derive the surface density distributions of
dropouts and interlopers by counting the number of objects in each bin
of 0.5 magnitudes and dividing it by the area of the survey,
as shown in Figure <ref> for the GSd field. For each population,
the surface density is plotted as a function of z_850 for the z∼5 samples, Y_105 for the z∼6 samples, J_125 for the z∼7 samples, and H_160 for the z∼8 samples. These are the magnitudes in the band
that best matches the 1600 Å rest-frame at that redshift for the
dropouts, as done in <cit.>. We note that for m_AB≳ 27 (see the exact value for each magnitude as
dotted line in Fig. <ref>) all our samples suffer from
incompleteness, which is the cause of the apparent decline in the number
counts of faint objects.
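For reference, the binned surface densities used throughout this section amount to simple histograms; a minimal Python sketch (with the survey area as input, e.g. 64.5 arcmin^2 for GSd) could read:

    import numpy as np

    def surface_density(mags, area_arcmin2, dm=0.5, m_lo=20.0, m_hi=31.0):
        # Number of objects per arcmin^2 in magnitude bins of width dm.
        edges = np.arange(m_lo, m_hi + dm, dm)
        counts, _ = np.histogram(mags, bins=edges)
        centers = 0.5 * (edges[:-1] + edges[1:])
        return centers, counts / area_arcmin2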
Surface density distributions strongly depend on the redshift and on
the population considered. At the lowest redshift, the surface
density distribution of interlopers is relatively flat with magnitude
over the magnitude range 20-28, and there are about 0.1 interlopers per arcmin^2 per magnitude bin. In contrast, the distribution of
dropouts rises very steeply. As expected due to the well established
exponential cut off of the luminosity function at the bright end
<cit.>, there are
essentially no dropouts brighter than ∼24. Overall, dropouts
are much more numerous than interlopers. Similar conclusions are
reached in both the original and enlarged samples.
Moving to higher redshift, the shape of the distribution of dropouts
stays almost constant, just showing a modest steepening, but that of
interlopers considerably changes. At z∼6, interlopers and dropouts have similar distributions, with the
exception that interlopers extend toward brighter magnitudes. At z∼7 and
z∼8 interlopers are more numerous than dropouts at all
luminosities, and have a tail of objects at the bright-end as well.
Overall, the interloper distribution appears as steep as the
dropout one. This holds both for the original and the enlarged
samples.
Similar results are found also in the other fields.
It is reasonable to expect that interlopers are a subpopulation of BzK color-selected galaxies <cit.>.
This method is designed to find red, dusty or passively evolving older galaxies at z > 1.5.
We can therefore compare our derived surface densities of interlopers to those of BzK selected samples.
We use as reference the dataset presented by <cit.> for galaxies at 1.5<z<3
drawn from GOODS North and South fields and the GOODS NICMOS Survey.
That study analyses two of the same fields considered in our work and
includes HST imaging,
therefore reaching a deeper magnitude limit compared to the
many studies of BzK samples conducted from the ground (e.g. Cirasuolo et al. 2007, Hartley et al. 2008).
<cit.> quote the H_160-magnitude distribution of all galaxies at 1.5<z<3, without splitting them into redshift bins,
so a direct comparison to our results is not possible because our
interloper samples are more localized in redshift (see Fig.<ref>). Still, to have a
first-order approximation, we treat the <cit.>
sample as uniform in redshift, and thus we simply rescale the observed
number counts to take into consideration the difference in volume
with our selections.
Figure <ref> compares the <cit.> scaled distribution
to the H_160-band number counts for our interloper samples in the GSd field.
At each magnitude and in each redshift bin, the BzK population is up to a factor of 10 larger than that of interlopers.
This is consistent with the assumption that not all galaxies at z∼1.5-2 are interlopers of high redshift selections,
but only the subset with a particular combination of
colors. Interestingly, we observe that the interloper counts get
steeper at faint luminosities with increasing
redshift compared to the general BzK sample. This might suggest that
interlopers evolve differently compared to the general population, but
investigating this trend in more details is beyond the scope of this
work.
§.§ Distribution of optical χ^2 for interlopers
One of the aims of our analysis is to derive an estimate of the contamination
in dropout samples. Before proceeding, we need to characterize
the distribution of the optical χ^2 values for
interlopers as a function of their S/N in the detection bands. It is
evident that the robustness of the detection is a key quantity to
distinguish between interlopers and dropouts. In fact, while shallower observations in the redder bands give smaller S/N(JH_det) and simply exclude galaxies from both the dropout and interloper populations, shallower observations in the bluer bands produce lower χ^2_opt and may induce a mis-classification, moving sources from the interloper to the dropout population.
Figure <ref> plots S/N(JH_det) against χ^2_opt for both dropouts and interlopers from the z∼ 8 (Y_105-dropout) selection, for the GSd field. As expected, the interlopers have a
positive correlation between the two quantities, reflecting the finite
amplitude of the 4000 Å break. Similar results also hold for samples from
selections at lower z and drawn from the other fields.
§.§ Contamination in dropout samples
Now that we have characterized the properties of interlopers, we can use them to investigate the dropout sample contamination induced by interlopers that are mis-classified as dropouts in the absence of sufficiently deep data at bluer wavelengths. The main causes of
contamination are the impact of noise in the measurement of the optical
χ^2 and photometric scatter in the color-color selection.
To estimate the impact of noise in the measurements on the datasets we analyzed, we perform a
resampling Monte Carlo (MC) simulation on the entire photometric
catalogs and add zero-mean noise in the fluxes sampling from a
Gaussian distribution with width determined by the S/N ratio of the
simulated broadband fluxes. We then apply the dropout
selection criteria given in Sec.<ref>, and quantify the
number of interlopers and dropouts in the simulated sample.
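A minimal sketch of one realization of this experiment could look as follows, where apply_selection stands for a hypothetical routine implementing the criteria of Section 2:

    import numpy as np

    rng = np.random.default_rng(42)

    def perturbed_realization(fluxes, errors):
        # One MC realization: add zero-mean Gaussian noise with the
        # per-band flux uncertainty as width (arrays of shape N x n_bands).
        return fluxes + rng.normal(size=fluxes.shape) * errors

    # n_int, n_drop = [], []
    # for _ in range(500):
    #     sim_fluxes = perturbed_realization(fluxes, errors)
    #     ni, nd = apply_selection(sim_fluxes, errors)  # hypothetical; Sec. 2 cuts
    #     n_int.append(ni); n_drop.append(nd)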
We repeat the procedure 500 times to collect statistics and find that, on average, increasing the noise yields systematically larger fractions of interlopers at all redshifts than those obtained with the original catalogs (Tab.<ref>).
The average statistics are given in Tab.<ref> for each field separately.
This test emphasizes the need of precise photometry to robustly distinguish between dropouts and interlopers.
We note that if instead of using the entire catalogs as starting point of the MC experiment
we used only a combination of the dropout and interloper enlarged samples, we would get results in agreement
within the errors, indicating that actually only the sources close to the boundaries of our selection boxes
can contaminate the samples.
As the next step, we also consider the photometric scatter and
perform a more sophisticated
resampling MC simulation on the photometric
catalogs. Specifically, for each dropout selection, we uniformly
sample with repetition the luminosity in the detection band from the
catalog of enlarged interlopers, extracting a simulated catalog with
the same size as the original one. Next, we assign to each of these
objects the broadband colors of a random galaxy from the same
catalog (again using uniform sampling probability with
repetition), and we add zero-mean noise in the fluxes sampling from a
Gaussian distribution with width determined by the S/N ratio of the
simulated broadband fluxes. We use as our starting point a catalog that includes all the interlopers of the corresponding selection detected in our four fields (enlarged samples), in order to consider a population that is relatively homogeneous, but statistically significant. Note that for this second test it would not be appropriate to resort to the photometric catalogs of all sources, since the MC procedure effectively “re-shuffles” the colors of galaxies, so a relatively uniform starting sample is needed.
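Schematically, the resampling step could be implemented as below; this is a sketch under the assumption that the colors are stored as magnitude offsets relative to the detection band.

    import numpy as np

    rng = np.random.default_rng()

    def bootstrap_realization(det_flux, color_offsets, errors):
        # One realization of the second MC: resample detection-band
        # fluxes and, independently, the color sets of random galaxies
        # (both with repetition), then perturb the fluxes with noise.
        #   det_flux      : (N,) detection-band fluxes of the interlopers
        #   color_offsets : (N, n_bands) magnitudes relative to detection
        #   errors        : (N, n_bands) per-band flux uncertainties
        n = len(det_flux)
        f = det_flux[rng.integers(0, n, n)]        # resampled luminosities
        c = color_offsets[rng.integers(0, n, n)]   # colors of random galaxies
        fluxes = f[:, None] * 10.0 ** (-0.4 * c)   # apply colors in flux space
        return fluxes + rng.normal(size=fluxes.shape) * errors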
Finally, we perform the photometric
analysis of the catalog to quantify the number of interlopers in the enlarged sample that are
classified as dropouts. After repeating the procedure 500 times to
collect statistics, for example for the GSd field we find that on average:
* The z∼ 5 selection has 17±4 interlopers entering the
V_606-dropout sample as contaminants, for an estimated contamination rate
f_c∼ 17/648 ∼ 2.6%;
* The z∼ 6 selection has 7±2 interlopers entering the
i_775-dropout sample as contaminants, for an estimated contamination rate
f_c∼ 7/172 ∼ 4.0%;
* The z∼ 7 selection has 2±1 interlopers entering the
z_850-dropout sample as contaminants, for an estimated contamination rate
f_c∼ 2/33 ∼ 6.0%;
* The z∼ 8 selection has 1±1 interlopers entering the
Y_105-dropout sample as contaminants, for an estimated contamination rate
f_c∼ 1/17 ∼ 5.9%.
Overall, results from the different fields are in agreement, indicating that the contamination
is always only a few percent in all samples, and it increases with increasing
redshift. These results are also in broad agreement with other literature estimates, as it will be discussed in
Sec.<ref>.
These results clearly illustrate that while the number of
mis-classified interlopers remains relatively constant across
different samples, as the redshift increases, the relative weight
compared to the number of dropouts grows significantly. Interestingly,
these estimates are consistent with the predictions from the
contamination model based on source simulations from an extensive SED
library encompassing a wide range of star formation histories,
metallicities, and dust content and a combination of an old and a
young population <cit.>, and used for the BoRG survey
sample purity analysis (e.g., see <cit.>).
Applying the color cuts adopted in the current work, the model predicts a contamination of 0.7% at
z∼5, 1.6% at z ∼6, 3.5% at z∼7 and 7.3% at
z∼8.
§.§.§ Contamination at z∼8
We now focus on the highest redshift selection (z∼ 8), since it contains the largest number of interlopers, and investigate in more detail the level of contamination in the different surveys.
Using a similar approach to that presented in the previous section, we can use the multi-band photometric
catalog of all the interlopers identified in the different fields (enlarged sample),
combined with an extrapolation of
the surface number densities of interlopers, in order to estimate the
contamination in different surveys, assuming that the spectral energy
distributions of the interlopers are
representative of the general population. We simulate a series of surveys with
relative depths in the different bands similar to those used in our analysis, but different values of
5σ depth.
For the simulation, we compute the brightness distribution of all
the Y_105-interlopers in the enlarged sample (Fig.<ref>).
For a basic characterization of the luminosity distribution, we fit
the populations using a power law through a Markov Chain Monte Carlo
method. The best power-law fit to the sample is log(N/arcmin^2/Δ(mag)) = (0.35±0.1)× H_160 + (-9.0±0.4). As shown in Fig.<ref>, while the trends for the GSd and HUDF09-2 are compatible within the errors, the GNw field seems to have a systematically larger number of interlopers, while the XDF has a systematically lower number. This plot confirms that there
is significant cosmic variance across fields. Quite interestingly, GNw
not only has an excess of interlopers, but there is also an excess of
genuine high redshift candidates reported by many studies
<cit.> and across a range of
redshifts. Further medium/deep lines of sight beyond those available in the HST archive would be very useful for investigating this correlation in greater detail.
We then assume that all interlopers in the sky follow a power-law fit
to the number counts distribution, extrapolated in the magnitude range
H_160=22-31, and we sample from this distribution a catalog of object
luminosities. Next, we proceed to estimate the contamination in each
field separately. We assign to each simulated object the broadband
colors of a random galaxy from one of our interloper samples (GSd, GNw, XDF, HUDF09-2), and add to the signal in each band a Gaussian noise
drawn from a distribution with dispersion associated to the S/N that
would be achieved in the simulated survey (following the depths
reported in Tab.<ref>). Finally, we apply the dropout
selection criteria given in Sec.<ref>, and quantify the
number of simulated interlopers that satisfy the dropout
selection. This gives us our best estimate of the surface number
counts of contaminants in each simulated survey.
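One way to realize the sampling step is inverse-transform sampling of the extrapolated power law, with the survey depth setting the width of the Gaussian noise; the sketch below (ours) assumes fluxes in microJy, i.e. an AB zero point of 23.9, and the slope fitted above. The number of draws follows from integrating the fit over the magnitude range and multiplying by the survey area (e.g. 4390 objects for GSd).

    import numpy as np

    rng = np.random.default_rng()

    def sample_powerlaw_mags(n, slope=0.35, m_lo=22.0, m_hi=31.0):
        # Draw n magnitudes from dN/dm ~ 10**(slope*m) on [m_lo, m_hi]
        # via inverse-transform sampling of the cumulative distribution.
        a = slope * np.log(10.0)
        u = rng.random(n)
        return np.log(np.exp(a * m_lo) +
                      u * (np.exp(a * m_hi) - np.exp(a * m_lo))) / a

    def sigma_flux(m_5sigma):
        # 1-sigma flux uncertainty (microJy) implied by a 5-sigma AB
        # depth, using m_AB = 23.9 - 2.5 log10(f / microJy).
        return 10.0 ** (-0.4 * (m_5sigma - 23.9)) / 5.0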
For the GSd simulation, based on the extrapolated number density of
interlopers for 22≤ H_160 ≤31 and an assumed area of 64.5
arcmin^2, we extract 4390 simulated galaxies per realization, and
we repeated the simulation 500 times. On average, we find that a
realization has 4±2 of these simulated interlopers appearing as dropouts and 120±10 correctly classified as interlopers, while the remainder either turn out to have S/N(JH_det)<6 or colors outside the selection box, and thus enter neither the interloper nor the dropout sample. To illustrate the MC experiment, results are shown in Fig. <ref> for one of the 500 realizations. This implies that the probability of mis-classifying a specific (enlarged sample) interloper as a dropout is very small, p∼ 0.001.
Figure <ref> shows the magnitude distribution
for observed dropouts in the GSd, selected with the criteria
given in Sec.<ref>. Overplotted is also the average magnitude
distribution of the simulated contaminants (i.e. interlopers appearing
as dropouts after the MC experiment). It can be clearly seen that the
fraction of contamination increases at fainter magnitudes, consistent
with the explanation that photometric scatter is the main cause of
contamination.
Repeating the MC experiment for the other fields, we found that for
the GNw/XDF/HUDF09-2 simulation, on average, a realization has
1±1/1±1/1±1 of the simulated interlopers appearing as
dropouts, and 53±7/45±6/25±5 that are correctly classified
as interlopers. This implies that the probability of mis-classifying
a specific (enlarged sample) interloper as dropout is very small in
all the fields (p∼ 0.0002/0.003/0.003). The GNw field is the one
with the lowest contamination. As shown in Tab.<ref>, this
field has the deepest relative depth blueward of the dropout band
compared to the detection band, and clearly this allows more efficient
identification of interlopers, minimizing contamination of the dropout
sample. In fact, even though the other fields have deeper photometry,
their photometric limits in the blue bands are relatively shallower
compared to the limits of the detection (red) bands, inducing a higher
probability of misclassification of interlopers as dropouts (and
therefore higher contamination).
Overall, our analysis also suggests that given a survey, the higher the S/N
in the detection, the higher the likelihood that the dropout is a
LBG. Thus we are fully consistent with
the high sample purity inferred from spectroscopic follow-up studies
of LBG samples in ultradeep surveys
(e.g. <cit.>) since spectroscopic investigations are
limited to the brighter galaxies (e.g. m≲ 27.5).
To explore how contamination changes as a function of the limiting
depth of the survey, we repeat the MC experiments varying the H_160-band magnitude limit and scaling the limits in all the other bands, keeping the relative depths constant. The results are shown in Figure <ref>, which
summarizes the level of contamination per arcmin^2 versus limiting
magnitude. As expected, the contamination increases toward fainter
magnitudes, because there is a higher number of potential contaminants,
and strongly depends on the relative limiting depths of the
different bands. Overall, the level of contamination ranges
between ∼0 and ∼0.4 contaminants per arcmin^2 in the
magnitude range H_160=26.5-30.
As already mentioned, the relative depths of GNw do the best job in discriminating interlopers from dropouts, and therefore its contamination is the smallest.
Our conclusion on the presence of a significant level of contamination
near the detection limit of a survey because of significant
photometric scatter is indirectly supported by a cross-matching
analysis of the catalogs of z_850- and Y_105-dropouts in the XDF/GOODS-South
published by <cit.> and <cit.>, which shows
that less than 50% of the sources appear in both catalogs within
one magnitude of the survey detection limit, even though the derived
luminosity functions are similar (see <cit.>).
§.§ Properties of the Y_105-contaminants
To investigate the properties of the objects that can migrate
from the interloper to the dropout sample when their photometry is
rescaled to fainter fluxes (and therefore lower S/N), we report in Figure
<ref> some examples of interlopers in the
enlarged sample that after the MC dimming experiment appear as dropouts
at z∼8. As it is clear from the SEDs, these objects are bright in the band of detection, but
their 4000 Å is sufficiently deep that the faint flux at bluer
wavelengths is not detected after the typical dimming of ∼ 3-4
mag that our MC experiment assigns to simulated objects near the XDF
detection limit. The figure also highlights the key
assumption (and potential limitation) of our approach, that is the use
of SEDs observed in brighter galaxies for
modeling the colors of fainter sources.
To further characterize the interlopers and especially those that after the dimming appear as dropouts,
we derive their stellar population properties by fitting the observed
SEDs from the F435W to the F160W or to the Spitzer-IRAC 8μ m photometry,[We resort to the IRAC photometry
from CANDELS <cit.>, which we matched to our sources based on coordinates. IRAC photometry is available only for galaxies in GSd.] depending on availability, using FAST <cit.>.
We adopt <cit.> models assuming exponentially declining star formation histories (SFHs) of the form SFR ∝exp(-t/τ), where SFR is the star formation rate, t is the time since the onset of star formation, and τ sets the timescale of the decline in the SFR; we further assume solar metallicity, a <cit.> dust law, and a <cit.> Initial Mass Function (IMF). We allow log(τ/yr) to range between 7.0 and 10.0, log(t/yr) between 7.0 and 10.1, and A_V between 0 and 4 mag.
When possible, we also use photometric redshifts from the 3D-HST survey
<cit.>, to further constrain the fits.
Overall, across the different fields, 273 Y_105-interlopers appear as contaminants in at least one of the 500 MC realizations. We expect this sample to be representative of the entire contaminant population.
A summary of the typical properties of the interlopers and of those that might
contaminate the dropout samples at z∼8 is given in
Table <ref>. The distributions of some properties are also presented in Fig. <ref>. Interestingly,
both interlopers and contaminants have intermediate ages, low levels of ongoing star formation and only a moderate dust content. Both the median values and Kolmogorov-Smirnov tests support the similarity of the distributions. As expected, given that our contaminants are drawn from the enlarged sample, which by construction includes objects up to 0.2 mag bluer than interlopers, interlopers have a noticeably redder Y_105-J_125 color than contaminants.
These estimates are consistent with the typical values of dust content and ages obtained from the
contamination model based on source simulations from an extensive SED library used in Sec. <ref>.
This result suggests that it is
reasonable to expect that such properties can scale from the
intermediate mass objects used as templates to the lower mass and
fainter sources that would be contaminants in actual datasets.
§ CONTAMINATION ESTIMATES IN THE LITERATURE
In the literature, various studies have tried to estimate the contamination in dropout samples, with the intent of correcting the luminosity function estimates rather than characterizing the properties of the contaminants. Each of these studies has used a different definition of the dropout/interloper sample and evaluated the contamination in a different way, so a direct comparison among the different findings is not always possible and has to be made with care.
Here, we present a summary of some important literature results, and then we redo our analysis using the same selection criteria adopted by <cit.>, with the aim of directly comparing our findings with theirs.
<cit.> estimated the impact of scattering into the color selection windows owing to noise by repeatedly adding noise to the imaging data from the deepest fields, creating catalogs, and then attempting to reselect sources from these fields in exactly the same manner as in the real observations. Sources that were found with the same selection criteria as in the real searches in the degraded data, but that show detections blueward of the break in the original observations, were classified as contaminants.
They estimated the likely contamination by using brighter, higher-S/N sources in the XDF to model contamination in fainter sources.
They estimated a contamination rate of 2±1%, 3±1%, 6±2%, 10±3%, and 8±2% at z∼4, z∼5, z∼6, z∼7, and z∼8, respectively.
These results are in agreement to ours (see also Sec.<ref>)
and to those found in other recent selections of sources in the high-redshift universe <cit.>.
<cit.> found instead larger values of contamination. They estimated the contamination by artificially dimming lower redshift sources in their catalog, to see if the increased photometric scatter allows them to be selected as high-redshift candidates.
For sources with 25<<27, they estimated a contamination fraction of 4.5%, 8.1%, 11.4%, 11.1%, and 16.0% at z∼4, z∼5, z∼6, z∼7, and z∼8, respectively. For fainter sources with 26<<29, the contamination fraction increased to 9.1%, 11.6%, 6.2%, 14.7%, and <4.9% at z∼4, z∼5, z∼6, z∼7, and z∼8, respectively.
These fractions are in line with the estimates from the stacked probability distribution curves <cit.>.
<cit.>, by studying the space density of the potentially contaminating sources, found that dusty star-forming galaxies at z < 5 might contaminate z > 5 galaxy samples at a rate of <1%. Such fraction might increase when photometric scatter is applied to faint, red galaxies, making it easier for them to scatter into high-z samples <cit.>.
To minimize the probability of contamination by low-redshift interlopers, the BoRG strategy was to impose a conservative non-detection threshold of 1.5σ on the optical-band data <cit.>.
In order to estimate the residual contamination, from the <cit.> data reduction they first identified F098M dropouts with F125W<27, considering a version of the GOODS F606W image degraded to a 5σ limit F606W = 27.2 to match the relative F125W versus F606W BoRG depth. They then checked for contaminants by rejecting F098M dropouts with S/N > 2 in either B, V, or i (at their full depth). They estimated approximately 30% contamination, which is much higher than what we found, but in good agreement with the estimate based on the application of the color selection to libraries of SED models <cit.>.
Note that the key difference between BoRG and other surveys is that BoRG only has one blue band, making the identification of contaminants more difficult.
§.§ The Bouwens et al. 2015 cuts
§.§.§ Sample selection
We now repeat our analysis adopting the cuts proposed by <cit.>,
in order to test how a different sample selection may alter our conclusions.
For the sake of brevity, we report our analysis performed only on the CANDELS/GOODS South deep imaging
<cit.>. The parent catalog is the one presented in <ref>.
We apply the same cut in S/N(JH_det) and stellarity index described in <ref>.
We then apply the
following color selection criteria for samples of LBGs in the redshift
range z∼5-8, based on <cit.>.
For z∼5 candidates
V_606-i_775 >1.2
z_850-H_160 <1.3
V_606-i_775 >0.8(z_850-H_160)+1.2
For z∼6 candidates
i_775-z_850 > 1.0
Y_105-H_160 < 1.0
i_775-z_850 > 0.78(Y_105-H_160) + 1.0
For z∼7 candidates
z_850 - Y_105 > 0.7
J_125 - H_160 < 0.45
z_850 - Y_105 > 0.8(J_125 - H_160) + 0.7
For z∼8 candidates
Y_105 - J_125 >0.45
J_125 - H_160 <0.5
Y_105 - J_125 >0.75(J_125 - H_160)+0.525.
To distinguish between interlopers and dropouts, we use the following cuts in S/N.
According to <cit.>,
V_606-dropouts are selected as sources with S/N(B_435)<2, i_775-dropouts with S/N(B_435)<2 and either (V_606- z_850)> 2.7 or S/N(V_606)<2, and z_850- and Y_105-dropouts with S/N(x)<2 and χ^2_opt <3, where x runs over the B_435, V_606, and i_775 bands for z_850-dropouts and over the B_435, V_606, i_775, and I_814 bands for Y_105-dropouts (Eq.<ref>). In addition, z_850-dropouts are also required to have either (I_814-J_125)>1.0 or S/N(I_814)<1.5.
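In practice, for the z∼8 sample the main difference with respect to the selection of Section 2 is the looser Y_105-J_125 threshold; schematically (AB magnitudes):

    def in_box_z8_sec2(y105, j125, h160):
        # z~8 color box of Section 2 (uniform cuts of this work).
        return (y105 - j125 > 1.0 and j125 - h160 < 0.5
                and y105 - j125 > 0.75 * (j125 - h160) + 1.0)

    def in_box_z8_b15(y105, j125, h160):
        # z~8 color box of the Bouwens et al. (2015)-like selection.
        return (y105 - j125 > 0.45 and j125 - h160 < 0.5
                and y105 - j125 > 0.75 * (j125 - h160) + 0.525)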
As before, if a dropout satisfies more than one dropout selection, we
assign it to the highest redshift sample. This additional cut removes
∼ 30 sources from the z_850 (z∼7) selection, while in the other cases at
most a few sources are removed. In contrast, we do not apply this
restriction to interlopers, which thus may enter multiple selections.
However, only a very few galaxies enter more than one selection simultaneously, so the results are not driven by this subpopulation of duplicates.
§.§.§ Results
The color-color selection of dropouts and interlopers adopting the <cit.>
selection is shown in
Figure <ref> for samples of V_606-, i_775-, z_850-, and Y_105-dropout and interloper sources. As done in Section 3.1, we define an original and an enlarged selection, by simply
enlarging the color-color selection box by 0.2 mag, to check for both candidate high-z LBGs
and interlopers that slightly fail to meet the usual selection criteria.
Given the fact that the <cit.> criteria on the Lyman-break color are less strict than those presented in 3.1, many more galaxies enter both
the dropout and the interloper samples, at any redshift.
Table <ref> presents a summary of the incidence of each population.
Comparing the fractions to those presented for the same field in Table <ref>,
we find that the fraction of V_606-dropouts is the same (the changes to the selection are really minor), while at higher
redshifts the dropout fractions are considerably smaller. This indicates that stricter selection criteria do reduce the number
of interlopers, even though they simultaneously reduce the sample of dropouts.
Each selection should therefore strike a compromise between
purity and completeness.
Similarly to what we did in the previous section, we derive the surface
density distributions of dropouts and interlopers in our <cit.> like selection.
Results are qualitatively similar to those found for a constant color cut, with
the shape of the distribution of dropouts staying almost constant with increasing redshift, while that of the interlopers steepens considerably.
Inspecting a S/N(JH_det) vs. χ^2_opt diagram for the <cit.>
sample selection, it emerges that the enlarged
sample appears to have objects distributed along two different
sequences. To further explore this population, in Fig. <ref>
we focus on interlopers only and add the information on their near-IR
colors. It appears evident that most of the objects in the second
sequence are characterized by intermediate colors in Y_105 - J_125 and red
colors in J_125 - H_160 (0.5 < J_125 - H_160 < 0.7). This demonstrates the utility
of excluding candidates that are too red <cit.>. Similar results also hold for samples from
selections at lower z.
§.§.§ Contamination in dropout samples
Mimicking the analysis in the previous section, we investigate the level
of contamination in the <cit.> dropout sample induced by
interlopers that are mis-classified as dropouts in the absence of sufficiently
deep data at bluer wavelengths.
First we estimate the impact of noise in the measurement of the optical
χ^2 and photometric scatter in the color-color selection performing a
resampling MC simulation on the photometric
catalogs. As before, for each dropout selection, we uniformly
sample with repetition the luminosity in the detection band from the
catalog of enlarged interlopers, extracting a simulated catalog with
the same size as the original one. Next, we assign to each of these
objects the broadband colors of a random galaxy from the original
parent catalog (again using uniform sampling probability with
repetition), and we add zero-mean noise in the fluxes sampling from a
Gaussian distribution with width determined by the S/N ratio of the
simulated broadband fluxes. Finally, we perform the photometric
analysis of the catalog to quantify the number of interlopers in the enlarged sample that are
classified as dropouts. After repeating the procedure 500 times to
collect statistics, we find that on average:
* The z∼ 5 selection has 7±1 interlopers entering the
-dropout sample as contaminant, for an estimated contamination rate
f_c∼ 7/446 ∼ 1.5%;
* The z∼ 6 selection has 4±1 interlopers entering the
-dropout sample as contaminant, for an estimated contamination rate
f_c∼ 4/167 ∼ 2.5%;
* The z∼ 7 selection has 5±1 interlopers entering the
-dropout sample as contaminant, for an estimated contamination rate
f_c∼ 5/53 ∼ 9.4%;
* The z∼ 8 selection has 7±1 interlopers entering the
-dropout sample as contaminant, for an estimated contamination rate
f_c∼ 7/45 ∼ 15.3%.
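The resampling experiment described above is simple to reproduce in outline. The sketch below is a minimal version of the procedure, assuming a catalog stored as arrays of detection-band fluxes, broadband fluxes, and 1σ errors; the array layout and the `select_dropout` callback are placeholders, not the actual pipeline interface.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_contamination(interloper_fluxes, parent, select_dropout, n_mc=500):
    """Resampling MC described above: draw detection-band fluxes from the
    enlarged interloper sample, give each object the broadband colors of a
    random parent-catalog galaxy, add Gaussian photometric noise, and count
    how many end up classified as dropouts."""
    n = len(interloper_fluxes)
    counts = np.empty(n_mc)
    for trial in range(n_mc):
        f_det = rng.choice(interloper_fluxes, size=n, replace=True)
        donor = rng.integers(0, len(parent["flux"]), size=n)
        # Rescale the donor's broadband fluxes so that colors are preserved
        # while the detection-band flux matches the resampled value.
        scale = (f_det / parent["f_det"][donor])[:, None]
        flux = parent["flux"][donor] * scale
        err = parent["err"][donor] * scale
        noisy = flux + rng.normal(0.0, err)
        counts[trial] = np.count_nonzero(select_dropout(noisy))
    return counts.mean(), counts.std()
```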
These results clearly illustrate that while the number of
mis-classified interlopers remains relatively constant across
different samples, as the redshift increases, its relative weight
compared to the number of dropouts grows significantly. These estimates
are systematically larger than those presented in the previous section,
indicating that with the selection cuts proposed by <cit.> many more interlopers
might be incorrectly classified as dropouts. Nonetheless,
as for the sample presented in the previous section, these
estimates are consistent with the predictions from the
contamination model based on source simulations from an extensive SED
library <cit.>. The model predicts a contamination of 3.1% at
z∼5, 0.5% at z ∼6, 6.3% at z∼7 and 12.4% at
z∼8. It forecasts a higher contamination at z∼5
compared to z∼6 that our MC experiment does not capture. This is
likely due to the fact that, being based on the observed data, the MC
at z∼5 is able to identify contaminants only when the objects
show a detection in the single band (B) blueward of the break,
unlike the model based on a template library.
Therefore, it appears evident that the choice of the color cuts noticeably alters
the fraction of dropouts and interlopers and the estimates of contamination.
The <cit.> selection criteria ensure a larger number of dropouts at all
redshifts, but unavoidably also a larger number of interlopers.
As a consequence also the estimated contamination is considerably higher.
§ SUMMARY AND CONCLUSIONS
In this paper we investigated the contamination of photometrically
selected samples of high redshift galaxies. Our focus has been on the
widely adopted Lyman-break technique, using high quality multi-band
imaging from the Hubble CANDELS surveys (GOODS Deep South and
GOODS Wide North), the XDF and the HUDF09-2. In our analysis we distinguished between
dropouts, that is sources that formally satisfy all the selection
criteria of LBGs, and interlopers, that is sources
with similar colors redwards of the spectral break, but showing a
detection at bluer wavelengths. Because of finite photometric
precision, when no sufficiently deep data at bluer wavelengths
are available, a (small) fraction of interlopers can be mis-classified as
dropouts, and contaminate the selection. Hence we indicated these
objects as contaminants.
The class of interlopers/contaminants that we studied is that of
intermediate redshift galaxies with a prominent 4000 Å/Balmer break, which
are the most common among interlopers based on redshift estimates from
the 3DHST survey (see Figure <ref>).
Our key results are the following:
* Adopting a constant cut on the strength of the Lyman break across different redshifts,
the number counts of interlopers increase with z by at most a factor
of 2. In contrast, in selections where the cut on the strength of the Lyman break varies with
redshift <cit.>, the number counts of interlopers increase significantly from z∼5 to z∼8.
This suggests that cleaner samples of dropouts can
be achieved by requiring a clear spectral break, which reduces the
number of interlopers more significantly than the number of
dropouts.
* The surface density of interlopers in the sky remains
approximately constant over the range of dropout selection windows
considered in this study (that is dropouts from z∼ 5 to
z∼ 8), for a given depth of the survey and for a uniform cut in
the color containing the Lyman break. This is because the population
of interlopers resides at lower redshift and its average redshift
evolves more slowly (by a factor ∼ 0.3) compared to that of the
dropouts (see Equation <ref> and
Figure <ref>). Thus, since the number of dropouts evolves
rapidly with redshift, the ratio of interlopers to dropouts grows
significantly with increasing redshift.
* While the shape of the surface density distribution of dropouts
stays relatively constant with increasing redshift, that of interlopers possibly gets steeper.
Interlopers also tend to
have a tail at the bright end.
* Using a Monte Carlo resampling of the interloper population we
estimate a contamination of the dropout samples in all the fields,
ranging from ∼ 2% at z∼ 5 to ∼ 6% at z∼ 8 for
the GSd field, with a clear trend of increasing contamination for
higher redshift dropout samples. In the other fields, the
contamination is similar, but systematically lower. The lowest level
of contamination is found for GDw, indicating that having relatively
deeper blue bands compared to red bands is the most effective tool
to properly separate interlopers from dropouts.
* Extrapolating with a power-law the interloper number counts
distribution at the faint end to simulate ultradeep surveys, we
derive that the contamination increases toward fainter magnitudes,
and ranges from 0.1 to 0.4 contaminants per arcmin^2 at a magnitude of 30,
depending on the field considered. Generally we find that these
contaminants are located near the detection limit of the survey.
* By means of SED modeling, we characterized the stellar population
properties of the interlopers that may contaminate the dropout
sample, and found objects with intermediate ages (∼ 1 Gyr at
z∼ 1.5-2), very-low levels of ongoing star formation, and
relatively low dust content.
Our results and contamination estimates are limited by restrictions to
galaxy-like sources and to Gaussian noise. The former is not likely to
be an issue for space based observations with high angular resolution,
but it might affect ground-based surveys that do not have the ability
to discriminate between compact galaxies and stars. The assumption of
normally distributed errors is again likely to underestimate the
occurrence of rare, extreme events of photometric scatter, since data
are likely to have an excess of noise compared to a normal
distribution in their tails <cit.>. Thus, our
results are to be considered lower limits for the contamination of
dropout samples. Also, while we focused on Lyman-break selection, a
similar analysis would be expected to hold qualitatively if we had
considered photometric redshift estimates to construct the sample of
dropouts and interlopers, with the added complication of leaving more
degrees of freedom in defining the selection and the separation
between the two samples.
Overall our key conclusion is that the dropout selection of high
redshift sources leads currently to samples with high purity, but the
purity degrades when the number of dropouts becomes much smaller than
the number of interlopers. We demonstrated this clearly for the
-dropout sample from space observations over deep fields. A
qualitatively similar conclusion on an increase of the contamination
fraction is expected to hold for ground-based surveys over large areas
as well, targeting the bright-end (m∼ 24-26) of the galaxy
luminosity function at high-z, since the relative number of
dropouts versus interlopers is significantly suppressed. However, in
this case the objects are so bright that targeted follow-ups such as
spectroscopic observations, should be able to discriminate between
high-z sources and contaminants.
Finally, extrapolating our results to future surveys at z>10, we
highlight the need to consider carefully the contamination of the
dropout samples, since the number of objects expected at such early
times will be orders of magnitude smaller than the number of
interlopers with similar colors, and thus the contamination might
exceed 50%. Fortunately, in this respect, the capability of
JWST to observe efficiently at rest-frame
optical wavelengths for sources at z>10 will greatly help in
continuing to select samples of photometrically selected objects with
high purity, similar to the role played currently by Spitzer IRAC
imaging to validate samples of bright dropouts at z∼ 8-10
identified by HST <cit.>.
We thank the anonymous referee for their insightful remarks
that helped us to improve the paper. B.V. acknowledges the support from an Australian
Research Council Discovery Early Career Researcher Award
(PD0028506). This work was partially supported by grants ARC
FT130101593, and HST/GO 13767, 12905, and 12572.
Facilities: HST(ACS), HST(WFC3).
|
http://arxiv.org/abs/1701.09078v2 | 20170127044107 | A unified description of collective magnetic excitations | [
"Benjamin W. Zingsem",
"Michael Winklhofer",
"Ralf Meckenstock",
"Michael Farle"
] | cond-mat.other | [
"cond-mat.other",
"00A79, 74J05, 74H10, 78A40",
"J.2; I.6"
] |
Faculty of Physics and Center for Nanointegration (CENIDE),
University Duisburg-Essen, 47057 Duisburg, Germany
Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute,
Forschungszentrum Jülich GmbH, 52425 Jülich, Germany
Faculty of Physics and Center for Nanointegration (CENIDE),
University Duisburg-Essen, 47057 Duisburg, Germany
IBU/School of Mathematics and Science, University of Oldenburg,
Carl-von-Ossietzky-Strasse 9-11, D-26129 Oldenburg, Germany
Faculty of Physics and Center for Nanointegration (CENIDE),
University Duisburg-Essen, 47057 Duisburg, Germany
Faculty of Physics and Center for Nanointegration (CENIDE),
University Duisburg-Essen, 47057 Duisburg, Germany
Center for Functionalized Magnetic Materials, Immanuel Kant Baltic Federal University,
236041 Kaliningrad, Russian Federation
In this work, we define a set of analytic tools to describe the dynamic response of the magnetization to small
perturbations, which can be used on its own or in combination with micromagnetic
simulations and does not require saturation. We present a general analytic description of the ferromagnetic high
frequency susceptibility tensor to describe angular
as well as frequency dependent ferromagnetic resonance spectra and account for asymmetries
in the line shape. Furthermore, we expand this model to reciprocal
space and show how it describes the magnon dispersion. Finally
we suggest a trajectory dependent solving tool to describe the equilibrium
states of the magnetization.
A unified description of
collective magnetic excitations
Michael Farle
Received: date / Accepted: date
=========================================================
§ INTRODUCTION
Many solutions of the Ferromagnetic high frequency susceptibility
(Polder-) tensor <cit.>
have been formulated. Mostly these solutions represent simplified versions to
suit particular problems, such as a certain energy landscape, a certain
kind of coupling or a specific symmetry. In this work we formulate
a generalized linearization of the Landau-Lifshitz-Gilbert <cit.> equations
(LLG), which does not require symmetry assumptions and is applicable regardless
of the coupling as well as the types of damping present in the system. It allows one to start with a general formulation of the free energy density of ferromagnets, including all magnetic inderactions which might be present in the magnetic material, e.g. exchange, dipole-dipole, Dzyaloshinskii-Moriya interaction, anisotropies, etc. and can be expanded to antiferromagnetis and multilayer artificial antiferromagnets in the usual way <cit.>.
The conventional approaches mostly involve solving large systems of
equations, linearizing at different points in the calculation
in order to formulate the high frequency susceptibility <cit.>. This is avoided
here by applying a straightforward linearization through series
expansion of the LLG. Furthermore, this algorithm is formulated to
cover the entire magnon dispersion, including ferromagnetic resonance
modes as well as traveling waves with non-zero wave-vectors.
In the second part we present a model that can be used to calculate
the equilibrium orientations of the magnetization, using an algorithm
that closely resembles the actual measurement procedures used in ferromagnetic
resonance measurements. Following a gradient of the energy landscape imposed on the magnetization, this model can be used
to describe meta-stable and stable equilibrium states of the magnetization even for fields that are not applied along symmetry directions.
Neglecting thermal fluctuations, a combination of those models can
be used to make accurate predictions about the magnetodynamic properties
of ferromagnetic systems.
§ ANALYTIC MODEL
§.§ The ferromagnetic high frequency susceptibility tensor
In the derivation presented here we assume a system that is described
by one macro-spin <cit.> M⃗
which is subjected to one effective magnetic
field B⃗ yielding one high frequency susceptibility tensor
χ_hf. This model therefore
as derived here is designed to describe a fully saturated sample.
It is not limited to a single magnetization though and can be applied
to a set of macro-spins where the local effective magnetic field is known at each site. In that case the total high frequency susceptibility would be given as χ=n∑χ_n where χ_n is
the high frequency susceptibility of the n^th macro-spin
M⃗_n due to the field B⃗_n it is exposed to. This
can be used for non saturated systems and samples with inhomogeneous
magnetization or magnetic nanoparticle configurations.
In order to derive the full tensor we start from the Landau-Lifshitz-Gilbert
Equation <ref> using the Polder-Ansatz <cit.>
as shown in eq. <ref>
L⃗ ≡ -γ M⃗×B⃗ - (α/M) M⃗×Ṁ⃗ - Ṁ⃗ = 0
M⃗(t) = M⃗(M,θ_M,φ_M) + m⃗ exp(iω t) ,    B⃗(t) = B⃗(B,θ_B,φ_B) + b⃗ exp(iω t)
Considering the dynamic excitation and response quantities m⃗
and b⃗ to be sufficiently small, the ferromagnetic high-frequency
susceptibility χ_hf can be
expressed as a linear tensor
m⃗=χ_hf·b⃗
where linear means that χ_hf
does not depend on m⃗ and b⃗. This is usually the
case for microwave fields ‖b⃗‖ < 1 mT.
To obtain the magnetic flux that the magnetization is exposed to,
we consider the magnetic contribution to the free energy per unit volume
F(B⃗_appl,M⃗) where B⃗_appl
corresponds to the applied magnetic field and M⃗ is the magnetization
vector as discussed in the literature (See for example <cit.>).
The Helmholtz free energy density F usually contains an anisotropic
contribution due to the crystal lattice, particularly spin orbit interaction,
as well as several other contributions that arise from surfaces/interfaces,
the shape of the sample and the Zeeman-Energy. In this generalized
approach the nature of these magnetic energies almost does not matter. The only
necessary requirement is that the first and second derivatives used
in eq. <ref> exist. The total magnetic flux
is then given as
B⃗(t) = ∇_M⃗F(B⃗_appl,M⃗) + J_M⃗(∇_M⃗F(B⃗_appl,M⃗))·m⃗ exp(iω t) + b⃗ exp(iω t)
where ∇_M⃗F(B⃗_appl,M⃗)
is the anisotropy-field and J_M⃗(∇_M⃗F(B⃗_appl,M⃗))
the response function that accounts for a field caused by a precessing
m⃗, where ∇_M⃗ is the gradient in M⃗
and J_M⃗ the Jacobian matrix in
M⃗. Using this we can now go back to eq. <ref> and
obtain
L⃗ → L⃗(b⃗,m⃗) = -γ M⃗(t)×B⃗(t) - (α/M) M⃗(t)×Ṁ⃗(t) - Ṁ⃗(t) != 0   ∀ t
which defines the hyper-plane in which all dynamic motion of the
magnetization takes place. Since m⃗ and b⃗ are small,
as defined in <ref> we can now approximate L⃗(b⃗,m⃗)
by using a Taylor-expansion around L⃗(b⃗=0⃗,m⃗=0⃗)
to obtain
L⃗(b⃗,m⃗) ≈ L⃗(0⃗,0⃗) + J_b⃗,m⃗·(b_x,b_y,b_z,m_x,m_y,m_z)^⊤ ,   with L⃗(0⃗,0⃗) = 0⃗
This leads to the system of equations <ref>,
0⃗!=J_b⃗,m⃗·(b_x,b_y,b_z,m_x,m_y,m_z)^⊤
where J_b⃗,m⃗=J_(b_x,b_y,b_z,m_x,m_y,m_z)^T(L(b⃗,m⃗))
is the Jacobian matrix of L⃗ in b⃗ and m⃗.
Eq. <ref> can then be further decomposed into
0⃗= J_b⃗·b⃗+J_m⃗·m⃗
m⃗= -((J_m⃗)^-1·J_b⃗)·b⃗
where J_m⃗ and J_b⃗
are the Jacobian matrices in m⃗ and b⃗ respectively.
By comparison to eq. <ref> we find
χ_hf=-((J_m⃗)^-1·J_b⃗)
which we refer to as the complete analytic solution of the ferromagnetic
high-frequency susceptibility. Note that this approach is independent
of the form of the free energy functional. Since we obtain the full
tensor without assumptions regarding its entries, we have to project
it on the unit vectors u⃗_b and u⃗_m that represent
an excitation-measurement-pair of observables to obtain a representative spectrum.
In a typical numerical evaluation one would assume u⃗_b to
be parallel to the unit-vector in ϕ direction of the applied
field B⃗ and u⃗_m to be parallel to the unit-vector
in ϕ direction of the magnetization vector in spherical coordinates.
Nonparallel unit-vectors u⃗_b and u⃗_m can be
used to account for nonuniform microwave fields. The angle between u⃗_b
and u⃗_m represents an effective phase shift Δ between
the excitation and the response. This is illustrated in the inset
in fig. <ref>. Such a phase shift can be created
for instance by having the sample covered by a conductive layer
in which the microwave creates an eddy current that in turn creates
a phase shifted microwave signal that superimposes with the original
one as described in <cit.>. The approach presented here
was used in <cit.> to calculate asymmetric line
shapes.
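A minimal numerical sketch of the recipe leading to Eq. (<ref>) is given below for a single macro-spin. The free-energy model (a static field along z plus a uniaxial term along x), all constants, and the use of finite differences for the Jacobians are illustrative assumptions, not the systems studied in the figures.

```python
import numpy as np

gamma, alpha, Ms = 1.76e11, 0.01, 1.0       # illustrative constants
Bz, K = 0.1, 0.02                           # applied field and anisotropy scale
omega = gamma * Bz * 1.05                   # drive slightly off resonance

def B_eff(M):
    """Plays the role of grad_M F: a static field along z plus a
    uniaxial term along x (illustrative energy model)."""
    return np.array([2 * K * M[0], 0.0, Bz])

def L(b, m, M):
    """Residual of the LLG hyper-plane equation for the ansatz exp(i w t)."""
    h = 1e-7
    # Jacobian of B_eff acting on the dynamic magnetization m
    JB = np.column_stack([(B_eff(M + h * e) - B_eff(M - h * e)) / (2 * h)
                          for e in np.eye(3)])
    Bt = B_eff(M) + JB @ m + b
    mdot = 1j * omega * m
    return -gamma * np.cross(M + m, Bt) - alpha / Ms * np.cross(M + m, mdot) - mdot

M0 = np.array([0.0, 0.0, Ms])               # equilibrium: M parallel to B
h, z = 1e-7, np.zeros(3, dtype=complex)
Jb = np.column_stack([(L(h * e, z, M0) - L(-h * e, z, M0)) / (2 * h)
                      for e in np.eye(3)])
Jm = np.column_stack([(L(z, h * e, M0) - L(z, -h * e, M0)) / (2 * h)
                      for e in np.eye(3)])
chi = -np.linalg.solve(Jm, Jb)              # the full susceptibility tensor
u = np.array([0.0, 1.0, 0.0])               # an in-plane phi direction for M0 || z
chi_meas = u @ chi @ u                      # projection on a u_b, u_m pair
```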
§.§ Extension to reciprocal-(k⃗-)space and description of the magnon dispersion
The model presented above can be extended to reciprocal space in order
to obtain the magnon dispersion. Accordingly, the spatial contributions
to the energy landscape are included in the energy density
formulation. Also the Ansatz has to be changed such that the dynamic
magnetization has a spatial dependence. We imagine that the spatial
distribution of the magnetization can be described as a constant part
and a dynamic part where the dynamic part is a Fourier series. In
contrast to the description by Suhl, where this Ansatz appears <cit.>
we consider the amplitude for every k⃗ to be small, such that
we can perturb the system with a single k⃗ at a time, yielding
an Ansatz of the form
M⃗(t,x⃗) = M⃗(M,θ_M,φ_M) + m⃗_k⃗ exp(i(ω t - k⃗·x⃗))
where k⃗ is the reciprocal vector for which the susceptibility
is being calculated and x⃗ is the spatial coordinate at which
the wave is observed.
For example we can consider the exchange energy contribution in a continuum
model
F_ex = (d^2 B_ex/‖M⃗‖) (M⃗(t,x⃗)·ΔM⃗(t,x⃗))
and a k⃗ dependent dipolar coupling
to include dynamic aspects of dipolar interactions
F_Demag = (1/2) μ_0 ‖M⃗‖ (m⃗_k·k⃗)/(‖m⃗_k‖^2 ‖k⃗‖^2) k⃗·M⃗(t,x⃗)
where d is the distance between two neighboring spins, B_ex
is the exchange field they exert on each other, and Δ is
the Laplace operator in real space.
Adding this contribution to the Helmholtz energy density we can proceed
as before and calculate the susceptibility for every k⃗ in
the Brillouin zone, as exemplified in Fig. <ref>. The results agree with the literature (see for instance <cit.>).
For other spatial contributions, such as anisotropic exchange and chiral
coupling, the model can be applied in the same way.
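In the simplest limiting case, M⃗ ∥ B⃗ ∥ ẑ with the dipolar term neglected, the resonance condition reduces to the familiar parabolic exchange dispersion; the following sketch shows only this limiting case, with purely illustrative constants.

```python
import numpy as np

gamma, Bz = 1.76e11, 0.1        # gyromagnetic ratio and static field (illustrative)
d, B_ex = 0.3e-9, 30.0          # spin spacing and exchange field (illustrative)

def omega_of_k(k):
    """Small-k magnon dispersion for M || B || z: the Laplacian acting on
    exp(-i k.x) brings a factor -k^2, so the exchange term above adds
    B_ex d^2 k^2 to the static field."""
    return gamma * (Bz + B_ex * d**2 * k**2)

k = np.linspace(0.0, 2e9, 400)  # wave numbers up to a zone-boundary scale
w = omega_of_k(k)
```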
§.§ The equilibrium position of the magnetization
In order to use the result in eq. <ref> to obtain
the susceptibility it is necessary for M⃗(θ_M,ϕ_M)
to locally minimize the free energy density. The orientation of the
magnetization vector has to be determined from the shape of the free
energy landscape including an applied magnetic field. In the following
we present our recursive method to efficiently find these minima.
In terms of infinitesimals this method can be viewed
as a trajectory-dependent analytic solution. Due to its infinite recursion
along a chosen trajectory, however, it resembles a second-order Newton
algorithm, which is a numerical tool, and we therefore tend to call
it a semi-analytic trajectory-dependent solution of the equilibrium
states of the magnetization.
For certain paths in the applied field space, where the trajectory passes sufficiently far beyond a hard direction, the equilibrium angles
are discontinuous if the Zeeman energy does not overcome the anisotropic
contributions of this hard direction. This can lead to a hysteretic
behavior of the magnetization depending on the trajectory of B⃗(τ):=B⃗(B(τ),θ_B(τ),φ_B(τ)).
To account for this behavior a solution representing the equilibrium
angles must depend on the trajectory B⃗(τ) and
not only on a momentary configuration of B⃗. Without loss
of generality we will only consider the equilibrium angles {θ_M,φ_M}
of the magnetization in spherical coordinates to minimize the free
energy, since in many applications the norm of the magnetization may
be considered constant. Once a minimizer Ω⃗(B⃗(0))={θ_M,φ_M} _0
of the free energy F(B⃗,M⃗) is known for a
certain starting configuration B⃗(0), a small change
in B⃗→B⃗(0+δ) that yields a
small change in the position of the minimum of F(B⃗,M⃗)
can be accounted for by calculating a series expansion of F(B⃗(0+δ),M⃗)
at the position {θ_M,φ_M} _0 to
the second order.
The position of the minimum of this parabola will be close to the
minimum {θ_M,φ_M} _0+δ of F(B⃗(0+δ),M⃗).
In fact as δ decreases the solution obtained this way will
get closer to the exact minimum. This procedure is illustrated in
fig. <ref>, where the free energy was defined to be F=sin^2(2ϕ_M)-5cos(ϕ_B-ϕ_M).
Since the function obtained from the series expansion is of quadratic
order it can always be written in a form such that the vertex can
be directly extracted from the function. Therefore a recursive function
of the form <ref>
Ω⃗(B⃗(0-δ)) = Ω⃗(B⃗(0)) - H_F^-1|_Ω⃗(B⃗(0-δ)) · ∇⃗F|_Ω⃗(B⃗(0-δ))
Ω⃗(B⃗(0-2δ)) = Ω⃗(B⃗(0-δ)) - H_F^-1|_Ω⃗(B⃗(0-2δ)) · ∇⃗F|_Ω⃗(B⃗(0-2δ))
...
...
can be derived to describe the position of a minimum for certain
trajectories B⃗(τ), where H_F
is the Hessian matrix of the free energy density that describes the
curvature and ∇⃗F the gradient that describes the slope
of the free energy. Conceptually this can be considered a second-order
Newton algorithm, with the exception that it starts from a known position,
making the number of iterations required tend towards 1 as δ
gets small. To determine a minimizer that can be used as a starting
point in eq. <ref> the easiest approach in a numerical
calculation is to start at a field value sufficiently higher than
the field at which the Zeeman energy fully overcomes the anisotropy
energy – in the sense that there is only one minimum and
one maximum left in the energy landscape – and to assume
that the magnetization is parallel to the applied field in this configuration.
This approach was implemented and found to be very accurate in <cit.>
for fitting FMR spectra recorded at different microwave frequencies.
Figure <ref> shows some calculated spectra
using the solution presented above, with the corresponding equilibrium
angles calculated with this trajectory dependent algorithm. The overall
calculation time was about five minutes for 540180 data points.
Equation <ref> in combination with eq. <ref>
describe a very fast algorithm to calculate the complete susceptibility
for any given free energy density and any measurement trajectory.
This algorithm, however, will not always align the magnetization with
the absolute minimum of the free energy; in fact, it will fall into
meta-stable states if for instance a fourfold crystalline anisotropy
is considered and the applied field is swept along the field angle
rather than the field amplitude, predicting the occurrence of ferromagnetic
resonance in meta-stable states.
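A minimal sketch of this recursion for the in-plane example energy F = sin^2(2φ_M) - B cos(φ_B - φ_M) quoted above is given below; the sweep range and the fixed number of Newton steps per field value are illustrative choices.

```python
import numpy as np

def dF(phi, B, phi_B):    # dF/dphi_M for F = sin^2(2 phi) - B cos(phi_B - phi)
    return 2.0 * np.sin(4.0 * phi) - B * np.sin(phi_B - phi)

def d2F(phi, B, phi_B):   # second derivative (the 1x1 "Hessian")
    return 8.0 * np.cos(4.0 * phi) + B * np.cos(phi_B - phi)

def track_minimum(B_values, phi_B, phi0, steps=2):
    """Follow a (meta)stable minimum along the field trajectory B_values,
    starting from a known high-field minimizer phi0 (where M is nearly
    parallel to B).  One or two Newton steps per field value suffice for
    small increments; d2F must stay positive, otherwise the minimum has
    disappeared and the magnetization jumps."""
    phi, path = phi0, []
    for B in B_values:
        for _ in range(steps):
            phi -= dF(phi, B, phi_B) / d2F(phi, B, phi_B)
        path.append(phi)
    return np.array(path)

# Sweep the field amplitude down from saturation at a fixed field angle.
B_down = np.linspace(10.0, 0.0, 2001)
phi_eq = track_minimum(B_down, phi_B=np.deg2rad(30.0), phi0=np.deg2rad(30.0))
```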
§ SUMMARY
We have devised a versatile analytic model, capable of accurately
describing FMR experiments as well as modeling the full magnon dispersion.
The model is simple in that it requires only derivatives. Condensed
into a single operator χ_hf,
it is compact and thus easy to use in analytic and numeric applications.
The formulation through an energy density allows for easy modification
of the model to accommodate different types of interactions, such as dipole-dipole interaction,
spin-spin interactions like the Dzyaloshinskii-Moriya interaction,
and spin-orbit interactions. It can also be applied directly to spatially
dependent spin configurations obtained from micromagnetic simulations
to retrieve information about the magnetodynamic properties of spin
textures. The model is not restricted to evaluating the magnon dispersion
as a function ω(k) but instead yields the magnonic
response amplitude χ(ω,k) as a Green's function.
In addition to this, the algorithm described in sec. <ref>
makes it possible to apply the model on orientations of the magnetization
which are non collinear with the symmetry directions of the system
or the applied magnetic field. This can be used to calculate angular
dependent spectra, as well as identify meta-stable states and describe
their magnetodynamic behavior.
|
http://arxiv.org/abs/1701.07694v2 | 20170126133508 | Optimization of Wireless Power Transfer Systems Enhanced by Passive Elements and Metasurfaces | [
"Hans-Dieter Lang",
"Costas D. Sarris"
] | math.OC | [
"math.OC",
"cs.SY"
] |
[pages=1-last]paper_passive_TAP.pdf
|
http://arxiv.org/abs/1701.08144v2 | 20170127183929 | Nonfillable Legendrian knots in the 3-sphere | [
"Tolga Etgü"
] | math.SG | [
"math.SG",
"math.GT"
] |
NONFILLABLE LEGENDRIAN KNOTS IN THE 3-SPHERE
Tolga Etgü
December 30, 2023
=============================================
Let Λ be a Legendrian knot in the standard contact 3-sphere.
If Λ bounds an orientable exact Lagrangian surface Σ in the standard symplectic 4-ball, then the genus of Σ is equal to the slice genus of (the smooth knot underlying) Λ, the rotation number of Λ is zero as well as the sum of the Thurston-Bennequin number of Λ and the Euler characteristic of Σ, and moreover the linearized contact homology of Λ with respect to the augmentation induced by Σ is isomorphic to the (singular) homology of Σ.
It was asked in <cit.> whether the converse of this statement holds.
We give a negative answer to this question providing a family of Legendrian knots with augmentations which are not induced by exact Lagrangian fillings although the associated linearized contact homology is isomorphic to the homology of the smooth surface of minimal genus in the 4-ball bounding the knot.
§ INTRODUCTION
Let Λ be a Legendrian knot in the standard contact 3-sphere.
If Λ bounds an exact orientable Lagrangian surface Σ in the standard symplectic 4-ball, then the genus of Σ is equal to the slice genus of (the smooth knot underlying) Λ, the rotation number of Λ is zero as well as the sum of the Thurston-Bennequin number of Λ and the Euler characteristic of Σ by a theorem of Chantraine <cit.>. Moreover the linearized contact homology of Λ with respect to the augmentation induced by Σ is isomorphic to the (singular) homology of Σ by a theorem of Seidel <cit.>.
Question (8.9) in <cit.> asks whether every augmentation for which Seidel's isomorphism holds is induced by a Lagrangian filling.
We give a negative answer to this question based on
the family of Legendrian knots
{Λ_p,q,r,s : p,q,r,s ≥ 2, p ≡ q ≡ r+1 ≡ s+1 (mod 2) }
given by the Lagrangian projection in Fig. (<ref>).
Throughout the paper we will always be working under the above assumption that the parities of p and q match and they are the opposite of those of r and s.
The rotation and Thurston-Bennequin numbers of Λ_p,q,r,s are 0 and 5, respectively.
This gives a lower bound of 3 on the slice genus.
On the other hand, the Seifert surface we obtain from this projection has genus 3 hence both the slice genus and the Seifert genus are 3.
The Chekanov-Eliashberg dg-algebra of Λ_p,p,p+1,p+1 admits an augmentation which is not induced by any exact orientable Lagrangian filling, although the corresponding linearized contact homology is isomorphic to the homology of a surface of genus 3 with one boundary component.
Our examples are inspired by a deformation argument in <cit.> related to the Chekanov-Eliashberg algebra where the Legendrian link of unknots linked according to the D_4 tree is shown to be significant, specifically over a base field of characteristic 2 (see the last part of the proof of Thm.(14) in <cit.> and Rem. (<ref>) below).
As can be seen in Fig. (<ref>), Λ_p,q,r,s is constructed from the D_4 link by adding twists that turn it into a Legendrian knot.
A similar construction was previously used in <cit.> on a different link to produce infinite families of Legendrian knots not isotopic to their Legendrian mirrors.
In the next section we describe the Chekanov-Eliashberg algebra and compute the linearized contact homology of our examples.
In Sec. (<ref>) we gather enough information on the strictly unital A_∞-algebras obtained by dualizing the Chekanov-Eliashberg algebras of our Legendrian knots to prove that they are not quasi-isomorphic to the A_∞-algebra of cochains on a closed surface whenever the base field has characteristic 2. This proves the nonfillability of our examples by a duality result of Ekholm and Lekili from their recent preprint <cit.> (see Thm. (<ref>) below).
Acknowledgments. We would like to thank Yankı Lekili and Joshua Sabloff for discussions on the subject and comments on a draft of this paper. It is a pleasure to thank Princeton University for the hospitality. This research is partially supported by the Technological and Research Council of Turkey through a BIDEB-2219 fellowship.
§ LINEARIZED CONTACT HOMOLOGY OF Λ_P,Q,R,S
The Chekanov-Eliashberg algebra of a Legendrian knot is a differential graded algebra generated by Reeb chords from the knot to itself.
We refer to <cit.> for the combinatorial description of the Chekanov-Eliashberg algebra of a Legendrian knot in the standard contact 3-sphere.
We denote the Chekanov-Eliashberg algebra of Λ = Λ_p,q,r,s over a field 𝕜 by (L, ∂).
Since the rotation number of Λ is 0, we have a ℤ-grading on (L, ∂).
The generators of (L, )̣ are as indicated in Fig. (<ref>):
a^x_0, … , a_p^x , a^y_0, … , a_q^y, a^z_0, … , a_r^z, a^w_0, … , a_s^w
x_0 , … , x_p, y_0 , … , y_q, z_0 , … , z_r, w_0 , … , w_s
a_0 , b_1, … , b_6
with gradings
|*_i|=0, |a_0| =|a^*_i|=1 , * ∈{ x,y,z,w }
|b_1|=-|b_4| = p-r+1, |b_2|=-|b_5| = q-r+1, |b_3|=-|b_6| = r-s
Let ε : L → 𝕜 be the augmentation which maps all *_i to -1 for * ∈{ x,y,z,w }.
There is an isomorphism
HC_*^ε (Λ_p,p,p+1,p+1) ≅ H_1-* (Σ ; 𝕜)
between the linearized contact homology of Λ_p,p,p+1,p+1 with respect to the augmentation ε and the homology of the orientable surface Σ of genus 3 with one boundary component.
Counting the relevant immersed polygons with the choice of a base-point on Λ_p,q,r,s as indicated by ∙ in Fig. (<ref>) we see that nontrivial differentials are given by
∂ a_0 = 1 - w_sz_ry_qx_p
∂ a^x_0 = 1 + x_0 + b_1b_4
∂ a^y_0 = 1 + y_0 + b_2b_5
∂ a^z_0 = 1 + z_0 + b_4b_1 + b_5b_2 + z_0 b_6b_3 + b_4b_1 b_5b_2
∂ a^w_0 = 1 + w_0 + b_3b_6
∂ a_i^* = 1 - *_i-1*_i
for * ∈{ x,y,z,w } and i ≥ 1.
Conjugating ∂ by the automorphism id + ε gives another differential ∂^ε on L:
∂^ε a_0 = w_s + z_r + y_q + x_p - w_sz_r - w_sy_q - w_sx_p - z_ry_q - z_rx_p - y_qx_p
+ w_sz_ry_q + w_sz_rx_p + w_sy_qx_p + z_ry_qx_p - w_sz_ry_qx_p
∂^ε a^x_0 = x_0 + b_1b_4 , ∂^ε a^y_0 = y_0 + b_2b_5 , ∂^ε a^w_0 = w_0 + b_3b_6
∂^ε a^z_0 = z_0 + b_4b_1 + b_5b_2 - b_6b_3 + z_0 b_6b_3 + b_4b_1 b_5b_2
∂^ε a_i^* = *_i-1 + *_i - *_i-1*_i , for * ∈{ x,y,z,w } and i ≥ 1
Applying the elementary transformation
a_0 ↦ a_0 - (-1)^p ( ∑_i=0^p (-1)^i a_i^x + ∑_i=0^q (-1)^i a_i^y - ∑_i=0^r (-1)^i a_i^z - ∑_i=0^s (-1)^i a_i^w )
simplifies the computation of the linearized contact homology HC_*^ε(Λ) of Λ associated to the augmentation ε and, more importantly, the description of the A_∞-algebras that will be discussed in the next section.
At this point, we have the following presentation of the differential ∂_1^ε on the linearized complex which computes HC_*^ε(Λ_p,q,r,s):
∂_1^ε a_0 = ∂_1^ε b_j = 0
∂_1^ε a^x_0 = x_0 , ∂_1^ε a^y_0 = y_0 , ∂_1^ε a^z_0 = z_0 , ∂_1^ε a^w_0 = w_0
∂_1^ε a_i^* = *_i-1 + *_i , for * ∈{ x,y,z,w } and i ≥ 1.
It is clear that HC_*^ε(Λ_p,q,r,s) is spanned by a_0 , b_1, … , b_6.
Moreover, if p=q=r-1=s-1, then |b_i|=0 for all i and we get the graded isomorphism in the statement.
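The rank count behind this proof can be verified mechanically. The following sketch is an illustration only: it assembles the matrix of ∂_1^ε over GF(2) (chosen for convenience; the dimension count is the same over any field) and confirms that the homology has total dimension 7.

```python
import numpy as np

def linearized_homology_dim(p, q, r, s):
    """Total dimension of the homology of the linearized complex above:
    d a^*_0 = *_0 and d a^*_i = *_{i-1} + *_i for i >= 1, while a_0 and
    b_1..b_6 are cycles that are never boundaries."""
    lengths = [p, q, r, s]
    n0 = sum(l + 1 for l in lengths)       # the generators *_i
    n1 = 7 + n0                            # a_0, b_1..b_6 and the a^*_i
    d = np.zeros((n0, n1), dtype=np.uint8)
    row, col = 0, 7                        # first 7 columns: a_0, b_1..b_6
    for l in lengths:
        for i in range(l + 1):
            d[row + i, col + i] = 1        # the *_i term
            if i > 0:
                d[row + i - 1, col + i] = 1  # the *_{i-1} term
        row += l + 1
        col += l + 1
    # Gaussian elimination over GF(2)
    m, rank = d.copy(), 0
    for j in range(m.shape[1]):
        pivot = next((i for i in range(rank, m.shape[0]) if m[i, j]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]
        for i in range(m.shape[0]):
            if i != rank and m[i, j]:
                m[i] ^= m[rank]
        rank += 1
    return (n1 - rank) + (n0 - rank)       # dim ker d + dim coker d

assert linearized_homology_dim(2, 2, 3, 3) == 7   # = dim H_*(Sigma_3 with boundary)
```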
§ THE AUGMENTATION IS NOT INDUCED BY A LAGRANGIAN FILLING
In this section, we prove that Λ_p,q,r,s has no exact Lagrangian filling associated to the augmentation ε by using a result from a recent preprint of Ekholm and Lekili.
The following is a consequence of <cit.>.
If Λ has an exact Lagrangian filling Σ, then there is an A_∞ quasi-isomorphism between RHom_CE^*(𝕜, 𝕜) and the A_∞-algebra C^*(S;𝕜) of (singular) cochains on the closed surface S obtained by capping the boundary of Σ, where CE^* = L_-* and 𝕜 is equipped with a CE-module structure by the augmentation ε_Σ : CE → 𝕜 induced by the filling Σ.
In order to describe the A_∞-algebra RHom_CE^*(𝕜, 𝕜) in the above statement, we utilize the isomorphism between RHom_CE^*(𝕜, 𝕜) and the linear dual 𝒜 of the Legendrian A_∞-coalgebra defined in the more general setting of <cit.>.
In the current setup, the strictly unital A_∞-algebra 𝒜 can be obtained from the non-unital A_∞-algebra on the linearized cochain complex defined in <cit.> (which is also the endomorphism algebra of ε in the Aug_- category of <cit.>) by adding a copy of 𝕜 to make it unital (cf. <cit.>).
The description of 𝒜 ≅ RHom_CE^*(𝕜, 𝕜) we provide is based on the presentation of (L, ∂^ε) obtained at the end of the proof of Prop. (<ref>).
We abuse the notation and denote the duals of the generators of L by the generators themselves.
The nontrivial A_∞-products on 𝒜 (besides those dictated by strict unitality) are
μ_𝒜^1 (*_i) = a_i^* + a_i+1^* , for * ∈{ x,y,z,w } and i ≥ 0
μ_𝒜^1 (x_p) = a_p^x , μ_𝒜^1 (y_q) = a_q^y , μ_𝒜^1 (z_r) = a_r^z , μ_𝒜^1 (w_s) = a_s^w
μ_𝒜^2 (b_1,b_4) = (-1)^p+1 a_0 + a^x_0 , μ_𝒜^2 (b_2,b_5) = (-1)^p+1 a_0 + a^y_0 , μ_𝒜^2 (b_3,b_6) = (-1)^p a_0 + a^w_0
μ_𝒜^2 (b_4,b_1) = μ_𝒜^2 (b_5,b_2) = (-1)^p a_0 + a^z_0 , μ_𝒜^2 (b_6,b_3) = (-1)^p+1 a_0 - a^z_0
μ_𝒜^2 (x_i-1, x_i) = (-1)^p+i a_0 - a_i^x , μ_𝒜^2 (y_i-1, y_i) = (-1)^p+i a_0 - a_i^y
μ_𝒜^2 (z_i-1, z_i) = (-1)^p+i+1 a_0 - a_i^z , μ_𝒜^2 (w_i-1, w_i) = (-1)^p+i+1 a_0 - a_i^w
μ_𝒜^2 (w_s,z_r) = μ_𝒜^2 (w_s,y_q) = μ_𝒜^2 (w_s,x_p) = μ_𝒜^2 (z_r,y_q) = μ_𝒜^2 (z_r,x_p) = μ_𝒜^2 (y_q,x_p) = - a_0
μ_𝒜^3 (z_0 , b_6,b_3) = a_0 + a^z_0
μ_𝒜^3 (w_s,z_r,y_q) = μ_𝒜^3 (w_s,z_r,x_p) = μ_𝒜^3 (w_s,y_q,x_p) = μ_𝒜^3 (z_r,y_q,x_p) = a_0
μ_𝒜^4 (b_4,b_1,b_5,b_2) = a_0 + a^z_0 , μ_𝒜^4 (w_s,z_r,y_q,x_p) = - a_0
and the gradings in 𝒜 are
|a_0| = |a^*_i| = 2 , |*_i| = 1 , for * ∈{ x,y,z,w } and i ≥ 0
|b_1| = -|b_4| = p-r+2 , |b_2| = -|b_5| = q-r+2 , |b_3| = -|b_6| = r-s+1 .
A homological perturbation argument, as suggested by Prop. (1.12) and Rem. (1.13) in <cit.>, provides a minimal model (ℬ, μ^∙_ℬ) quasi-isomorphic to (𝒜, μ^∙_𝒜).
From the above description of the A_∞-products we see the decomposition 𝒜 = ℬ ⊕ 𝒞, where ℬ is generated by { 1, a_0 , b_i : i=1,… , 6} and gives a minimal model for 𝒜, whereas 𝒞 is the subalgebra generated by the rest of the generators of 𝒜 and acyclic with respect to the differential μ^1_𝒜.
We choose a contracting homotopy T^1 : 𝒜 → 𝒜 with μ^1_𝒜 T^1 + T^1 μ^1_𝒜 = F^1 G^1 - id as follows:
T^1 (a_i^x) = -x_i + x_i+1 + ⋯ + (-1)^p-i+1 x_p , T^1 (a_i^y) = -y_i + y_i+1 + ⋯ + (-1)^q-i+1 y_q
T^1 (a_i^z) = -z_i + z_i+1 + ⋯ + (-1)^r-i+1 z_r , T^1 (a_i^w) = -w_i + w_i+1 + ⋯ + (-1)^s-i+1 w_s .
This homotopy T^1 and Eqn. (1.18) in <cit.> suffice to compute the A_∞-products μ^∙_ℬ on ℬ.
To begin with, the only nontrivial μ^2_ℬ products (besides those dictated by strict unitality) are
μ_ℬ^2(b_1,b_4) = μ_ℬ^2(b_2,b_5) = μ_ℬ^2(b_6,b_3) = (-1)^p+1 a_0 , μ_ℬ^2(b_4,b_1) = μ_ℬ^2(b_5,b_2) = μ_ℬ^2(b_3,b_6) = (-1)^p a_0 .
Moreover, μ^3_ℬ vanishes since
* μ^3_𝒜 is trivial on ℬ ⊗ ℬ ⊗ ℬ, and
* μ^2_𝒜 vanishes on im(F^2) ⊗ ℬ ⊂ im(T^1) ⊗ ℬ ⊂ 𝒞 ⊗ ℬ and on ℬ ⊗ im(F^2) ⊂ ℬ ⊗ 𝒞,
where F^2 = T^1 ∘ μ^2_𝒜.
Proceeding further, Eqn. (1.18) in <cit.> for d=4 simplifies as
μ^4_ℬ (α_4, … , α_1) = G^1 ( μ^4_𝒜 (α_4, … , α_1) + μ^3_𝒜(F^2 (α_4, α_3) , α_2 , α_1) + μ^2_𝒜(F^2 (α_4, α_3) , F^2 (α_2 , α_1)) ) ,
where G^1 : 𝒜 → ℬ is the projection.
At this point, one immediately sees that all three summands on the right hand side of the above formula for μ^4_ℬ vanish on quadruples which are not of the form (b_i, b_i±3, b_j, b_j±3).
In order to prove the key proposition below, it suffices to compute μ^4_ℬ(b_i,b_j,b_k,b_l), where (b_i,b_j,b_k,b_l) is a cyclic permutation of (b_1,b_2,b_5,b_4), (b_1,b_5,b_2,b_4), or (b_2,b_5,b_2,b_5).
To this end, it is straightforward, if tedious, to check that, depending on the parity of p, the following are the only nontrivial ones among these ten μ^4 products:
μ^4_ℬ(b_4,b_1,b_2,b_5) = a_0 , μ^4_ℬ(b_5,b_2,b_4,b_1) = μ^4_ℬ(b_5,b_2,b_5,b_2) = -a_0 if p is even, and
μ^4_ℬ(b_4,b_1,b_2,b_5) = μ^4_ℬ(b_4,b_1,b_5,b_2) = a_0 , μ^4_ℬ(b_2,b_5,b_2,b_5) = -a_0 if p is odd.
Besides the computations above, another important ingredient for the proof of the proposition below is the following formality statement.
Over a field of characteristic 2, the algebra of cochains C^*(S;) on a closed orientable surface S is a formal differential graded algebra.
Over the base field=, there is the classical (and much more general) formality result in <cit.>.
Since we were not able to locate an extension of this result to nonzero characteristic cases in the literature, we provide a proof of Lem. (<ref>) at the end of this section.
In fact, the characteristic condition in the statement of Lem. (<ref>) can be removed by a straightforward modification of the proof.
If the characteristic of the base field 𝕜 is 2, then the A_∞-algebra ℬ is not A_∞ quasi-isomorphic to the algebra of cochains on a closed orientable surface S.
By Lem. (<ref>), it suffices to prove that there is no A_∞ quasi-isomorphism between ℬ and H^*(S;𝕜).
Suppose that 𝔉 is an A_∞-algebra homomorphism from ℬ to H^*(S; 𝕜).
Since ℬ is a minimal A_∞-algebra, μ^1_ℬ is trivial.
We have also established above that μ^3_ℬ vanishes as well.
As a consequence, a particular set of A_∞-functor equations, satisfied by the family of graded multilinear maps 𝔉^d : ℬ^⊗ d → H^*(S;𝕜)[1-d], simplifies to
𝔉^1 ( μ^4_ℬ (b_i, b_j, b_k, b_l) )
= 𝔉^3 (b_i, b_j, b_k) ∪ 𝔉^1(b_l) + 𝔉^1(b_i) ∪ 𝔉^3 (b_j, b_k, b_l) + 𝔉^2(b_i, b_j) ∪ 𝔉^2 (b_k, b_l)
+ 𝔉^3 (μ_ℬ^2 (b_i, b_j) , b_k , b_l) + 𝔉^3(b_i, μ_ℬ^2 (b_j , b_k ), b_l)
+ 𝔉^3(b_i, b_j , μ_ℬ^2 (b_k , b_l))
In the rest of the proof we refer to the above equation as Eq_(i,j,k,l)
and consider the sum of all the equations Eq_(i,j,k,l) where (i,j,k,l) is a cyclic permutation of (1,2,5,4), (1,5,2,4) or (2,5,2,5).
First of all, the computation preceding this proposition implies that the sum of the left hand sides of these ten equations is equal to (-1)^p+1 𝔉^1(a_0).
In contrast, the right hand side of the sum of these ten equations is 0.
Once we establish this, we get 𝔉^1(a_0)=0 and hence 𝔉 is not a quasi-isomorphism.
To prove the vanishing of the right hand side we consider the terms on the right hand side in three separate groups and argue that each group adds up to 0 under the assumption that char(𝕜)=2. First observe that, since the cup product ∪ is (graded-)commutative, each of the first two terms on the right hand side of the equation Eq_(i,j,k,l) appears in exactly one other equation, namely Eq_(l,i,j,k) or Eq_(j,k,l,i).
For the same reason, the third term on the right hand side of Eq_(i,j,k,l) is cancelled by that of Eq_(k,l, i,j), unless of course (i,j)=(k,l).
This leaves us with the sum of the third terms of Eq_(2,5, 2,5) and Eq_(5, 2,5,2),
𝔉^2(b_2, b_5) ∪ 𝔉^2 (b_2, b_5) + 𝔉^2(b_5, b_2) ∪ 𝔉^2 (b_5, b_2)
Finally, remember that we always have μ^2_ℬ (b_i , b_j)=0 for |i-j|≠ 3 and, if char(𝕜)=2,
μ^2_ℬ (b_4 , b_1) = μ^2_ℬ (b_2 , b_5) = μ^2_ℬ (b_5 , b_2) = a_0 .
This suffices to conclude that each of the last three terms on the right hand side of any one of the equations is either 0 or it appears in exactly two of our equations, e.g. the fifth term in Eq_(2,4,1,5) is equal to the fifth term in Eq_(2,5,2,5) .
The Legendrian knot Λ_p,q,r,s admits an augmentation which is not induced by an exact orientable Lagrangian filling.
When char(𝕜) ≠ 2 our proof of Prop. (<ref>) breaks down because the right hand side of the sum of the ten equations we consider in the last step of the proof is equal to
2 (-1)^p ( 𝔉^3 (a_0 , b_2 , b_5) + 𝔉^3(b_2, a_0, b_5) + 𝔉^3(b_2, b_5 , a_0) )
which is not necessarily 0 in general.
(of Lem. (<ref>))
We prove the formality of the differential graded algebra C=C^*(S, 𝕜) of (simplicial) cochains with the cup product on the closed surface S associated to the triangulation given in Fig. (<ref>) by providing a zig-zag of explicit dg-algebra quasi-isomorphisms connecting C and the cohomology algebra H=H^*(S, 𝕜) of S.
We denote the generators of C by
e_i, α_j, β_j, θ_k, γ_k , for 1≤ j ≤ g and 1≤ k ≤ 4g
which represent the duals of the simplices
e^0_i, a_j, b_j , e^1_k, e^2_k
as indicated in Fig. (<ref>).
The nontrivial differentials and products can be read from the triangulation as
d e_1 = d e_2 = θ_1 + ⋯ + θ_4g , d α_j = γ_4j-3 + γ_4j-1 , d β_j = γ_4j-2 + γ_4j , d θ_k = γ_k-1 + γ_k
and
e_i e_i = e_i , e_1 θ_k = θ_k , e_1 γ_k = γ_k , θ_k e_2 = θ_k , γ_k e_2 = γ_k , e_2 α_j = α_j e_2 = α_j , e_2 β_j = β_j e_2 = β_j ,
θ_4j-3 α_j = γ_4j-3 , θ_4j-2 β_j = γ_4j-2 , θ_4j α_j = γ_4j-1 , θ_4j+1 β_j = γ_4j
(In the above equations and the rest of the proof, indices should always be interpreted modulo 4g.)
We now define another dg-algebra, quasi-isomorphic to C and with a simplified differential so that the rest of the proof is more transparent.
This new dg-algebra C' is generated by
e, φ_j, ψ_j, ν, η_1, ζ_1, ξ_l, ν_l , for 1≤ j ≤ g and 1 ≤ l ≤ 4g-1
so that the map Φ : C' → C
defined by
Φ: e ↦ e_1 + e_2 , η_1 ↦ e_1 , ζ_1 ↦ θ_1 + ⋯ + θ_4g ,
φ_j ↦ α_j + θ_4j-2 + θ_4j-1 , ψ_j ↦ β_j + θ_4j-1 + θ_4j ,
ξ_l ↦ θ_l , ν_l ↦ γ_l-1 + γ_l , ν ↦ γ_4g
is a dg-algebra quasi-isomorphism.
More precisely, on C', the nontrivial differentials are
d' η_1 = ζ_1 , d' ξ_l = ν_l .
Here e is the identity element, and the remaining products are
φ_j ψ_j = ν + ν_1 + ⋯ + ν_4j-2 , ψ_j φ_j = ν + ν_1 + ⋯ + ν_4j-1
ξ_4j-3 φ_j = ν + ν_1 + ⋯ + ν_4j-3 , ξ_4j-2 ψ_j = ν + ν_1 + ⋯ + ν_4j-2
for j<g, ξ_4j φ_j = ν + ν_1 + ⋯ + ν_4j-1 , ξ_4j+1 ψ_j = ν + ν_1 + ⋯ + ν_4j , and ξ_1 ψ_g = ν ;
η_1 η_1 = η_1 , η_1 ξ_l = ξ_l , η_1 φ_j = ξ_4j-2 + ξ_4j-1 ,
for j<g, η_1 ψ_j = ξ_4j-1 + ξ_4j , η_1 ψ_g = ζ_1 + ξ_1 + ⋯ + ξ_4g-2 , η_1 ν = ν ,
η_1 ζ_1 = ζ_1 , η_1 ν_l = ν_l , ζ_1 φ_j = ν_4j-2 + ν_4j-1 ,
for j<g, ζ_1 ψ_j = ν_4j-1 + ν_4j , ζ_1 ψ_g = ν_1 + ⋯ + ν_4g-2
In the next step, we define yet another dg-algebra C̃ by stabilizing C', i.e. C̃ contains C' as a subalgebra and the inclusion map is a dg-algebra quasi-isomorphism.
Namely, we add the generators
η_k, ζ_k , for 2 ≤ k ≤ 2g+1 ,
with |η_k| = 0 , |ζ_k| = 1 , d̃ η_k = ζ_k , d̃ ζ_k = 0
to those of C', and extend the algebra structure to C̃ by adding the following nontrivial products
η_2j ψ_j = ξ_1 + ⋯ + ξ_4j-2 , η_2j+1 φ_j = ξ_1 + ⋯ + ξ_4j-1
ζ_2j ψ_j = ν_1 + ⋯ + ν_4j-2 , ζ_2j+1 φ_j = ν_1 + ⋯ + ν_4j-1
for j=1, … , g.
Finally, it is clear that the map Φ̃ : H → C̃ defined on the cohomology algebra H=H^*(S; 𝕜) by
Φ̃ : e ↦ e , φ_j ↦ φ_j + ζ_2j , ψ_j ↦ ψ_j + ζ_2j+1 , ν ↦ ν
is a dg-algebra quasi-isomorphism,
proving the formality of C=C^*(S; 𝕜).
BC F. Bourgeois, B. Chantraine, Bilinearized Legendrian contact homology and the augmentation category. J. Symplectic Geom. 12 (2014) 553–583.
cha B. Chantraine, On Lagrangian concordance of Legendrian knots. Algebr. Geom. Topol. 10 (2010) 63–85.
chekanov Y. Chekanov, Differential algebra of Legendrian links. Invent. Math. 150 (2002) 441–483.
CEKSW G. Civan, P. Koprowski, J. Etnyre, J. Sabloff, A. Walker, Product structures for Legendrian contact homology. Math. Proc. Cambridge Philos. Soc. 150 (2011) 291–311.
DGMS P. Deligne, P. Griffiths, J. Morgan, D. Sullivan, Real homotopy theory of Kähler manifolds. Invent. Math. 29 (1975) 245–274.
D-R G. Dimitroglou Rizell, Lifting pseudo-holomorphic polygons to the symplectisation of P× and applications.
Quantum Topol. 7 (2016) 29–105.
E T. Ekholm, Rational SFT, linearized Legendrian contact homology, and Lagrangian Floer cohomology. Perspectives in analysis, geometry, and topology, 109–145,
Progr. Math., 296, Birkhäuser/Springer, New York, 2012.
EHK T. Ekholm, K. Honda, T. Kálmán, Legendrian knots and exact Lagrangian cobordisms. J. Eur. Math. Soc. 18 (2016) 2627–2689.
EkL T. Ekholm, Y. Lekili, Duality between Lagrangian and Legendrian invariants. arXiv:1701.01284
EL T. Etgü, Y. Lekili, Koszul duality patterns in Floer theory. Geom. Topol. (to appear), arXiv:1502.07922
Seidel P. Seidel, Fukaya categories and Picard-Lefschetz theory, Zurich Lectures in Advanced Mathematics, European Mathematical Society (EMS), Zürich, 2008.
|
http://arxiv.org/abs/1701.07626v2 | 20170126093403 | Isochronous Dynamics in Pulse Coupled Oscillator Networks with Delay | [
"Pan Li",
"Wei Lin",
"Konstantinos Efstathiou"
] | math.DS | [
"math.DS"
] |
P.Li@rug.nl
School of Mathematical Sciences, Centre for Computational Systems Biology of ISTBI, Fudan University, Shanghai 200433, China
Johann Bernoulli Institute for Mathematics and Computer Science,
University of Groningen, P.O. Box 407, 9700 AK, Groningen, The Netherlands
wlin@fudan.edu.cn
School of Mathematical Sciences, Centre for Computational Systems Biology of ISTBI, Fudan University, Shanghai 200433, China
Shanghai Key Laboratory of Contemporary Applied Mathematics, and LMNS, Ministry of Education, China
K.Efstathiou@rug.nl
Johann Bernoulli Institute for Mathematics and Computer Science,
University of Groningen, P.O. Box 407, 9700 AK, Groningen, The Netherlands
We consider a network of identical pulse-coupled oscillators with delay and all-to-all coupling. We demonstrate that the discontinuous nature of the dynamics induces the appearance of isochronous regions—subsets of the phase space filled with periodic orbits having the same period. For fixed values of the network parameters each such isochronous region corresponds to a subset of initial states on an appropriate surface of section with non-zero dimension such that all periodic orbits in this set have qualitatively similar dynamical behaviour. We analytically and numerically study in detail such an isochronous region, give a proof of its existence, and describe its properties. We further describe other isochronous regions that appear in the system.
Isochronous Dynamics in Pulse Coupled Oscillator Networks with Delay
Konstantinos Efstathiou
December 30, 2023
====================================================================
Pulse coupled oscillator networks are a key model for the study of synchronization in a wide variety of systems, ranging from fireflies to wireless communication systems. Moreover, despite their simplicity, they manifest dynamical behavior that does not typically appear in smooth finite-dimensional dynamical systems. We report on the existence of isochronous dynamics in pulse coupled oscillator networks with delay: for suitable values of the parameters there exist open sets of initial conditions giving periodic orbits with the same period. This, previously unknown, behavior of pulse coupled oscillator networks with delay provides a deeper understanding of their dynamics and how they can reach synchronization.
§ INTRODUCTION
Pulse coupled oscillator networks Pulse coupled oscillator networks (PCONs) have been used to model interactions in networks where each node affects other nodes in a discontinuous way. Two such examples are the synchronization related to the function of the heart <cit.> and the synchronization of fireflies <cit.>. There is now an extensive literature on the dynamics of pulse coupled oscillator networks focusing on synchronization and the stability of synchronized states.
Concerning synchronization, after the seminal work <cit.> who considered excitatory coupling with no delay, <cit.> showed the importance of delayed and inhibitory coupling for complete synchronization, while excitatory coupling leads to synchronization with a phase lag. In particular, for inhibitory coupling it was shown that the network synchronizes in multistable clusters of common phase. <cit.> showed that all-to-all networks with delayed excitatory coupling do not synchronize, either completely or in a weak sense, for sufficiently small delay and coupling strength. In <cit.> it was shown that the parameter space in systems with excitatory coupling is separated into two regions that support different types of dynamics. The effect of network connectivity on synchronization is numerically studied in <cit.> where it is shown that the proportion of initial conditions that lead to synchronization is an increasing function of the node-degree. <cit.> showed that pulses induce the breakdown of order preservation, and demonstrated a system of 2 identical and symmetrically coupled oscillators where the winding numbers of the two oscillators can be different. <cit.> showed that under self-adjustment assumptions, systems with heterogeneous phase rates and random individual delays would converge to a close-to-synchrony state. Moreover, synchronization has been considered in systems with stochastic features. <cit.> studied how small clusters of synchronized oscillators in all-to-all networks coalesce to form larger clusters and obtained exact results for the time-dependent distribution of cluster sizes.
Except for synchronized states more interesting dynamics also manifests in pulse coupled oscillator networks. The existence of unstable attractors has been established, numerically and analytically, in all-to-all pulse coupled oscillator networks with delay, see <cit.>. Unstable attractors are fixed points or periodic orbits, which are locally unstable, but have a basin of attraction which is an open subset of the state space. Heteroclinic connections between saddle periodic orbits, such as unstable attractors, have been shown to exist in pulse coupled oscillator networks with delay <cit.> and they have been proposed as representations of solutions of computational tasks. <cit.> showed that complex networks of dynamically connected saddle states are capable of computing arbitrary logic operations by entering into switching sequences in a controlled way. <cit.> gave an analysis of asymptotic stability for topologically strongly connected PCONs, while <cit.> analyzed the influence of asymmetric coupling and showed that it leads to a smaller bistability range of synchronized states. <cit.> numerically showed the existence of long chaotic transients in pulse-coupled oscillator networks. The length of the transients depends on the network connectivity and such transients become prevalent for large networks.
Isochronous dynamics
In this paper we report on a newly observed dynamical behavior of PCONs with delay. Specifically, we show that for appropriate values of the coupling parameters, that is, of the coupling strength ε and the delay τ, there is an n≥1-dimensional subset of state space foliated by periodic orbits having the same period. We call such subsets of the state space isochronous regions. These periodic orbits are equivalent in a sense we make precise in Definition <ref>. Furthermore, the parameter region for which such periodic orbits manifest is an open subset of the parameter space.
This type of observed dynamics in PCON with delay is a special case of isochronous dynamics. One talks of isochronous dynamics when a dynamical system has an open set of initial states that give rise to periodic solutions having the same period. Examples include the one-dimensional harmonic oscillator, any N-dimensional harmonic oscillator where the frequencies, ω_1,…,ω_N, satisfy N-1 resonance relations, and the restriction of the Kepler problem to any constant energy surface. We refer to <cit.> for an extensive review of recent results pertaining to isochronous dynamics in the context of ordinary differential equations and Hamiltonian systems. Nevertheless, such isochronous dynamics have not been previously observed in PCONs, except of course for the trivial case of identical uncoupled oscillators.
A non-trivial example of isochronous dynamics induced by a non-smooth map g: [0,1] → [0,1] is depicted in <ref>. Each point in the middle (red) segment of the graph of g, lying along the diagonal, is a fixed point of g and thus such points give isochronous dynamics of period 1.
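One concrete realization of such a map, with the identity on a middle segment, is sketched below; the outer branches here are a convenient choice and not those of the figure.

```python
import numpy as np

def g(x, a=0.35, c=0.65):
    """An interval map equal to the identity on [a, c]: every point of the
    middle segment is fixed, giving isochronous dynamics of period 1."""
    return np.clip(x, a, c)

x = np.linspace(0.0, 1.0, 11)
assert np.all(g(g(x)) == g(x))     # after one step every orbit is fixed
```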
Structure of the paper In <ref> we describe the dynamics of PCONs with delay and we review its basic properties. In <ref> we first present numerical experiments that show the appearance of n≥1-dimensional sets of periodic orbits on a surface of section for specific values of the dynamical parameters. Then we define the notion of a isochronous region. In <ref> we discuss in detail one of the isochronous regions in the system. We prove its existence for an open subset of parameter values, describe in detail the dynamics in the region, and determine the stability of the periodic orbits that constitute the region. In <ref> we briefly describe other isochronous regions that appear in the system. We conclude the paper in <ref>.
§ DYNAMICS OF PCONS WITH DELAY
In this section we specify the dynamics of the PCONs with delay that we consider in this paper.
§.§ Mirollo-Strogatz model with delay
We consider a variation of the Mirollo-Strogatz model <cit.>. The system here is a homogeneous all-to-all network consisting of N pulse coupled oscillators with delayed excitatory interaction. All the oscillators follow the same integrate-and-fire dynamics. Between receiving pulses the state of each oscillator evolves autonomously and its dynamics is smooth. When the i-th oscillator reaches the threshold value x_i = 1 its state is reset to x_i=0. At the same moment the i-th oscillator sends a pulse to all other oscillators, j i, in the network. The time between the moment an oscillator sends a pulse and the moment the other oscillators receive that pulse is the delay τ≥ 0. When the i-th oscillator receives m simultaneous pulses without crossing the threshold value, its state variable jumps to x_i' = x_i + m ε̂. If x_i + m ε̂≥ 1, that is, if the oscillator crosses the firing threshold by receiving these pulses, then the new state becomes x_i' = 0 ≡ 1. The dynamics for each oscillator is thus given by
ẋ_i(t) = F(x_i(t)),
x_i(t^+) = 0, if x_i(t)=1,
and
x_i (t) = min(1, x_i (t^-) + mε̂ ),
if m other oscillators fired at time t-τ.
Here, ε̂ = ε / (N-1), where ε≥ 0 is the coupling strength, and F is a positive, decreasing, function (F > 0, F' < 0).
To simplify the description of the dynamics we define, following <cit.>, the phases (θ_i)_i=1^N instead of the state variables (x_i)_i=1^N.
The two sets of variables are related through
x_i = f(θ_i),
where f:[0,1]→[0,1] is a diffeomorphism fixing the endpoints, that is, f(0)=0 and f(1)=1. The map f is defined through the requirement that the uncoupled dynamics of each oscillator is given by θ̇_i = 1. This implies
ẋ = F(x) = f'(f^-1(x)) = 1/(f^-1)'(x),
and that f is increasing and concave down (f' > 0, f” < 0).
Following <cit.> we choose
F(x) := F_b(x) = ((e^b - 1)/b) e^{-bx}, b > 0,
giving
f(θ) := f_b(θ) = (1/b) ln( 1 + (e^b-1) θ).
Then, in terms of the phases θ_i, the dynamics is given by
θ̇_i(t) = 1,
θ_i(t^+) = 0, if θ_i(t)=1,
andθ_i(t) = min{ 1, H(θ_i(t^-),mε̂) },
if m other oscillators fired at time t-τ.
The function H is defined by
H (θ, δ)
= f^-1(f(θ)+δ)
= e^{bδ} θ + (e^{bδ}-1)/(e^b-1),
and it gives the new phase of an oscillator with phase θ after it receives a pulse of size δ, ignoring the effect of the threshold.
Typically, one also defines the pulse response function (PRF) V(θ, δ) representing the change in phase after receiving a pulse of size δ, ignoring the effect of the threshold. Specifically,
V(θ,δ) = H(θ,δ) - θ = (e^{bδ} -1) θ + (e^{bδ}-1)/(e^b-1),
see <ref>.
Note that the function H in Eq. (<ref>) has the property
H(H( θ, δ), δ') = H(θ, δ+δ').
implying
H(H( θ, m ε̂), m' ε̂) = H(θ, (m+m') ε̂).
To simplify notation, for fixed value of ε̂, we write
H(θ,mε̂) = H_m(θ) and H(θ,ε̂) = H_1(θ) = H(θ).
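To make the phase maps concrete, the following minimal Python sketch (our illustration, not part of the original analysis) implements f_b, its inverse, and H, and checks both the closed form and the additivity property stated above; the value b = 3.0 is an arbitrary illustrative choice, since b is left as a free parameter in the text.

```python
import numpy as np

b = 3.0   # curvature parameter of F_b; an illustrative choice, not fixed above

def f(theta):
    """Phase -> state map f_b(theta) = (1/b) ln(1 + (e^b - 1) theta)."""
    return np.log1p(np.expm1(b) * theta) / b

def f_inv(x):
    """State -> phase map, the inverse of f_b."""
    return np.expm1(b * x) / np.expm1(b)

def H(theta, delta):
    """New phase after a pulse of size delta, threshold ignored."""
    return f_inv(f(theta) + delta)

theta, d1, d2 = 0.3, 0.05, 0.08
closed = np.exp(b * d1) * theta + np.expm1(b * d1) / np.expm1(b)
assert np.isclose(H(theta, d1), closed)                    # closed form of H
assert np.isclose(H(H(theta, d1), d2), H(theta, d1 + d2))  # additivity property
```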
§.§ Description of the dynamics
In principle, to determine the dynamics of a system with delay τ for t ≥ 0 one should know the phases θ_i(t), i=1,…,N of the oscillators for all t ∈ [-τ,0]. This information can be encoded in the phase history function
θ: [-τ,0] →𝕋^N : t ↦ (θ_1(t), …, θ_N(t)).
In the particular system studied here, this description can be further simplified since it is not all the information about the phases in [-τ,0] that is necessary to determine the future dynamics. Instead, it is enough to know the phases θ_i(0), i=1,…,N at t=0 and the firing moments of each oscillator in [-τ,0], that is, the moments when each oscillator reaches the threshold value.
We denote by -σ_i^(j) the j-th firing moment of the i-th oscillator in [-τ,0] and by Σ_i = {σ_i^(j)} the set of all such firings moments. Note that our ordering is
⋯ < - σ_i^(3) < - σ_i^(2) < - σ_i^(1)≤ 0.
To simplify notation we also write σ_i = σ_i^(1) in the case that Σ_i contains exactly one element. We call the σ_i^(j) firing time distances (FTD) and σ_i the last firing time distance (LFTD).
It is shown in <cit.> that for sufficiently small values of ε and τ the size of the set Σ = ⋃_i=1^N Σ_i is bounded for all t ≥ 0. The parameter region of interest in the present paper is not covered by the explicit estimates given in <cit.>. Nevertheless, for the specific orbits in the isochronous regions we consider, the size of Σ remains bounded for all t ≥ 0.
The dynamics of the system for t ≥ 0 can then be determined from the FTD in [-τ,0] and the phases at t=0, i.e., from the set
Φ = {{σ_i^(j)}_j, θ_i }_i=1,…,N.
When Φ is a finite set we can ask whether a neighborhood of Φ is finite or infinite dimensional. <cit.> show that, choosing an appropriate metric on the space of phase history functions, a neighborhood of Φ is finite dimensional. Nevertheless, this local dimension is not constant and is not bounded throughout the state space.
To describe high-dimensional dynamics it is convenient to introduce a Poincaré surface of section. Here we choose the surface θ_N = 0, see also <cit.>. Given a state Φ with θ_N = 0 the time evolution of the system produces a new state Φ' when θ_N becomes again 0. This defines the Poincaré map μ: Φ→Φ'. We call the sequence of points μ^j(Φ)=μ( μ^j-1(Φ) ), j=1,2,…, the Poincaré orbit with initial state μ^0(Φ)=Φ. We also define a related concept.
Consider a Poincaré orbit {μ^j(Φ) }_j=0,1,2,… and let pr_θ denote the projection
Φ = {{σ_i^(j)}_j, θ_i }_i=1,…,N↦{θ_i }_i=1,…,N-1.
Then the phase orbit of Φ is the sequence pr_θ(μ^j(Φ)), j=0,1,2,….
Note that the phase orbit gives only a projection of the dynamics to the space of N-1 phase variables (θ_1,…,θ_N-1). Since the full dynamics further depends on the firing moments in the time interval [-τ,0] we cannot define a map 𝕋^N-1→𝕋^N-1 that depends only on the phases (θ_1,…,θ_N-1) and fully encodes the dynamics.
A Poincaré orbit {μ^j(Φ) }_j=0,1,2,… for which μ^{j+T_P}(Φ) = μ^j(Φ) for all j ≥ 0 is called periodic with Poincaré period T_P. Note that T_P is not necessarily the minimal period. By construction, a periodic Poincaré orbit corresponds to a periodic orbit in the full state space for the dynamics with continuous time t ≥ 0. In particular, let Φ(t) be the state at time t ≥ 0 corresponding to a periodic Poincaré orbit. Then there is a time T, corresponding to T_P, such that Φ(t+T) = Φ(t) for all t ≥ 0. We call T the orbit period.
§ ISOCHRONOUS DYNAMICS
In this paper we consider a pulse coupled oscillator network with N=3 oscillators. We show that there is an open region in the parameter space (ε,τ) with families of periodic orbits exhibiting intriguing dynamical behavior. In particular, the periodic orbits are not isolated but for each (ε,τ) they fill up an n≥2-dimensional subset in state space, or equivalently, an n≥1-dimensional subset on the Poincaré surface of section.
§.§ Numerical Experiments
We first report the results of numerical experiments for a pulse coupled 3-oscillator network with delay with parameters (ε, τ). Specifically, we numerically compute the orbits of the system starting from a specific class of initial states Φ on the Poincaré surface of section θ_3 = 0. These states are defined by scanning the (θ_1,θ_2)-space 𝕋^2 and setting θ_3 = 0. As mentioned earlier, this information is not sufficient for determining the dynamics of the system and we also need to know the firing time distances. In this computation, for the oscillators 1 and 2 we set
Σ_i = {σ_i^(j)} = {θ_i} if θ_i ≤τ, and Σ_i = ∅ if θ_i > τ.
Note that this choice of initial states does not exhaustively cover the phase space due to the restrictions imposed on the FTDs. In particular, we could have also considered initial states with more firing moments in [-τ,0], but our choice is the simplest natural one and sufficiently reduces the computational time to make the computation feasible while allowing us to study the system for different parameter values.
We numerically find that all such orbits are eventually periodic. There is a time T_0 such that for t ≥ T_0 it holds that Φ(t+T) = Φ(t), where T > 0 is the eventual orbit period. In other words, each initial state converges in finite time to a periodic attractor with period T.
In <ref> we show for (ε, τ) = (0.58, 0.58) the projection of the periodic attractors to the (θ_1,θ_2)-space, that is, we show the phase orbits corresponding to the periodic attractors. The figure shows the existence of periodic orbits with Poincaré periods T_P ∈{3,4,5}. Note that we did not find any attractors with Poincaré periods T_P = 2 or T_P ≥ 6 in this computation. Most importantly, we observe that for (ε, τ) = (0.58, 0.58) the attractors with Poincaré periods T_P ∈{3,4,5} are not isolated. Projections of periodic attractors with T_P=3 appear to fill one-dimensional sets in the (θ_1,θ_2)-space. Projections of periodic attractors with T_P=4 or T_P=5 appear to fill one- and two-dimensional sets. In what follows we analytically study the periodic orbits that we numerically observed. We aim to prove that their projections to the (θ_1,θ_2)-plane fill one- and two-parameter sets and to describe the appearance of these orbits and their properties.
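To illustrate how such orbits can be computed, we add a minimal event-driven Python sketch of the delayed network (our own illustration, not the authors' code). One modeling choice is made explicit as an assumption: an oscillator pushed across threshold by a pulse is reset and broadcasts a pulse of its own, in line with the reset rule x_i' = 0 ≡ 1; the value b = 3.0 is illustrative.

```python
import numpy as np

def H(theta, delta, b):
    """Pulse response H(theta, delta); the threshold is handled by the caller."""
    return np.exp(b * delta) * theta + np.expm1(b * delta) / np.expm1(b)

def simulate(theta0, past_firings, eps, tau, b=3.0, t_end=50.0):
    """Event-driven sketch of the delayed all-to-all network.

    theta0       -- phases at t = 0
    past_firings -- (oscillator, firing time in [-tau, 0]) pairs; each firing
                    is broadcast and delivered at firing_time + tau
    """
    N = len(theta0)
    eps_hat = eps / (N - 1)
    theta = np.array(theta0, float)
    queue = sorted((tf + tau, i) for i, tf in past_firings)  # (delivery, sender)
    t, trace = 0.0, []
    while t < t_end:
        dt_fire = 1.0 - theta.max()                # free evolution: dtheta/dt = 1
        dt_pulse = queue[0][0] - t if queue else np.inf
        dt = min(dt_fire, dt_pulse)
        theta += dt
        t += dt
        if queue and queue[0][0] <= t + 1e-12:     # deliver all due pulses
            due = [s for td, s in queue if td <= t + 1e-12]
            queue = [(td, s) for td, s in queue if td > t + 1e-12]
            for j in range(N):
                m = sum(1 for s in due if s != j)  # m simultaneous pulses for j
                if m:
                    theta[j] = min(1.0, H(theta[j], m * eps_hat, b))
        # reset every oscillator at threshold; assumption: an oscillator pushed
        # across threshold by a pulse also broadcasts (the rule x' = 0 == 1)
        for i in np.flatnonzero(theta >= 1.0 - 1e-12):
            theta[i] = 0.0
            queue.append((t + tau, i))
        queue.sort()
        trace.append((t, theta.copy()))
    return trace

# example scan-type initial state on the section theta_3 = 0 (Sigma_i = {theta_i})
trace = simulate([0.4, 0.2, 0.0], [(0, -0.4), (1, -0.2), (2, 0.0)], 0.58, 0.58)
```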
§.§ Definitions
To give a systematic description we classify the periodic orbits into equivalence classes. First, we introduce some notation. Let O be a periodic orbit with period T > 0 and denote by P_i,j the j-th pulse received by the i-th oscillator in the time interval [0,T). Denote by n(P_i,j) the multiplicity of the pulse P_i,j, that is, how many simultaneous pulses correspond to P_i,j.
Two periodic orbits O and O' are pulse equivalent if they have the same periods T=T' > 0, the sets { P_i,j} and { P'_i,j} have the same cardinalities, and n(P_i,j) = n(P'_i,j) for all i,j.
We now define an isochronous region. For this we require not only that the orbit periods in an isochronous region are the same, but also the stronger condition that the orbits are pulse equivalent.
A subset ℬ of the state space is an isochronous region of period T (or Poincaré period T_P) if
* all orbits starting in ℬ are pulse equivalent with period T (or Poincaré period T_P),
* each orbit starting in ℬ stays within ℬ, and
* there is a homeomorphism S between the space of orbits in ℬ and an open, connected, subset Ω of ℝ^k, k ≥ 1.
ℬ is required to be invariant under the ℝ_+ action induced by the dynamics. This allows us to define the space of orbits ℬ / ℝ_+ obtained by reducing ℬ with respect to the ℝ_+ action. Note that the requirement that ℬ / ℝ_+ is connected does not imply that ℬ is also connected, since each periodic orbit in ℬ may be disconnected. The requirement that dim ℬ / ℝ_+ = k ≥ 1 implies that isolated periodic orbits are excluded.
With these definitions in place, we now turn to the detailed description of one of the isochronous regions that we numerically identified in <ref>.
§ THE ISOCHRONOUS REGION IR4
In this section we select one of the numerically observed isochronous regions, describe its periodic orbits, and discuss its existence. In <ref> we consider the dynamical stability of the periodic orbits. Specifically, we focus on the orbits with T_P=4 appearing in the lower right corner of <ref>. We denote the corresponding isochronous region by IR4.
§.§ Description
We have verified, analytically and numerically, that all periodic orbits represented by these points can be parameterized by the firing time distances (FTD) (σ_1,σ_2,σ_3) of the three oscillators. We first prove the following slightly more general result which is also useful for determining the stability of the periodic orbits, see <ref>.
Consider the initial state of the pulse coupled 3-oscillator network on the Poincaré surface of section θ_3 = 0, determined by the phases (θ_1,θ_2,0) and the firing time distances ({σ_1},{σ_2},{σ_3}) satisfying:
* 0 < σ_2 < σ_1 < σ_3 < τ,
* H_* < θ_1 + τ - σ_3 < 1,
* H_* < H(θ_2+τ-σ_3)-σ_1+σ_3 < 1,
* H_* < H(τ-σ_1)-σ_2+σ_1 < 1,
* H(σ_3 - σ_2) + σ_2 < 1.
Then the dynamics of the system induces the Poincaré map
G : (σ_1,σ_2,σ_3;θ_1,θ_2) ↦
(σ_3-σ_2,σ_1-σ_2,τ-σ_2;H(σ_3-σ_2),σ_1-σ_2).
We use the event sequence representation of the dynamics, see <cit.>. In particular, we denote by [P,(i_1,…,i_k),t] a pulse that will be received by the oscillators i_1,…,i_k after time t. We denote by [F,i,t] the event corresponding to the oscillator i firing after time t, further implying that θ_i = 1 - t. The initial condition given by phases (θ_1,θ_2,0) and firing time distances ({σ_1},{σ_2},{σ_3}) corresponds to the event sequence
[P,(1,2),τ-σ_3],
[P,(2,3),τ-σ_1],
[P,(1,3),τ-σ_2],
[P,(1,2),τ];
[F,1,1-θ_1],
[F,2,1-θ_2],
[F,3,1].
Note that we write pulse events separately from fire events, keeping the time ordering in each of the subsets. In particular, this implies that 0 < σ_2 < σ_1 < σ_3 < τ and that 0 < θ_2 < θ_1 < 1.
The inequality θ_1 + τ - σ_3 < 1 implies that the first pulse event will be processed first. Then the next event sequences will be
1⟶
[P,(1,2),0],
[P,(2,3),σ_3-σ_1],
[P,(1,3),σ_3-σ_2],
[P,(1,2),σ_3];
[F,1,1-θ_1-τ+σ_3],
[F,2,1-θ_2-τ+σ_3],
[F,3,1-τ+σ_3]
2⟶ [P,(2,3),σ_3-σ_1],
[P,(1,3),σ_3-σ_2],
[P,(1,2),σ_3];
[F,1,0],
[F,2,1-H(θ_2+τ-σ_3)],
[F,3,1-τ+σ_3].
Here we used the assumptions that θ_1 + τ - σ_3 > H_* and θ_2 + τ - σ_3 < H_*. The next event sequence is
3⟶ [P,(2,3),σ_3-σ_1],
[P,(1,3),σ_3-σ_2],
[P,(1,2),σ_3],
[P,(2,3),τ];
[F,2,1-H(θ_2+τ-σ_3)],
[F,3,1-τ+σ_3],
[F,1,1]
4⟶ [P,(2,3),0],
[P,(1,3),σ_1-σ_2],
[P,(1,2),σ_1],
[P,(2,3),τ+σ_1-σ_3];
[F,2,1-H(θ_2+τ-σ_3)+σ_1-σ_3],
[F,3,1-τ+σ_1],
[F,1,1+σ_1-σ_3]
5⟶ [P,(1,3),σ_1-σ_2],
[P,(1,2),σ_1],
[P,(2,3),τ+σ_1-σ_3];
[F,2,0],
[F,3,1-H(τ-σ_1)],
[F,1,1+σ_1-σ_3]
6⟶ [P,(1,3),σ_1-σ_2],
[P,(1,2),σ_1],
[P,(2,3),τ+σ_1-σ_3],
[P,(1,3),τ];
[F,3,1-H(τ-σ_1)],
[F,1,1+σ_1-σ_3],
[F,2,1].
The inequality H(θ_2+τ-σ_3)-σ_1+σ_3 < 1 implies again that the first pulse event is processed first, and the inequality H(θ_2+τ-σ_3)-σ_1+σ_3 > H_* implies that oscillator 2 fires. Moreover, the assumption τ - σ_1 < H_* ensures that oscillator 3 does not fire.
The next event sequence is
7⟶ [P,(1,3),0],
[P,(1,2),σ_2],
[P,(2,3),τ+σ_2-σ_3],
[P,(1,3),τ+σ_2-σ_1];
[F,3,1-H(τ-σ_1)+σ_2-σ_1],
[F,1,1+σ_2-σ_3],
[F,2,1+σ_2-σ_1]
8⟶ [P,(1,2),σ_2],
[P,(2,3),τ+σ_2-σ_3],
[P,(1,3),τ+σ_2-σ_1];
[F,3,0],
[F,1,1-H(σ_3-σ_2)],
[F,2,1+σ_2-σ_1].
Here, by the assumptions H_* < H(τ-σ_1)-σ_2+σ_1 < 1 and H(σ_3 - σ_2) + σ_2< 1, we have
9⟶ [P,(1,2),σ_2],
[P,(2,3),τ+σ_2-σ_3],
[P,(1,3),τ+σ_2-σ_1],
[P,(1,2),τ]
;
[F,1,1-H(σ_3-σ_2)],
[F,2,1+σ_2-σ_1],
[F,3,1],
thus proving the statement.
Let Ω_ε,τ be the subset of the (σ_1,σ_2,σ_3)-space defined by the relations
0 < σ_2 < σ_1 < σ_3 < τ,
H_* ≤ F_k(σ; τ) ≤ 1, k=1,2,3,4,
where
F_1(σ;τ) := H(σ_1)+τ-σ_3,
F_2(σ;τ) := H(τ-σ_3+σ_2)+σ_3-σ_1,
F_3(σ;τ) := H(τ-σ_1)+σ_1-σ_2,
F_4(σ;τ) := H(σ_3-σ_2)+σ_2,
and
H_* = H_1^{-1}(1) = (e^b - e^{bε̂}) / ((e^b-1) e^{bε̂}).
Moreover, define the map
S : (σ_1, σ_2, σ_3)
↦ (θ_1, θ_2, θ_3; {σ_1}, {σ_2}, {σ_3})
= (H(σ_1), σ_2, 0; {σ_1}, {σ_2}, {σ_3}),
from Ω_ε,τ to the space of initial conditions of the PCON. Then we prove the following statement.
Consider the initial state of the pulse coupled 3-oscillator network on the Poincaré surface of section θ_3 = 0, given by S(σ) for σ∈Ω_ε,τ. Then the map
g : ( σ_1, σ_2, σ_3 ) ↦ ( σ_3-σ_2, σ_1-σ_2, τ-σ_2 )
has the following properties:
* g(Ω_ε,τ) = Ω_ε,τ;
* G(S(σ)) = S(g(σ)) for all σ∈Ω_ε,τ, where G is the Poincaré map (<ref>).
First, one easily checks that if σ∈Ω_ε,τ then g(σ) ∈Ω_ε,τ and vice versa. Then, note that if σ∈Ω_ε,τ then S(σ) satisfies the conditions of Proposition <ref>. This implies
G(S(σ))
= (σ_3-σ_2,σ_1-σ_2,τ-σ_2; H(σ_3-σ_2),σ_1-σ_2)
= S(g(σ)).
Proposition <ref> shows that S intertwines the map g on Ω_ε,τ with the Poincaré map G. We then have the following description of the dynamics in Ω_ε,τ.
The map g on Ω_ε,τ has period 4, that is, g^4(σ) = σ for all σ∈Ω_ε,τ. The point σ_* := (τ/2,τ/4,3τ/4) ∈Ω_ε,τ is a fixed point of g, and points
σ∈Ω_ε,τ along the line parameterized by σ = σ_* + t (0,1,1), t ∈ℝ, are period-2 points of g.
The proof of the statement is a straightforward computation. Nevertheless, it is more enlightening to proceed in a different way. Let
σ = σ_* + s,
where s = (s_1,s_2,s_3). In terms of s, g becomes the linear map
g(s) = L s,
where
L =
[ 0 -1 1; 1 -1 0; 0 -1 0 ].
Clearly, s=0 is the only fixed point of L. One checks that L^2 acts as rotation by π about the line s = t(0,1,1), t ∈ℝ. Therefore, L^2 leaves this line invariant, and L^4 is the identity.
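The eigenstructure used in this proof is easily checked numerically; the following short sketch (added for illustration) confirms that the eigenvalues of L are fourth roots of unity, that L^4 is the identity, and that L reverses the vector (0,1,1) spanning the period-2 line.

```python
import numpy as np

# linear part of g in the shifted coordinates s = sigma - sigma_*
L = np.array([[0, -1, 1],
              [1, -1, 0],
              [0, -1, 0]])

print(np.linalg.eigvals(L))                                  # -1, +i, -i
print(np.allclose(np.linalg.matrix_power(L, 4), np.eye(3)))  # L^4 = id, so g^4 = id
print(L @ np.array([0, 1, 1]))                               # (0,-1,-1) = -(0,1,1),
                                                             # the period-2 line
```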
Proposition <ref> implies that the Poincaré map G has Poincaré period T_P = 4 for each S(σ), σ∈Ω_ε,τ. The evolution of the phases of the 3 oscillators for such orbits is depicted in <ref> and the detailed dynamics is given in <ref>. The set Ω_ε,τ also gives rise to periodic orbits with smaller minimal orbit period than T = 3τ. In particular, there is a line in Ω_ε,τ given by (σ_1,σ_2,σ_3) = (τ/2,σ_2,τ/2+σ_2) for which all points give rise to period T=3τ/2 orbits (T_P=2), see <ref>. One point along this line, with σ_2 = τ/4, gives rise to a period T=3τ/4 orbit (T_P=1), see <ref>.
§.§ Existence
Let ℬ_ε,τ = S(Ω_ε,τ) be the embedding of Ω_ε,τ in the (θ;σ)-space. Moreover, let 𝒜_ε,τ = pr_θ(ℬ_ε,τ), where pr_θ: ℝ^6 →ℝ^2 is the projection to the (θ_1,θ_2)-plane. The set 𝒜_ε,τ is depicted in <ref> for (ε,τ)=(0.58,0.58). One can check that Ω_ε,τ and 𝒜_ε,τ have non-empty interior for (ε, τ) = (0.58, 0.58) and that each point in ℬ_ε,τ is the initial condition of a periodic orbit in IR4 of <ref> with period T = 3τ and Poincaré period T_P = 4. Therefore, ℬ_ε,τ is a periodic plateau.
The isochronous region IR4 of <ref> exists in the subset of the parameter space (ε, τ) given by
H_* ≤ H( τ/2) + τ/4≤ 1,
see <ref>.
Note that
1/4∑_k=1^4 F_k(σ;τ) = H(τ/2) + τ/4.
This implies that Eq. (<ref>) is a necessary condition for Eq. (<ref>) to hold. We show that if Eq. (<ref>) holds then Ω_ε,τ contains a non-empty open subset. Consider the point
σ_* = (τ/2, τ/4, 3τ/4).
Then σ_* ∈Ω_ε,τ if and only if Eq. (<ref>) holds, since in this case we have F_k(σ_*;τ) = H(τ/2) + τ/4, for k=1,…,4. Therefore, when Eq. (<ref>) holds, Ω_ε,τ≠∅. Moreover, when the strict form of Eq. (<ref>) holds, there is an open neighborhood U of σ_* in σ-space such that U ⊂Ω_ε,τ.
<ref> shows the volume of Ω_ε,τ for (ε,τ) ∈ [0,1]^2. The volume is computed numerically in Mathematica.
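As a hedged alternative to the Mathematica computation, the volume can also be estimated by Monte Carlo sampling of the defining inequalities (<ref>); the sketch below is our addition, with b = 3.0 an illustrative choice, since the value of b used in the paper's numerics is not restated here.

```python
import numpy as np

def H(theta, delta, b):
    return np.exp(b * delta) * theta + np.expm1(b * delta) / np.expm1(b)

def omega_volume(eps, tau, b=3.0, n=400_000, seed=0):
    """Monte Carlo estimate of vol(Omega_{eps,tau}); b = 3.0 is illustrative."""
    eh = eps / 2.0                        # eps_hat = eps/(N-1) with N = 3
    H_star = (np.exp(b) - np.exp(b * eh)) / (np.expm1(b) * np.exp(b * eh))
    s1, s2, s3 = np.random.default_rng(seed).uniform(0.0, tau, (3, n))
    F = np.stack([H(s1, eh, b) + tau - s3,
                  H(tau - s3 + s2, eh, b) + s3 - s1,
                  H(tau - s1, eh, b) + s1 - s2,
                  H(s3 - s2, eh, b) + s2])
    ok = (s2 < s1) & (s1 < s3) & (s3 < tau) \
         & np.all((H_star <= F) & (F <= 1.0), axis=0)
    return ok.mean() * tau**3

print(omega_volume(0.58, 0.58))
```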
If H(τ/2)+τ/4 = H_* or H(τ/2)+τ/4=1, the inequalities (<ref>) are satisfied only by the point σ_*. <ref> shows the phases for an orbit starting from the point σ_* when (ε,τ) moves outside the region of existence of IR4. In that case the dynamics converges in short time to a stable periodic orbit with T_P=1.
<ref> compares the analytically obtained 𝒜_ε,τ for (ε,τ) = (0.58,0.58) to the numerical results discussed in <ref>. We note that the numerically obtained orbits cover only part of 𝒜_ε,τ. This can be explained by the fact that the space of initial conditions that we scanned in our numerical experiments does not include the periodic orbits in IR4. Some of the orbits in IR4 are periodic attractors for our numerical initial conditions but others are not accessible. This effect is much more pronounced for (ε,τ) = (0.45,0.45) as is shown in <ref>. In this case our initial numerical experiments did not reveal the existence of any orbits in IR4. Nevertheless, in this case Ω_ε,τ is non-empty and subsequent numerical experiments with different initial conditions allowed us to numerically find orbits in IR4.
§.§ Stability
In this section we consider the stability of the periodic orbits in IR4. We show that, for σ∈Ω_ε,τ, small changes in θ lead to the same periodic orbit while small changes in σ lead to a nearby periodic orbit in the same pulse equivalence class. In particular, we have the following result.
Let (θ,σ) = S(σ), σ∈Ω_ε,τ, and denote by Y(σ) the corresponding periodic orbit in IR4. Then for Δθ and Δσ sufficiently small, the orbit with initial condition (θ+Δθ,σ+Δσ) converges in one iteration of the Poincaré map to the periodic orbit Y(σ+Δσ).
This statement is a straightforward consequence of Proposition <ref>. Since Ω_ε,τ is open in ℝ^3, given σ∈Ω_ε,τ, there is an open neighborhood U ∋σ with U ⊆Ω_ε,τ. Hence, for Δσ small enough we have σ + Δσ∈Ω_ε,τ, so that (θ',σ+Δσ) = S(σ + Δσ) satisfies the conditions of Proposition <ref>. This implies that for sufficiently small Δθ we have that (θ'+Δθ',σ+Δσ) also satisfies the conditions of Proposition <ref>, giving convergence to Y(σ+Δσ). Finally, we note that Δθ' can be made sufficiently small by making Δθ sufficiently small because of the continuity of the map S.
§ OTHER ISOCHRONOUS REGIONS
The isochronous region IR4 is not the only such region that appears in the system under consideration. In this section we briefly report on two other such regions.
§.§ The isochronous region IR3
The isochronous region IR3 consists of periodic orbits with Poincaré period T_P=3. For orbits in IR3, two of the oscillators have the same phase. This implies that the projection of orbits in IR3 to the (θ_1,θ_2)-plane lies either on one of the axes or along the diagonal.
Let Ω_ε,τ be the subset of the (σ_1,σ_3)-space defined by the following relations, where we write σ_2 := σ_1 since oscillators 1 and 2 share the same last firing time distance:
0 < σ_1 < σ_3 < τ,
H_* ≤ F_k(σ; τ) ≤ 1, k=1,2,3,
H_**≤ F_k(σ; τ) ≤ 1, k=4,5,6,
where
F_1(σ;τ) := H(σ_2)+τ-σ_3,
F_2(σ;τ) := H(τ-σ_3)+σ_3-σ_2,
F_3(σ;τ) := H(σ_3-σ_2)+σ_2,
F_4(σ;τ) := σ_3,
F_5(σ;τ) := τ-σ_2,
F_6(σ;τ) := τ+σ_2-σ_3,
and
H_** = H_2^{-1}(1) = (e^b - e^{2bε̂}) / ((e^b-1) e^{2bε̂}).
Then we consider in state space the set S(Ω_ε,τ) where S is given by
S : (σ_1, σ_3)
↦ (θ_1, θ_2, θ_3; {σ_1}, {σ_2}, {σ_3})
= (σ_1, σ_1, 0; {σ_1}, {σ_1}, {σ_3}).
The region Ω_ε,τ and the projection 𝒜_ε,τ of S(Ω_ε,τ) on the (θ_1,θ_2)-plane are shown in <ref>.
Using similar arguments as in the analysis of IR4 we find that the point
σ_* = ( σ_1, σ_3)
= (τ/3, 2τ/3),
gives a periodic orbit with Poincaré period T_P=1. Its phase evolution is shown in <ref>. Moreover, we find that this occurs for
H_* ≤ H( τ/3) + τ/3≤ 1,
thus giving the subset of the parameter space (ε, τ) for which IR3 exists, see <ref>.
§.§ The isochronous region IR5
For the isochronous region IR5, corresponding to periodic orbits with Poincaré period T_P=5 we consider the subset Ω_ε,τ of the (σ_1, σ_2^(1), σ_2^(2), σ_3)-space defined by the relations
0 < σ_2^(1) < σ_1 < σ_3 <σ_2^(2) < τ,
H_* ≤ F_k(σ; τ) ≤ 1, k=1,2,3,4,5,
where
F_1(σ;τ) := H(σ_2^(1))+τ-σ_3,
F_2(σ;τ) := H(τ-σ_2^(2))+σ_2^(2)-σ_1,
F_3(σ;τ) := H(σ_2^(2)-σ_3)+σ_3-σ_2^(1),
F_4(σ;τ) := H(σ_3-σ_1)+σ_1,
F_5(σ;τ) := H(σ_1-σ_2^(1))+σ_2^(1)+τ-σ_2^(2).
Then the set of initial states comprising IR5 is S(Ω_ε,τ) where S is given by
S : (σ_1, σ_2^(1),σ_2^(2), σ_3)
↦( θ_1, θ_2, θ_3 ; {σ_1}, {σ_2^(1),σ_2^(2)}, {σ_3})
= ( H(σ_1-σ_2^(1))+σ_2^(1), H(σ_2^(1)), 0 ;
{σ_1}, {σ_2^(1),σ_2^(2)}, {σ_3}).
The projection of S(Ω_ε,τ) on the (θ_1,θ_2)-plane is shown in <ref>.
Using similar arguments as in the analysis of IR4 we find that the point
σ_*
= ( σ_1, σ_2^(1), σ_2^(2), σ_3 )
= ( 2τ/5, τ/5, 4τ/5, 3τ/5)
gives a periodic orbit with Poincaré period T_P=1. Its phase evolution is shown in <ref>. Moreover, we find that this occurs for
H_* ≤ H( τ/5) + 2τ/5≤ 1,
thus giving the subset of the parameter space (ε, τ) for which IR5 exists, see <ref>.
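As a consistency check (our addition), one can verify symbolically that all five functions F_k collapse to H(τ/5) + 2τ/5 at σ_*, using only the fact that H(θ) = H(θ,ε̂) is affine in θ; the sketch below uses sympy with symbolic placeholders a, c for the affine coefficients.

```python
import sympy as sp

tau = sp.symbols('tau', positive=True)
a, c = sp.symbols('a c', positive=True)      # H(theta) = a*theta + c is affine
H = lambda x: a * x + c

s1, s21, s22, s3 = 2*tau/5, tau/5, 4*tau/5, 3*tau/5   # sigma_* for IR5
F = [H(s21) + tau - s3,
     H(tau - s22) + s22 - s1,
     H(s22 - s3) + s3 - s21,
     H(s3 - s1) + s1,
     H(s1 - s21) + s21 + tau - s22]
print([sp.simplify(f - (H(tau/5) + 2*tau/5)) for f in F])  # -> [0, 0, 0, 0, 0]
```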
§ CONCLUSIONS
We have reported the existence of non-trivial isochronous dynamics in pulse coupled oscillator networks with delay. In particular, we have presented numerical evidence for the existence of such isochronous regions and we have proved their existence for a subset of the parameter space (ε,τ) with non-empty interior. Moreover, we have described in detail the dynamics and stability of orbits in one of the isochronous regions that we call IR4.
The appearance of isochronous regions in pulse coupled oscillator networks with delays demonstrates the capacity of such systems for generating non-trivial dynamics that one would not, in general, expect for smooth dynamical systems. Of particular interest here is that isochronous dynamics coexists with attracting isolated fixed points and periodic orbits. This may be of interest for applications using heteroclinic connections between saddle periodic orbits as representations of computational tasks <cit.>.
Several questions regarding isochronous regions in pulse coupled oscillator networks with delay remain open. The main questions going forward are whether such dynamics exist for larger numbers of oscillators and whether they persist in networks with non-identical oscillators or different network structures.
§ ACKNOWLEDGEMENTS
This work was completed when P.L., supported by the China Scholarship Council, worked as a visiting PhD student at the University of Groningen. W.L. was supported by the NSFC (Grants no. 11322111 and no. 61273014). K.E. was supported by the NSFC (Grant no. 61502132) and the XJTLU Research Development Fund (no. 12-02-08).
|
http://arxiv.org/abs/1701.07718v2 | 20170126143132 | Influence of Fock exchange in combined many-body perturbation and dynamical mean field theory | [
"Thomas Ayral",
"Silke Biermann",
"Philipp Werner",
"Lewin Boehnke"
] | cond-mat.str-el | [
"cond-mat.str-el"
] |
Institut de Physique Thorique (IPhT), CEA, CNRS, UMR 3681, 91191 Gif-sur-Yvette, France
Department of Physics and Astronomy, Rutgers University, Piscataway, NJ 08854, USA
Centre de Physique Thorique, Ecole Polytechnique, CNRS, Universit Paris Saclay, 91128 Palaiseau, France
Department of Physics, University of Fribourg, 1700 Fribourg, Switzerland
Department of Physics, University of Fribourg, 1700 Fribourg, Switzerland
In electronic systems with long-range Coulomb interaction, the nonlocal Fock exchange term has a band-widening effect. While this effect is included in combined many-body perturbation theory and dynamical mean field theory schemes, it is not taken into account in standard extended DMFT (EDMFT) calculations.
Here, we include this instantaneous term in both approaches and investigate its effect on the phase diagram and dynamically screened interaction.
We show that the largest deviations between previously presented EDMFT and GW+EDMFT results originate from the nonlocal Fock term, and that the quantitative differences are especially large in the strong-coupling limit.
Furthermore, we show that the charge-ordering phase diagram obtained in GW+EDMFT methods for moderate interaction values is very similar to the one predicted by dual boson methods that include the fermion-boson or four-point vertex.
Influence of Fock exchange in combined many-body perturbation and dynamical mean field theory
Lewin Boehnke
=================================================================================================
Dynamical Mean Field Theory<cit.> (DMFT)
self-consistently maps a correlated Hubbard lattice problem with local interactions onto an effective impurity problem
consisting of a correlated orbital hybridized with a noninteracting
fermionic bath. If the bath is integrated out, one obtains an impurity action with retarded hoppings.
Extended dynamical mean field theory<cit.>
(EDMFT)
extends the DMFT idea to systems with long-range interactions. It does so by mapping a lattice problem with long-range interactions onto an effective impurity model with
self-consistently determined fermionic and bosonic baths, or, in the action formulation, an impurity model with retarded hoppings and retarded interactions.
While EDMFT captures dynamical screening effects and charge-order instabilities, it has been found to suffer from qualitative shortcomings in finite dimensions.
For example, the charge susceptibility computed in EDMFT does not coincide with the derivative of the average charge with respect to a small applied field<cit.>, nor does it obey local charge conservation rules<cit.> essential for an adequate description of collective modes such as plasmons.
The EDMFT formalism has an even more basic deficiency: since it is based on a local approximation to the self-energy, it does not include even the first-order nonlocal interaction term, the Fock term. The combined GW+EDMFT
<cit.> scheme corrects this by supplementing the local self-energy from EDMFT with the nonlocal part of the GW diagram, where G is the interacting Green's function and W the fully screened interaction.
Indeed, the nonlocal Fock term “[Gv]^nonloc” is
included in the nonlocal “[GW]^nonloc” diagram. As described in more detail in Ref. Ayral2013 (see also the appendix of Ref. Golez2017), the GW+EDMFT method
is formally obtained by constructing an energy functional of G and W, the Almbladh<cit.> functional Ψ, and by approximating Ψ as a sum of two terms, one containing all local diagrams (corresponding to EDMFT), the other containing the simplest nonlocal correction (corresponding to the GW approximation<cit.>). This functional construction rules out double-counting of local terms in the self-energy and polarization<cit.>.
Even though it has been introduced under the name GW+DMFT
<cit.> in the literature, we will denote this full scheme by GW+EDMFT
to emphasize that it is based on the EDMFT formalism, and to distinguish it
from simplified implementations without two-particle self-consistency,
which have appeared in the literature (and which we will denote in the
following as GW+DMFT).
In a recent implementation of the GW+EDMFT method, Ref. Ayral2013, and related papers<cit.>, the nonlocal Fock term was omitted.<cit.>
Here, we explore and
highlight the role of this term and its interplay with the local correlations. We quantify the band-widening effect of the Fock term and study the consequences of its presence or absence on various observables, and on the charge-order phase boundary.
Our self-consistent implementation goes beyond previous studies of the effect of the Fock exchange
in realistic calculations, where it was studied systematically
within GW <cit.> and GW+DMFT,<cit.>
albeit not in a self-consistent way.
The manuscript is organized as follows:
In section <ref>, we recap the GW+EDMFT equations with special emphasis on the Fock term and make general statements about the expected impact.
In section <ref>, we show explicit results for the effective renormalization of the band structure by the instantaneous Fock contribution within GW+EDMFT, followed by systematic comparisons
with the results of simplified
formalisms in section <ref>.
Section <ref> discusses the role of the Fock term in the Mott-insulating phase, where it stays relevant up to very large values of the on-site interaction.
Finally, in section <ref>, we compare our results with
results obtained within
the recent dual boson method.
§ FORMALISM
We aim at solving the extended Hubbard model on the two-dimensional square lattice
by constructing an effective impurity problem that gives the local part of the self-energy Σ and polarization P, and a diagrammatic expansion in their nonlocal components.
The model is defined by
the Hamiltonian
ℋ=-∑_ijt_ijc^†_ic_j+1/2∑_ij v_ijn_in_j-μ∑_in_i.
Here,
t_ij are the real-space hopping matrix elements, c_i^(†) the electronic annihilator (creator) on site i, v_ij the Coulomb interaction and n_i=c^†_ic_i.
We will
restrict ourselves to models with hoppings and interactions between nearest neighbors and next nearest neighbors only,
t_ij= tδ_<ij>+t'δ_<<ij>>,
v_ij= Uδ_ij+Vδ_<ij>+V'δ_<<ij>>,
where δ_ij is the usual Kronecker delta, δ_<ij>(resp. δ_<<ij>>) is 1 for i and j nearest neighbors (resp. next-nearest neighbors, along the diagonal of the square lattice) and 0 otherwise.
This results in the Fourier transforms
ε_k⃗= 2t(cos(k_x)+cos(k_y))
+2t'(cos(k_x+k_y)+cos(k_x-k_y))
and
v_q⃗=U +2V(cos(q_x)+cos(q_y))
+2V'(cos(q_x+q_y)+cos(q_x-q_y)).
The full expression for the self-energy in the GW+EDMFT approximation is
Σ(k⃗,iω_n)=Σ_imp(iω_n)+Σ_GW_c^nonloc(k⃗,iω_n)+Σ_F^nonloc(k⃗).
The last two terms correspond to the nonlocal part of the GW self-energy. They can be expressed as a function of imaginary time τ
and momentum k⃗ as follows:
Σ_GW_c^nonloc(k⃗,τ) = -∑_q⃗G_q⃗+⃗k⃗(τ)W^c_q⃗(τ)
+[∑_q⃗G_q⃗+⃗k⃗(τ)W_q⃗^c(τ)]_loc,
Σ_F^nonloc(k⃗) = -∑_q⃗G_q⃗+⃗k⃗()v_q⃗
+[∑_q⃗G_q⃗+⃗k⃗()v_q⃗]_loc.
Fourier transformations between τ and fermionic [bosonic] Matsubara frequencies iω_n=i(2n+1)π/β [iν_m=i2mπ/β] are assumed where needed (β denotes the inverse temperature).
The “loc” suffix denotes a sum over the first Brillouin zone.
G_k⃗(iω_n)=(iω_n+μ-ε_k⃗-Σ(k⃗,iω_n))^-1
is the interacting lattice Green's function, and W^c is defined as
W_q⃗^c(iν_m) ≡ v_q⃗/(1-v_q⃗P_q⃗(iν_m)) - v_q⃗,
with P the polarization function.
All results are given in units of D=4|t| (which is the half bandwidth when t'=0), and the momentum discretization is N_k=32×32
points in the first Brillouin zone,
unless otherwise stated.
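For concreteness, a minimal Python sketch of this setup is given below (our illustration, not the authors' code); it builds ε_k⃗ and v_q⃗ on the 32×32 grid and evaluates W^c of Eq. (<ref>) for a constant trial polarization. The sign convention t = -0.25 (so that D = 4|t| = 1) and the trial value P = -0.2 are assumptions made purely for illustration.

```python
import numpy as np

t, tp = -0.25, 0.0        # assumption: t < 0 with D = 4|t| = 1 as the energy unit
U, V, Vp = 2.0, 0.4, 0.0
Nk = 32

k = 2.0 * np.pi * np.arange(Nk) / Nk
kx, ky = np.meshgrid(k, k, indexing='ij')
eps_k = 2*t*(np.cos(kx) + np.cos(ky)) + 2*tp*(np.cos(kx + ky) + np.cos(kx - ky))
v_q   = U + 2*V*(np.cos(kx) + np.cos(ky)) + 2*Vp*(np.cos(kx + ky) + np.cos(kx - ky))

def W_c(v_q, P_q):
    """Correlation part of the screened interaction: W^c = v/(1 - v P) - v."""
    return v_q / (1.0 - v_q * P_q) - v_q

# constant trial polarization P = -0.2 (placeholder; the real P_q(i nu_m)
# comes out of the EDMFT/GW+EDMFT self-consistency)
print(W_c(v_q, -0.2).min(), W_c(v_q, -0.2).max())
```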
We use the original formulation of the GW+EDMFT scheme,<cit.> corresponding
– within a functional formulation – to a Hubbard-Stratonovich decoupling
of the full interaction term, dubbed “HS-UV decoupling” in
Ref. Ayral2013.
As argued there, this choice has the advantage that it treats local and
nonlocal interactions on the same footing.
The nonlocal Fock term of Eq. (<ref>), which is real-valued and instantaneous, renormalizes the bandwidth. It can become quite large and momentum-dependent.
This is illustrated in Fig. <ref> for the parameters U=2, V=0.4 (and t'=0, V'=0, which is assumed in the following if not explicitly stated otherwise).
The figure also indicates that for the case of nearest-neighbor hopping and interaction, the nonlocal Fock term can be exactly absorbed into the bare dispersion Eq. (<ref>) by defining a U- and V-dependent hopping
t̃(t',U,V,V')=t+δ t(t',U,V,V').
This can be understood by looking at the real-space representation of the Fock term, Eq. (<ref>),
Σ_F^nonloc_ij
= -G_ij(τ=0^+)v_ij + G_ii(τ=0^+)v_iiδ_ij
= -G_<ij>(τ=0^+)Vδ_<ij>
-G_<<ij>>(τ=0^+)V'δ_<<ij>>,
where the notation <ij> [<<ij>>] denotes
a restriction to nearest-neighbor [next-nearest-neighbor] interactions.
Thus, for the case of Fig. <ref>, where V'=0, the Fock term enters Eq. (<ref>) as a renormalization of the nearest-neighbor hopping by
δ t(t',U,V,V')=-G_<ij>(τ=0^+)V .
The Green's function term, which is closely related to the occupation number, has an implicit dependence on all parameters of the lattice problem.
In the presence of a next-nearest-neighbor interaction, t' also gets renormalized according to
δ t'(t',U,V,V')=-G_<<ij>>(τ=0^+)V' .
Note that such a term breaks the particle-hole symmetry of the lattice. In the particle-hole symmetric case with t'=0 and half filling, the next-nearest-neighbor occupation term vanishes: G_<<ij>>(τ=0^+)=0.
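The size of this renormalization is easy to estimate in the simplest limit one can write down; the sketch below (our addition) evaluates δ t for a non-interacting, half-filled square lattice at zero temperature, where n_k⃗ = θ(-ε_k⃗). Note that δ t then carries the same sign as t, so that |t+δ t| > |t|, i.e. the band widens; the values of t, V and the grid size are illustrative.

```python
import numpy as np

t, V, Nk = -0.25, 0.4, 256          # D = 4|t| = 1; V = 0.4 is an illustrative value
k = 2.0 * np.pi * np.arange(Nk) / Nk
kx, ky = np.meshgrid(k, k, indexing='ij')
eps_k = 2.0 * t * (np.cos(kx) + np.cos(ky))

# zero-temperature, non-interacting, half filled: n_k = theta(-eps_k)
n_k = (eps_k < 0).astype(float)
n_k[np.isclose(eps_k, 0.0)] = 0.5   # states on the Fermi surface half occupied

# G_<ij>(tau=0^+) for a nearest-neighbor pair (delta = x-hat); the constant
# "-1" of G_k(0^+) = n_k - 1 drops out of the off-diagonal Fourier transform
G_nn = np.mean((n_k - 1.0) * np.exp(1j * kx)).real
print("delta t =", -V * G_nn)        # same sign as t -> |t + delta t| > |t|
```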
§ EFFECTIVE BANDSTRUCTURE
In the simplest case of nearest-neighbor interaction V, the nearest-neighbor hopping renormalization δ t determines the band widening (see Eq. (<ref>)). Since the half bandwidth is D=4|t|, the widening will be δ D=4δ t.
Figure <ref> illustrates this effect throughout the homogeneous part of the phase diagram for the particle-hole symmetric (t'=0) half filled case.
The most obvious feature is the increase with V, that is expected from Eq. (<ref>) and the decrease close to the Mott-insulating phase.
δ t nonetheless remains significant even at very high values of U, a property that will be further investigated in Section <ref>.
Away from half filling, where the Mott-insulating phase does not exist, the corresponding suppression of δ t disappears, but otherwise the dependence on V and U is very similar to the half-filled case, see bottom panel of Fig. <ref>.
To study a model with broken particle-hole symmetry, we introduce a next-nearest-neighbor hopping t'=t/√(2) and fix the filling at <n>=0.8 as well as the nearest-neighbor interaction V. We then calculate δ t as a function of the next-nearest-neighbor interaction V'. As shown in Fig. <ref>,
the main effect on the hopping renormalization comes from the (essentially linear) dependence on V.
The qualitative effect of V' is to slightly reduce the renormalization.
In order to make the connection to realistic electronic structure
calculations, we note as a side remark that there
the situation is slightly more subtle. The band-widening effect is indeed relative to the reference point. Let us consider three reference Hamiltonians,
(i) the Kohn-Sham Hamiltonian H_KS
of Density Functional Theory (DFT),
(ii) the Hartree Hamiltonian H_0 = H_KS - V[v_xc(r⃗)], where v_xc(r⃗) denotes the Kohn-Sham exchange-correlation potential, which is local in the electronic structure sense
(that is, “local” denotes a function depending only on one space variable f(r⃗), while “nonlocal” denotes a function depending on two variables f(r⃗,r⃗'⃗)),
(iii) the nonlocal exchange Hamiltonian H_xc^F = H_0 + V[v_xc^F(r⃗,r⃗'⃗)] (where v_xc^F(r⃗,r⃗'⃗) denotes an exchange-correlation potential including “nonlocal” Fock exchange).
Then the hierarchy of the bandwidths in a metallic system is
H_0 > H_xc^F > H_KS.
Thus, H_xc^F indeed widens the band with respect to density functional theory calculations. The question of the relative bandwidth changes
thus implies a question on the starting band structure.
We refer the interested reader to Ref. Hirayama2015
for a systematic construction of explicit low-energy many-body
Hamiltonians.
Here, we only comment on the specific point of the Hartree and Fock
terms, in order to put our work on the extended Hubbard Hamiltonian
into perspective with respect to realistic electronic structure calculations.
Indeed, as argued in Ref. Hirayama2015, in realistic
electronic structure calculations one needs to avoid double counting
of interactions at the one- and two-particle level.
Let us consider first the case of the Hartree terms: standard electronic
structure techniques (e.g. a DFT calculation) produce a band structure
including the Hartree contribution. This one-body potential contribution
is then already part of the effective hopping parameter determined
from this band structure.
Ref. Hirayama2015 explains how to avoid double counting
by including – at the level of the many-body calculation – only terms
beyond Hartree. Here, we do not need to address this point in detail, since
a Hartree term included in the model calculation would cancel out
with the corresponding shift of the chemical potential, since the
particle number is eventually determining the energetic level of the
single-orbital included in the present model.
Let us now move to the analogous question for the Fock term: one may examine the relevance of excluding it at the level of the many-body calculation, and keeping it at the level of the electronic structure calculation instead.
The answer is based on several elements: The first point to note
is that standard DFT calculations do treat exchange in a local
approximation (where “local” here means again “local in the electronic
structure sense”, see above), which
relies on an error cancellation effect with part of the correlation
contribution (see e.g. Ref. VanRoekeghem2014a) and is not relevant here.
The next question is therefore: why not start from a
Hartree-Fock calculation in the continuum in the full energy range
of the Coulomb Hamiltonian? Such a treatment would neglect the
crucial screening of the bare interaction by high-energy degrees of
freedom (typically, matrix elements of the bare Coulomb interaction
in the relevant Wannier functions are of the order of several tens
of electron volts, while the effective Hubbard interactions are usually a few electron volts).
Therefore, what is relevant here is indeed the exchange term calculated
using the effective bare interaction of the low-energy Hamiltonian.
For realistic electronic structure calculations, this interaction
should correspond to a partially screened interaction, where screening
by high-energy degrees of freedom is taken into account (as done e.g.
in the screened exchange + DMFT scheme
<cit.>).
We refer the interested reader to Refs. Hirayama2015, VanRoekeghem2014, VanRoekeghem2014a for details.
§ SIMPLIFIED VARIANTS OF GW+EDMFT
In the following, we study the effect of this V-dependent bandwidth renormalization on local observables as well as the critical value of the nearest-neighbor repulsion for the transition into the charge-ordered phase.
We will call “GW_c+EDMFT” the formula implemented in Ref. Ayral2013 (which contains only the GW_c
term, see Ref. Ayral2016a), and “GW+EDMFT” the formula with the self-energy expression
(<ref>). For comparison, we also show results for “Gv+EDMFT”,
a scheme where Σ is the sum of the impurity self-energy
and of the nonlocal Fock term only (the first and third terms of Eq. (<ref>)). In all three schemes, the polarization is
the sum of the impurity polarization with the nonlocal part of the
interacting bubble, as described in Ref. Ayral2013,
steps (5) (a) and (5) (b) of section V.
In Fig. <ref>, we plot the self-energy and polarization
obtained from the three schemes at different momenta, and we compare
the results to the local EDMFT self-energy and
polarization. All these results are for half-filling. One can observe the following
trends:
(i) While the imaginary part of the self-energy in GW_c+EDMFT
is larger than in EDMFT, the opposite is true for GW+EDMFT, i.e.,
the GW+EDMFT self-energy is less correlated than the EDMFT
self-energy.
(ii) The GW+EDMFT result is more strongly correlated than Gv+EDMFT.
(iii) At small Matsubara frequencies, the polarization is overall larger
in the GW/Gv+EDMFT method than in the GW_c+EDMFT.
The trend in the self-energy (i.e. less correlated in GW+EDMFT than
EDMFT) can be understood easily from the broadening effect of the
nonlocal Fock term on the band: when the (effective) bandwidth gets
larger, so does the polarization P, and hence screening effects
are more important, interactions are more screened and as a result,
the imaginary part of the Matsubara self-energy is smaller in absolute
magnitude.
Less trivial is the comparison between GW+EDMFT and Gv+EDMFT. Here, the band-widening effect
is included in both calculations, and it turns out that the additional nonlocal GW contributions to the self-energy lead to stronger correlations.
This is consistent with the conclusions of Ref. Ayral2013,
which compared GW_c+EDMFT to EDMFT.
In the top panels of Fig. <ref>, we replot panels (a) and (b) of Fig. 15 of Ref. Ayral2013, and we show, in the bottom panels, the same observables for U=3, V=1.
One sees that the deviation of the GW+EDMFT and Gv+EDMFT results from
the EDMFT result is very small for U=2, V=0.4, but sizable for larger interaction values
(U=3, V=1).
In Fig. <ref>, we
plot the corresponding spectral functions (the EDMFT and GW_c+EDMFT results are identical to Fig. 2(b) of Ref. Ayral2012).
As a logical consequence of the above observations, the GW+EDMFT and
Gv+EDMFT spectra are close to each other and slightly less correlated
than the EDMFT spectrum, in the sense that the integrated weight of
the quasiparticle peak is larger in those methods.
We next consider the phase diagram in the U-V plane.
In Fig. <ref>, we plot the dependence
of the inverse charge susceptibility χ^-1_q⃗=π,π(iν_m=0) on the nearest-neighbor repulsion V, with χ defined by
χ_q⃗(iν_m)=-Π_q⃗(iν_m)/(1-Π_q⃗(iν_m)v_q⃗).
When the inverse susceptibility vanishes, the
charge susceptibility diverges, signaling a transition to a charge-ordered
phase with a checkerboard pattern. The corresponding phase diagram is shown in Fig. <ref>, where we plot the results from Fig. 5 of Ref. Ayral2013 together with the phase boundaries for the GW+EDMFT and Gv+EDMFT methods.
At low and intermediate U, GW+EDMFT and GW_c+EDMFT yield quantitatively similar critical nonlocal interactions V_c for the transition to the charge-ordered phase over a wide range of the local interactions. More importantly, they capture the expected GW behavior at low U that EDMFT misses due to its local self-energy.
In the strong-coupling limit, the value of V_c is substantially reduced (middle and bottom panels) when going from EDMFT to GW+EDMFT or even only
Gv+EDMFT (GW_c+EDMFT is very close to EDMFT).
§ INSIDE THE MOTT PHASE
In the large-interaction regime of the phase diagram (Fig. <ref>), the nonlocal Fock term has a significant effect. The schemes which lack this instantaneous contribution, EDMFT and GW_c+EDMFT, yield a larger and steeper phase boundary than the schemes that take the Fock term into account (Gv+EDMFT and GW+EDMFT).
The exact phase boundary in the Mott phase is difficult to predict a priori.
It can be computed in the classical (t_ij→ 0) and zero-temperature limit of the extended Hubbard model by exact Monte-Carlo simulations, as e.g. in Ref. Pawlowski2006, and is given by the analytical expression
V_c^int=U/4,
where 4 corresponds to the number of nearest neighbors.
This line is plotted as a dashed grey line in Fig. <ref>. This result can be obtained by a simple comparison between the interaction energies of the Mott-insulating phase and of the checkerboard phase. In the full-fledged model, finite temperature and quantum tunneling have to be taken into account. In the low-temperature regime (T=0.01) of Fig. <ref>, the deviation between the classical solution and the solution to the full quantum problem comes mostly from the quantum tunneling kinetic term.
To estimate the influence of the quantum tunneling term, one may observe that the effect of temperature in the classical problem is to enhance the value of V_c, i.e. to disfavor the charge-ordered phase over the Mott phase.<cit.> Since the quantum tunneling (hopping) has a physical effect similar to temperature in classical systems,<cit.>
namely to delocalize the particles,
one may speculate that it will also lead to a higher V_c in the quantum case.
In fact, this feature is present by construction in the EDMFT and GW+EDMFT schemes. The denominator in the susceptibility (Eq. (<ref>)) imposes that the charge-ordering transition should occur for negative values of v_q⃗, since the polarization Π_q⃗ is always negative (for the parameters studied here). For the square lattice, this implies V_c>U/4, which is the classical energy estimate (Eq. (<ref>)). Therefore, by construction, in GW+EDMFT schemes, introducing hopping on the lattice will always favor the disordered phase.
This is indeed what is seen in all variants. We also observe that the method including most diagrams, GW+EDMFT, has a phase boundary which is much closer to the classical limit than the comparatively cruder EDMFT approximation.
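This lower bound can be made explicit with a few lines of code (our addition): given a negative static polarization Π at q⃗ = (π,π), the divergence condition 1 - Π(U-4V) = 0 following from Eq. (<ref>) yields V_c = (U - 1/Π)/4 > U/4. The trial values of Π below are placeholders, since the actual polarization is produced by the EDMFT/GW+EDMFT self-consistency.

```python
def Vc_from_polarization(U, Pi):
    """Critical V at which chi_{q=(pi,pi)}(0) diverges: 1 - Pi*(U - 4V) = 0."""
    return (U - 1.0 / Pi) / 4.0

# trial (negative) static polarizations at q = (pi,pi); placeholders only
for Pi in (-0.5, -1.0, -2.0):
    print(Pi, Vc_from_polarization(2.0, Pi))   # always larger than U/4 = 0.5
```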
In order to gain a better qualitative understanding of the large-U behavior, we have performed an analytical self-consistent estimation of the value of the band-widening effect δ t (defined in Eq. (<ref>)) coming from the Fock term. Approximating the self-energy as the sum of the atomic limit (in the spirit of the Hubbard-I approximation<cit.>) and of the Fock self-energy, as described in more detail in Appendix <ref>, we obtain
δ t=tV/(2U-V).
Thus δ t may become arbitrarily
large if the nonlocal interaction coefficient exceeds twice the value of the local
one. However, even disregarding the fact that the generic case is
certainly the opposite one (local interactions in general exceed nonlocal
ones), one should be aware of the fact that in that case the Hubbard
I approximation, which is justified in the strong coupling limit,
would no longer be appropriate.
By inspecting the phase diagram in Fig. <ref>, we can parametrize the phase boundary in the large-U limit as a constant slope, i.e. V_c=α U+β. [Within the U-range that we can simulate (we performed measurements up to U=8.0), we can estimate α=1.25 and β=-2.5.] With this parametrization,
we obtain
δ t_c=t(α U+β)/((2-α)U-β).
Hence, the bandwidth
renormalization (proportional to δ t)
stays relevant in the vicinity of the charge-ordering transition
even at large U.
§ BEYOND GW+EDMFT: COMPARISON TO DUAL BOSONS AND TRILEX
As mentioned in the introduction, the EDMFT formalism suffers from certain conceptual problems, such as the lack of thermodynamic consistency and an unreliable description of collective modes. These shortcomings are alleviated in the recently developed dual boson (DB) method,<cit.> which, in its full-fledged implementation,<cit.> computes the susceptibility<cit.> after resumming an infinite number of ladder diagrams built from local impurity four-leg vertices.
These four-leg vertices, which are also central to the Dynamical Vertex Approximation<cit.> (which was recently shown to be a simplified version of QUADRILEX, a method consisting in an atomic approximation of the four-particle irreducible functional<cit.>),
can nonetheless only be obtained at a considerable computational expense and require a proper parametrization and treatment of their asymptotic behavior.<cit.> Consequently, it is not
possible to use them routinely in
multi-orbital calculations (see however <cit.>)
and lightweight improvements on EDMFT, especially with realistic applications in mind, are desirable. Recent attempts to forgo the computation of four-leg vertices include the TRILEX method<cit.> and simplified dual boson schemes such as DB+GW or DB+GWγ.<cit.> Whether they retain the abovementioned conserving properties, however, is yet unclear.
In fact, the results obtained in the simplified “dual” approaches that include at least the electron-boson vertex γ are similar to those obtained by the GW+EDMFT method, which is conceptually and practically simpler than dual methods, and has hence already been applied to realistic materials in a number of works.<cit.>
In Fig. <ref>, we compare the phase diagram for model (<ref>) obtained from various simplified variants<cit.> of the dual boson scheme, and compare it to GW_c+EDMFT, GW+EDMFT, and the GW approximation. We restrict this comparison to values of U below the Mott transition, for lack of available dual-boson results in the Mott-insulating phase.
Let us start with the small-U limit.
For U<1.5, the phase boundaries obtained in all the GW+EDMFT as well as GW alone are almost indistinguishable.
The GW transition is a straight line for all shown values, the variants with an impurity polarization have varying degrees of up-curvature, with the GW+EDMFT line in between the GW_c+EDMFT and the Gv+EDMFT line.
For Gv+EDMFT it is notable that the U→ 0 limit does not reproduce the GW value.
The dual boson lines start with a similar upwards trend, with the exception of the DB-GW line, which follows essentially the HS-V variant of
GW+EDMFT
(more properly denoted as GD + second order perturbation theory (SOPT) + EDMFT)
as shown in Ref. Stepanov2016,
and discussed in detail at the end of this section.
Yet, the dual-boson variants start out with a lower slope, indicating stronger ordering tendencies already for the lowest values of U, while all GW+EDMFT variants follow the slope of the `weak-coupling' GW boundary in the vicinity of U=0.[Note that the dual-boson variants and HS-V calculations of Ref. Stepanov2016 have been executed as a single lattice self-consistency iteration on top of the converged EDMFT solution.]
Interestingly, a very recent cluster-EDMFT study <cit.>
reports a similarly reduced slope in the weak-U regime.
We also note that only the full DB critical line is above the U/4 line (dashed grey line, discussed in section <ref>), while the (non-self-consistent) DB-GW and DB-GWγ results are not (the latter only slightly so). Further comparisons of DB with GW+EDMFT (in the HS-V decoupling) can be found in Ref. Stepanov2016, see Fig. 8.
For U>2, the phase boundary for GW+EDMFT is below the dual-boson phase boundary, while GW is even a bit lower.
For GW_c+EDMFT, the U=2.5 point already falls in the Mott-insulating phase and is not shown in the comparison. The too low Mott-transition line for GW_c+EDMFT is not surprising, since it lacks the band-widening of the nonlocal Fock term.
Two possible decouplings were previously discussed in the literature, “HS-UV”
(giving rise to GW+EDMFT) and “HS-V” (
resulting in a combined “GD+SOPT+EDMFT” scheme,
where D is the screened non-local interaction,
see Ref. Ayral2013).
We emphasize that, contrary to the HS-UV variant and e.g. the random-phase approximation (RPA), the HS-V variants do not resum, in the nonlocal part of the self-energy, the local (U) and nonlocal (V) parts of the interaction to the same order. (They are resummed, respectively, to second and infinite order.) This arbitrary inconsistency raises questions concerning the soundness of the HS-V scheme, as already pointed out in Ref. Ayral2013.
Recent works have indeed confirmed the deficiency of “HS-V”.
For instance, all the
“EDMFT+GW”-related variants shown in Fig. 7 of Ref. Stepanov2016, which were obtained in a HS-V flavor, yield phase boundaries which are much lower than either DB or the GW+EDMFT results in Fig. <ref> (which correspond to the HS-UV variant of the decoupling of the interaction). The only additional outlier is the simplest DB type of approximation, the DB-GW variant, for which Ref. Stepanov2016 showed that it is formally similar to a HS-V calculation.
More recently, Ref. Terletska2016 used cluster dynamical mean field theory to study the extended Hubbard model, which allows a control on errors by increasing the size of the cluster (but neglecting inter-cluster interactions, which in the case of EDMFT and GW+EDMFT are treated via the retarded impurity interactions). These cluster results were shown to be in poor agreement with “HS-V”, but very close to the full-fledged DB method. In the lower panel of Fig. <ref>, we show that the GW+EDMFT (HS-UV) method yields a critical V_c in agreement (with a 20% accuracy or better) with the cluster results, a remarkable result in view of the reduced numerical cost of this method compared to cluster DMFT. Comparisons for the larger U values in Fig. <ref> would be of great interest.
We end this section by examining two further questions, namely the influence of spin fluctuations and the impact of local vertex corrections. One can expect that neither are important for the charge-ordering instability under study, since (i) this is an instability in the charge channel, not the spin channel, and (ii) as V increases towards charge ordering, the effective static interaction 𝒰(ω=0) decreases to zero,<cit.> making the system behave more and more like a weakly-correlated metal, where vertex corrections are expected to be small.
In all previous implementations of the GW+EDMFT method, the interaction was formally decoupled in the charge-channel only, neglecting the possible influence of spin fluctuations. Furthermore, in GW+EDMFT, the influence of the local vertex on the nonlocal self-energy is included only through the nonlocal Green's function.
In the TRILEX approximation, both charge and spin fluctuations are taken into account, as well as local vertex corrections to the nonlocal self-energy.
We can thus answer both questions of interest by implementing the TRILEX method for the extended Hubbard model. We refer the reader to Refs. Ayral2015,Ayral2015c for implementation details. The only difference with respect to the application to the Hubbard model is that Eq. (41) of Ref. Ayral2015c must be modified to also describe nonlocal interactions, which means that Eq. (61b) of that publication becomes
W^η(𝐪,iΩ) = v^η(𝐪)/(1 - v^η(𝐪)P^η(𝐪, iΩ)) ,
where η denotes the charge (ch) or spin (sp) channel and
v^ch(𝐪) = U^ch + 2 V(cos(q_x)+cos(q_y)),
v^sp(𝐪) = U^sp,
and the bare on-site interactions in the charge and spin channels are parametrized, in the so-called Heisenberg decoupling,<cit.> by a parameter α:
U^ch=(3α-1)U, U^sp=(α-2/3)U.
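For orientation, the channel splitting can be tabulated directly; the snippet below (our illustration) implements the two equations above. Note that α = 2/3 gives U^sp = 0, i.e. the charge-only (Ising-like) limit.

```python
def bare_interactions(U, alpha):
    """Heisenberg-decoupling split of the on-site U: (U^ch, U^sp)."""
    return (3.0 * alpha - 1.0) * U, (alpha - 2.0 / 3.0) * U

def W_eta(v_eta_q, P_eta_q):
    """Screened interaction in channel eta = ch, sp."""
    return v_eta_q / (1.0 - v_eta_q * P_eta_q)

for alpha in (0.5, 0.6, 2.0 / 3.0):     # alpha = 2/3 is the charge-only limit
    print(alpha, bare_interactions(3.0, alpha))
```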
In Fig. <ref>, we show TRILEX results for two characteristic points of the phase diagram, namely U=1 (characteristic of the metallic phase), and U=3 (characteristic of the Mott phase).
First, we observe that the critical V_c (computed by looking for a vanishing inverse static susceptibility, shown in the left panels) is quite close to that of GW+EDMFT, justifying our a priori intuition. This agreement is quite remarkable, since GW+EDMFT has only charge fluctuations, while TRILEX has both charge and spin fluctuations.
Second, V_c only mildly depends on the ratio of the charge to spin fluctuations, as can be seen in the right panels, where quite large variations of U^ch (and correspondingly U^sp) lead to comparatively small variations in V_c.
Taking inspiration from the comparison of the cluster extension of TRILEX with exact benchmark results for the two-dimensional Hubbard model<cit.> (there, one observes that whenever the TRILEX solution is close to the exact solution, the dependence on the decoupling is weak), this stability (compared to charge-only GW+EDMFT, and with respect to α) can be used as a proxy for the quantitative robustness of the present GW+EDMFT results.
§ CONCLUSION
In conclusion, we have shown that the nonlocal Fock term
has a significant influence on the description of the charge fluctuations
in the GW+EDMFT method, especially in the strong-coupling limit. By effectively enhancing the bandwidth, it
lowers the critical value of the nonlocal interaction for the charge-ordering transition.
We have also shown that the differences between the EDMFT and GW+EDMFT phase
diagrams are to a large extent a consequence of the nonlocal Fock
term, which is not included in EDMFT.
Another interesting result is that the simple extension from EDMFT to a Gv+EDMFT
formalism yields results similar to the full-fledged GW+EDMFT method. This suggests the possibility of studying complex multiband materials, where a full GW+EDMFT computation would be too costly, using techniques in the spirit of the recent Screened exchange + dynamical DMFT (SEx+DMFT)
method.<cit.>
In realistic materials, the simple single-band description is not
sufficient, and substantial screening effects resulting from the presence
of higher energy degrees of freedom must be taken into
account.<cit.>
Performing a self-consistent calculation of the screening by these higher-energy states is however computationally expensive, even within a multi-tier approach,<cit.> where the updates are restricted to an intermediate energy window. A scheme which combines a properly renormalized bandstructure with a self-consistent treatment of screening effects within the low-energy subspace may provide a good basis for
tractable, but still accurate first principles
electronic structure methods for correlated electron materials.
We acknowledge useful discussions with Y. Nomura, A. I. Lichtenstein, E. Stepanov
and E. van Loon. We thank A. Huber and A. I. Lichtenstein for providing us the dual-boson data for Fig. <ref>, as well as H. Terletska for providing us the DCA data for Fig. <ref>.
This research was supported by SNSF through NCCR Marvel,
IDRIS/GENCI Orsay (project number t2016091393),
ECOS-Sud MinCYT under project number A13E04, and the
European Research Council (project number 617196). Part of the implementation is based on the TRIQS toolbox<cit.> and on
the ALPS libraries.<cit.>
§ ESTIMATION OF THE BANDWIDTH WIDENING WITH COMBINED HUBBARD-I AND FOCK
In this appendix we discuss a simple Hubbard I (plus Fock) type treatment of the U-V model.
These arguments are not meant to be exact or comprehensive; most notably, we ignore the effect of the nearest-neighbor interaction on the local self-energy as well as any nonlocal screening. They nevertheless provide useful insight into the nontrivial nature of the large-U and large-V limit.
We start with an approximation to the self-energy which follows the
spirit of the Hubbard-I approximation by taking the atomic U^2/4z self-energy
locally, but goes beyond it by taking also the instantaneous nonlocal Fock contribution into account:
Σ(k,z) = U^2/(4z) + 2δ t(cos k_x + cos k_y)
The corresponding Green's function reads:
G(k,z) = 1/[ z - ε̃_k - U^2/(4z) ]
with ε̃_k denoting the effective
dispersion including the Fock term:
ε̃_k≡2(t+δ t)(cos k_x+cos k_y)
Thus, we can write:
G(k,z) = z/[ (z - z_+(k))(z - z_-(k)) ]
with
z_±(k) = [ ε̃_k ± √(ε̃_k^2 + U^2) ]/2 .
As expected, in the atomic limit (ε̃_k→0),
the function has two peaks at ± U/2, corresponding to the two
Hubbard bands.
We can decompose the expression of Eq. (<ref>) as:
G(k,z) = A_+(k)/( z - z_+(k) ) + A_-(k)/( z - z_-(k) )
with
A_±(k) ≡ (1/2)[ 1 ± ε̃_k/√(ε̃_k^2 + U^2) ] .
A_+ and A_- are the weights of the upper and lower Hubbard
bands, respectively. Using Eq. (<ref>), one writes the
spectral function as:
A(k,ω)=A_+(k)πδ(ω-z_+(k))+A_-(k)πδ(ω-z_-(k))
Under the assumption that the Hubbard bands are well separated (U
large enough), only the lower Hubbard band contributes to the occupancy
(at T=0 for simplicity):
n_k = ∫_-∞^0 (dω/π) A(k,ω) ≈ A_-(k) ≈ (1/2)( 1 - ε̃_k/U )
In the second equality, we have again used the fact that U is large
enough (to neglect ε̃_k^2 in
the square root).
On the other hand, the occupancy is related to G(k,τ=0^+)
in the following way:
n_k=1+G_k(τ=0^+)
Lastly, δ t (in Eq. (<ref>)) is also known,
in the Fock approximation, as a function of G_ij(τ=0^+) (see Eq. (<ref>)):
δ t=-VG_⟨ ij⟩(τ=0^+)
Putting Eqs. (<ref>), (<ref>) and (<ref>) together and Fourier transforming to the nearest-neighbor bond, one gets
G_⟨ ij⟩(τ=0^+) = -[ t - V G_⟨ ij⟩(τ=0^+) ]/(2U) .
Solving for G_⟨ ij⟩(τ=0^+), one gets:
G_⟨ ij⟩(τ=0^+) = -t/(2U - V)
and
δ t = tV/(2U - V) .
This expression is used in Section <ref>.
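The self-consistency above is simple enough to verify numerically. The sketch below (plain Python, with arbitrary illustrative parameters) iterates G_⟨ij⟩(0^+) = -[t - V G_⟨ij⟩(0^+)]/(2U) to its fixed point and compares with the closed forms -t/(2U - V) and δt = tV/(2U - V).

```python
def nn_green(t=1.0, U=4.0, V=1.5, n_iter=200):
    """Fixed-point iteration for the nearest-neighbor Green's function
    G = -(t - V*G)/(2U), valid in the large-U regime assumed above."""
    G = 0.0
    for _ in range(n_iter):
        G = -(t - V * G) / (2.0 * U)
    return G

t, U, V = 1.0, 4.0, 1.5
G = nn_green(t, U, V)
print(G, -t / (2.0 * U - V))           # iterated vs closed-form G
print(-V * G, t * V / (2.0 * U - V))   # delta_t = -V*G vs closed form
```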
|
http://arxiv.org/abs/1701.07749v1 | 20170126155446 | Mølmer-Sørensen entangling gate for cavity QED systems | ["Hiroki Takahashi", "Pedro Nevado Serrano", "Matthias Keller"] | quant-ph | ["quant-ph"] |
ht74@sussex.ac.uk
Department of Physics and Astronomy, University of Sussex,
Brighton, BN1 9QH, United Kingdom
The Mølmer-Sørensen gate is a state-of-the-art entangling gate in ion-trap quantum computing, where the gate fidelity can exceed 99%. Here we propose an analogous implementation in the setting of cavity QED. The cavity photon mode acts as the bosonic degree of freedom in the gate, in contrast to the role played by a phonon mode in ion traps. This is made possible by utilising cavity-assisted Raman transitions interconnecting the logical qubit states embedded in a four-level energy structure, making the “anti-Jaynes-Cummings” (AJC) term available under the rotating-wave approximation. We identify practical sources of infidelity and discuss their effects on the gate performance. Our proposal not only demonstrates an alternative entangling gate scheme but also sheds new light on the relationship between ion traps and cavity QED, in the sense that many techniques developed in the former are transferable to the latter through our framework.
Mølmer-Sørensen entangling gate for cavity QED systems
Hiroki Takahashi, Pedro Nevado Serrano, Matthias Keller
18 November 2016
======================================================
§ INTRODUCTION
Currently trapped atomic ions are among the most successful platforms for quantum information processing (QIP). A number of quantum algorithms <cit.>, entanglement of up to 14 ions <cit.> and quantum simulation of spin systems <cit.> have been demonstrated, to name a few. Many of those achievements rely on the realization of high fidelity two-qubit entangling gates known as Mølmer-Sørensen (MS) <cit.> or the geometric phase gate <cit.> [The distinction between the Mølmer-Sørensen (MS) and geometric phase gate is usually made by the basis states they operate on. The MS-gate is in the form of σ_x⊗σ_x whereas the geometric phase gate is in σ_z⊗σ_z. In particular the latter does not flip the logical qubit states. Following this convention we call our gate Mølmer-Sørensen due to the derived form of the Hamiltonian (<ref>)]. These gates exploit a collective phonon mode shared by the ions to mediate a state-dependent force and induce a quantum phase conditioned on the collective atomic states. Notable characteristics of this gate scheme are 1) individual addressing of the ions is not required, 2) the time evolution is cyclic such that the electronic and phonon degrees of freedom become disentangled at certain times and 3) the scheme is insensitive to the ions' initial motional state <cit.>. Due to these favorable features, the gate can achieve a fidelity in excess of 99% <cit.>.
On the other hand, cavity QED is a paradigm where stationary quantum emitters (e.g. single atoms) interact with quantized radiation fields. It serves as a versatile platform for studies in quantum optics and quantum information. In particular, cavity QED systems are regarded as a vital building block in the development of quantum networks <cit.>. In the quantum network architecture, each network node is required to be a quantum register capable of multi-qubit quantum logic operations. Even though recent experimental progress makes it possible to couple multiple qubits to a single optical cavity <cit.>, entangling gate operations within a single cavity QED system have not been demonstrated, despite a number of theoretical proposals <cit.>.
In this article we propose an alternative implementation of a quantum entangling gate for cavity QED systems, which has a direct correspondence to the MS gate in ion traps. It is well known that trapped ions and cavity QED systems share similar physical compositions, i.e. effective spins coupled with a quantized bosonic mode <cit.>. The prime difference is that in ion traps both Jaynes-Cummings (JC) and anti-Jaynes-Cummings (AJC) Hamiltonians are naturally available by addressing the red and blue-sideband transitions of the ions respectively <cit.>, whereas in cavity QED normally only the JC Hamiltonian is available. However this restriction can be lifted by utilizing two cavity-assisted Raman transitions interconnecting qubit states embedded in a four-level energy structure. This was first discovered in conjunction with the realization of the Dicke model <cit.> and later used for studies of the Rabi model <cit.>, but it has never been discussed in terms of a quantum logic gate as per our knowledge.
Even though our proposal is directly inspired by the MS-gate, there are notable differences from the ion-trap implementation. Firstly, a cavity QED system is essentially a single-mode system, as opposed to the inherent multiple mechanical modes of a string of ions. Therefore our scheme is free from issues in ion traps such as off-resonant excitations of irrelevant mechanical modes and spectral congestion in the mode structure with an increasing number of ions. Along the same line, there is no Lamb-Dicke parameter in our scheme, which means there is no compromise between the spatial localization of the qubits and the gate speed. Finally, the optical mode of a cavity can be regarded as being at zero temperature without the need for additional cooling. Hence our scheme does not suffer from heating of the bosonic mode, which is often problematic in ion-trap QIP. However, optical cavities normally have non-negligible field decay rates.
In the following we refer to the individual stationary qubits in the cavity as “atoms” for the sake of convenience. However, in an actual implementation they do not need to be single atomic particles. Indeed they could be e.g. molecules, nitrogen-vacancy centers in diamond or artificial atoms such as semiconductor quantum dots, as long as they have the required energy structure and transitions addressable by a cavity and external laser fields (see <ref>). Therefore we expect that our proposal is relevant to a broad class of physical systems where direct interaction between the qubits is difficult to attain, but they can be indirectly coupled to each other via an optical cavity field.
The article is structured as follows: in Section <ref> we introduce the MS Hamiltonian. Section <ref> is devoted to the discussion of the cavity-induced Stark shift. The influence of the decay channels on the gate performance are presented in Section <ref>. Finally, in Section <ref> we summarize our main conclusions.
§ DERIVATION OF THE HAMILTONIAN
We consider an ensemble of N atoms coupled to a single cavity mode. All the atoms possess an identical four-level energy structure, as shown in <ref>. The ground states |g⟩ and |e⟩ form a qubit, whereas the excited states |r_1⟩ and |r_2⟩ mediate the coupling between the qubit states via cavity-assisted Raman transitions. The cavity frequency ω_c is near resonant with the transitions |g⟩↔|r_1⟩ and |e⟩↔|r_2⟩, with corresponding detunings Δ_C_1 and Δ_C_2. We assume that the coupling to the cavity mode is uniform among the atoms and of equal strength, with vacuum Rabi frequency 2g, for both |g⟩↔|r_1⟩ and |e⟩↔|r_2⟩. (The latter condition is not essential and can be relaxed.)
In addition, the atoms are externally driven by two laser fields.
These two lasers off-resonantly drive the transitions |e⟩↔|r_1⟩ and |g⟩↔|r_2⟩ with Rabi frequencies Ω_1 and Ω_2, and detunings Δ_L_1 and Δ_L_2 respectively.
In this way a pair of Raman transitions is constructed to couple the qubit states. Each of them has a cavity induced transition on one arm and a laser-induced transition on the other. The two-photon detunings δ_1 and δ_2 are given by
δ_1 = Δ_C_1-Δ_L_1,
δ_2 = Δ_C_2-Δ_L_2.
The Hamiltonian for the total atom-cavity system is composed of the bare energy H_0 and the interaction Hamiltonian H_I, i.e. H = H_0 + H_I.
H_0 is (assuming ħ = 1)
H_0 = ω_c a^† a + ∑_i=1^N ( ω_g |g⟩⟨g|_i + ω_e |e⟩⟨e|_i + ω_r_1 |r_1⟩⟨r_1|_i + ω_r_2 |r_2⟩⟨r_2|_i ).
Here, a is the annihilation operator of the cavity photon, and ω_ξ and |ξ⟩⟨ξ|_i (ξ = g, e, r_1, r_2) are the energy of the atomic level ξ and the projector onto the corresponding eigenstate of the ith atom, respectively.
On the other hand, H_I is given by a sum of the interaction Hamiltonians for the individual atoms:
H_I = ∑_i=1^N [ (Ω_1/2)( e^-iω_1 t |r_1⟩⟨e|_i + h.c. )
+ (Ω_2/2)( e^-iω_2 t |r_2⟩⟨g|_i + h.c. )
+ g( a|r_1⟩⟨g|_i + a|r_2⟩⟨e|_i + h.c. ) ].
Here, ω_1 and ω_2 are the optical frequencies of the driving lasers, and h.c. denotes the Hermitian conjugate of the preceding terms inside the same bracket.
In the interaction picture with respect to the operator
H_1 = ω_c a^† a + ∑_i=1^N ( ω_g |g⟩⟨g|_i + ω_e |e⟩⟨e|_i
+ (ω_r_1 + Δ_1)|r_1⟩⟨r_1|_i + (ω_r_2 + Δ_2)|r_2⟩⟨r_2|_i ),
with
Δ_1 = (Δ_C_1 + Δ_L_1)/2 ,
Δ_2 = (Δ_C_2 + Δ_L_2)/2 ,
the new Hamiltonian, H' = U_1 H U_1^† + i (dU_1/dt) U_1^† with U_1 = e^iH_1 t, becomes
H' = ∑_i=1^N [ -Δ_1 |r_1⟩⟨r_1|_i - Δ_2 |r_2⟩⟨r_2|_i
+ (Ω_1/2)( e^iδ_1 t/2 |r_1⟩⟨e|_i + h.c. )
+ (Ω_2/2)( e^iδ_2 t/2 |r_2⟩⟨g|_i + h.c. )
+ g( e^-iδ_1 t/2 a|r_1⟩⟨g|_i + e^-iδ_2 t/2 a|r_2⟩⟨e|_i + h.c. ) ].
Assuming
Δ_1,2 ≫ g, Ω_1,2, δ_1,2 ,
the excited states |r_1⟩, |r_2⟩ can be adiabatically eliminated and we obtain an effective Hamiltonian:
H_eff = (g^2/2)( 1/Δ_1 + 1/Δ_2 ) a^† a
+ ∑_i=1^N { (Ω_2^2/4Δ_2)|g⟩⟨g|_i + (Ω_1^2/4Δ_1)|e⟩⟨e|_i
+ (g^2/2)( 1/Δ_2 - 1/Δ_1 ) a^† a ( |e⟩⟨e|_i - |g⟩⟨g|_i )
+ (gΩ_1/2Δ_1)( e^-iδ_1 t a|e⟩⟨g|_i + h.c. )
+ (gΩ_2/2Δ_2)( e^-iδ_2 t a|g⟩⟨e|_i + h.c. ) }.
The first two terms inside the sum over the atom index i correspond to the ac Stark shifts caused by the driving lasers. These terms, in addition to the first term in (<ref>), shift the bare energy eigenfrequencies by constant offsets. Thus they can be removed from the equation by moving to another interaction picture with respect to
H_2 = (g^2/2)( 1/Δ_1 + 1/Δ_2 ) a^† a
+ ∑_i=1^N { (Ω_2^2/4Δ_2)|g⟩⟨g|_i + (Ω_1^2/4Δ_1)|e⟩⟨e|_i }
= Δ_c^(s) a^† a + ∑_i { Δ_g^(s) |g⟩⟨g|_i + Δ_e^(s) |e⟩⟨e|_i },
with
Δ_c^(s) ≡ (g^2/2)( 1/Δ_1 + 1/Δ_2 ),
Δ_g^(s) ≡ Ω_2^2/(4Δ_2),
Δ_e^(s) ≡ Ω_1^2/(4Δ_1),
resulting in
H'_eff = U_2 H_eff U_2^† - H_2
= ∑_i=1^N [ (g^2/2)( 1/Δ_2 - 1/Δ_1 ) a^† a ( |e⟩⟨e|_i - |g⟩⟨g|_i )
+ (gΩ_1/2Δ_1)( e^-iδ'_1 t a|e⟩⟨g|_i + h.c. )
+ (gΩ_2/2Δ_2)( e^-iδ'_2 t a|g⟩⟨e|_i + h.c. ) ],
where
δ'_1 ≡δ_1+Δ_c^(s)+Δ_g^(s)-Δ_e^(s),
δ'_2 ≡δ_2+Δ_c^(s)+Δ_e^(s)-Δ_g^(s).
By setting
δ'_1 = δ'_2 ≡δ,
Ω_1/Ω_2 = Δ_1/Δ_2 ,
(<ref>) becomes
H'_eff = χ a^† a S_z + g_eff( e^-iδ t a + e^iδ t a^† ) S_x .
Here we have defined the following constants and operators:
χ ≡ g^2( 1/Δ_2 - 1/Δ_1 ),
g_eff ≡ gΩ_1/Δ_1 = gΩ_2/Δ_2 ,
S_z ≡ (1/2)∑_i=1^N ( |e⟩⟨e|_i - |g⟩⟨g|_i ),
S_x ≡ (1/2)∑_i=1^N ( |e⟩⟨g|_i + |g⟩⟨e|_i ).
The second term in (<ref>) is the desired Mølmer-Sørensen interaction:
H_MS = g_eff( e^-iδ t a + e^iδ t a^† ) S_x .
It is known that the integral of this Hamiltonian, denoted here U_MS(t), can be exactly calculated <cit.>.
U_MS(t) = e^-i( α(t) a^† + α^∗(t) a ) S_x e^iβ(t) S_x^2 ,
where
α(t) = i (g_eff/δ)( 1 - e^iδ t ),
β(t) = (g_eff/δ)^2 ( δ t - sin δ t ) .
α(t) draws a circle with a radius of g_eff/δ in phase space as seen in <ref> and it returns to the origin after every τ = 2π/δ. Therefore at t = nτ (n = 0, 1, 2, …), α(t) = 0 and U_MS becomes a propagator involving only the atomic degrees of freedom:
U_MS(t = nτ) = e^iβ_n S_x^2
where β_n = 2nπ (g_eff/δ)^2 sign(δ) can be expressed through the area A enclosed by α(t) as β_n = 2n sign(δ) A, and hence is called a geometric phase. By choosing the two-photon detuning such that
δ = 2√(m) g_eff (m = 1, 2, 3, …),
the geometric phase becomes β_m = sign(δ) π/2 at t = mτ and
U_MS can be used to generate maximally entangled states <cit.>. We define the gate time
t_gate = mτ = πδ/(2 g_eff^2).
In particular when N=2, it accomplishes the following transformations of the two-qubit basis states, which are equivalent to the controlled-not gate up to single-qubit rotations:
|ϕ_1⟩≡|gg⟩ →|Φ_1^(±)⟩ = 1/√(2)(|gg⟩± i|ee⟩),
|ϕ_2⟩≡|ge⟩ →|Φ_2^(±)⟩ = 1/√(2)(|ge⟩± i|eg⟩),
|ϕ_3⟩≡|eg⟩ →|Φ_3^(±)⟩ = 1/√(2)(± i|ge⟩+|eg⟩),
|ϕ_4⟩≡|ee⟩ →|Φ_4^(±)⟩ = 1/√(2)(± i|gg⟩+|ee⟩).
Here the plus and minus signs correspond to the sign of δ.
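Since the propagator at the closure times reduces to the pure spin unitary e^iβ_m S_x^2, the mapping above is easy to check with a few lines of linear algebra. The sketch below (NumPy/SciPy; the basis ordering |gg⟩, |ge⟩, |eg⟩, |ee⟩ is our own choice) applies U = e^iπS_x^2/2 to |gg⟩ and confirms unit overlap with (|gg⟩ + i|ee⟩)/√2 up to a global phase.

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

# Collective spin operator S_x for N = 2 (basis |gg>, |ge>, |eg>, |ee>)
Sx = 0.5 * (np.kron(sx, I2) + np.kron(I2, sx))

# Ideal gate at t = t_gate: beta_m = +pi/2, i.e. sign(delta) > 0
U = expm(1j * (np.pi / 2.0) * Sx @ Sx)

gg = np.array([1, 0, 0, 0], dtype=complex)
ee = np.array([0, 0, 0, 1], dtype=complex)
target = (gg + 1j * ee) / np.sqrt(2.0)

print(abs(np.vdot(target, U @ gg))**2)   # -> 1.0 (global phase drops out)
```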
So far we have neglected the first term in (<ref>) which represents the differential ac Stark shift induced by the cavity field:
H_AS = χ a^† a S_z.
This Hamiltonian can cause a deviation from the ideal time evolution of H_MS.
By setting Δ_1 = Δ_2 in addition to the conditions (<ref>) and (<ref>), which in turn means Ω_1 = Ω_2, χ vanishes. However, in general this additional condition may not be satisfied, since for δ ≪ Δ_1, Δ_2 it imposes the constraint ω_r_1 - ω_g ≈ ω_r_2 - ω_e on the energy structure. When this is not the case, the resulting deviation may not be negligible, depending on the magnitude of χ.
Another possible deviation could arise from dissipative processes such as the cavity field decay and atomic spontaneous emissions from the excited states.
In the following sections, we treat these two kinds of imperfections – the effect of χ and that of dissipative processes – in the case of two atoms.
§ EFFECT OF THE CAVITY-INDUCED AC STARK SHIFT
When χ≠ 0, inclusion of H_AS results in deviations from the ideal entangling gate operations (<ref>).
Let us denote the propagator of Hamiltonian (<ref>) by V(t). Then the wave function at time t is given by
|Ψ(t)⟩ = V(t)|Ψ(0)⟩,
where we assume that the initial state |Ψ(0)⟩ = |ϕ_i⟩|n⟩ (i = 1, 2, 3, 4), that is a tensor product of one of the logical basis states of two qubits (see (<ref>)) and a photon number state |n⟩ with an arbitrary n. The fidelity of the state |Ψ(t)⟩ with respect to the ideal gate output is
F_i,n(t) = ⟨Φ_i^(±)| tr_photon( |Ψ(t)⟩⟨Ψ(t)| ) |Φ_i^(±)⟩
= ⟨Φ_i^(±)| tr_photon( V(t) |ϕ_i⟩⟨ϕ_i| ⊗ |n⟩⟨n| V^†(t) ) |Φ_i^(±)⟩.
Here tr_photon is a partial trace over the photon degree of freedom and |Φ_i^(±)⟩ is one of the atomic states for the ideal gate operation shown in (<ref>), chosen accordingly to the initial state |ϕ_i⟩. The subscripts i and n represents the initial atomic state and photon number respectively.
Since H_AS is proportional to the photon number operator a^† a, the perturbation caused by this Hamiltonian is expected to increase with the number of photons in the cavity. <ref>a shows time evolutions of F_1,n(t), i.e. the state fidelity to |Φ_1^(+)⟩ when the initial state is prepared in |ϕ_1⟩|n⟩. Here we set δ = 2g_eff > 0.
When χ = 0, the fidelity reaches the maximum value of unity at t = π/g_eff (black dashed curve), and this behavior does not depend on the initial cavity state. However, when χ ≠ 0, the fidelity shows a strong dependence on the number of photons in the initial cavity state.
In particular, we see that there is an acute fidelity drop for cavity initial states |n⟩ with n > 0. By preparing the initial cavity state in the vacuum state, the maximum fidelity remains close to unity even though the effect through dynamically generated photons is still present as a small drop of the fidelity and a time shift of the peak. Therefore in order to obtain the best gate performance for a given χ, the cavity state has to be prepared close to the vacuum state.
However, since thermal excitations are negligible in the optical domain at room temperature (n ∼ 10^-20), the cavity state remains essentially in vacuum unless we intentionally drive the cavity field. Hence this restriction on the initial cavity state causes little problem for cavity QED systems in the optical domain. This is in stark contrast with ion traps, where active cooling of phonon modes is always required.
In addition to the dependence on the initial cavity states, H_AS also leads to a dependence of the gate performance on the initial atomic states. <ref>b shows the fidelities for different atomic initial states to their target states. As can be seen, the initial states |ϕ_2⟩ and |ϕ_3⟩ are more sensitive to the perturbation than |ϕ_1⟩ and |ϕ_4⟩ when the cavity mode is prepared in vacuum.
These dependences of the gate performance on the initial photonic and atomic states when χ≠ 0 can be illustrated by explicitly considering an approximation of the state fidelity for small χ. In doing so, the main difficulty arises from the fact that H_MS and H_AS do not commute with each other. In <cit.>, the authors presented steady-state analysis of the same Hamiltonian (<ref>) (or equivalently (<ref>)). However, here we are interested in explicit time evolutions of states under the Hamiltonian. In Appendix <ref> we show that in a certain interaction picture (defined by the relation (<ref>)) the propagator can be perturbatively expanded in terms of a Hamiltonian H_II(t) as shown in (<ref>). For the initial state |Ψ(0)⟩ = |ϕ_i⟩|n⟩, we consider the time evolution of the state overlap with its target state |Φ_i⟩|n⟩ (In the following, we assume δ > 0 without loss of generality and omit (+) in Φ_i^(+)). That is
η_i,n(t) = ⟨Φ_i|⟨n|U_I^†(t)U_II^†(t)V_II(t)|ϕ_i⟩|n⟩.
The corresponding state fidelity is given by η_i,n(t)^2. Note that this fidelity is not exactly same as F_i,n(t) given in (<ref>) as we did not trace out the photonic degree of freedom in (<ref>). However as long as the additional excitation of photons due to H_AS [Even though H_AS∝ a^† a alone preserves the number of the cavity photons, due to the fact that H_AS and H_MS do not commute, the commutators between them arising in the propagator V(t) causes a change of the cavity photons in addition to that caused by the MS-gate process.] is small for small χ, η_i,n(t)^2 is a good approximation of F_i,n(t). According to (<ref>), the state overlap η_i,n(t) can be expanded as follows:
η_i,n(t) = η_i,n^(0)(t) + η_i,n^(1)(t) + η_i,n^(2)(t) + ,
with
η_i,n^(0)(t) = ⟨Φ_i|⟨n|U_I^†(t)U_II^†(t)|ϕ_i⟩|n⟩,
η_i,n^(1)(t) = -i⟨Φ_i|⟨n|U_I^†(t)U_II^†(t)∫_0^tH_II(t') dt'|ϕ_i⟩|n⟩,
η_i,n^(2)(t)
= (-i)^2 ⟨Φ_i|⟨n|U_I^†(t)U_II^†(t)∫_0^t∫_0^t'H_II(t')H_II(t”) dt'dt”|ϕ_i⟩|n⟩.
η_i,n^(k) is on the order of O(χ^k) and η_i,n^(0)(t) is the state overlap for the ideal MS-gate without the cavity induced ac-Stark shift. Using the energy eigenstates of H_MS' given in (<ref>)–(<ref>) (see also (<ref>) for the definition of the double-bracket state), the initial states |ϕ_i⟩|n⟩ can be written as
|ϕ_1,4⟩|n⟩ = (1/2)∑_m d_mn(-α) |1, m⟩⟩ ∓ (1/√2) |0, n⟩⟩ + (1/2)∑_m d_mn(α) |-1, m⟩⟩,
|ϕ_2,3⟩|n⟩ = (1/2)∑_m d_mn(-α) |1, m⟩⟩ ± (1/√2) |S=0, S_x=0⟩|n⟩ - (1/2)∑_m d_mn(α) |-1, m⟩⟩,
where d_mn(±α) = ⟨m|D(±α)|n⟩ and the plus and minus signs in front of the second term in (<ref>) differentiate |ϕ_4⟩ and |ϕ_1⟩ respectively, and similarly |ϕ_2⟩ (plus) and |ϕ_3⟩ (minus) in (<ref>). Likewise the target states |Φ_i⟩|n⟩ are
|Φ_1,4⟩|n⟩ = (e^iπ/4/2)∑_m d_mn(-α) |1, m⟩⟩ ∓ (e^-iπ/4/√2) |0, n⟩⟩ + (e^iπ/4/2)∑_m d_mn(α) |-1, m⟩⟩,
|Φ_2,3⟩|n⟩ = (e^iπ/4/2)∑_m d_mn(-α) |1, m⟩⟩ ± (e^-iπ/4/√2) |S=0, S_x=0⟩|n⟩ - (e^iπ/4/2)∑_m d_mn(α) |-1, m⟩⟩.
In terms of these energy eigenstates, the ideal MS-gate without the cavity ac-Stark shift can be understood solely through the evolution of quantum phases in the interaction picture: during the ideal MS gate operation, each term of the form |j, n⟩⟩ (j = 0, ±1) in (<ref>) and (<ref>) acquires a phase -δ(n - (jα)^2)t. Note that |S=0, S_x=0⟩|n⟩ does not change because j = n = 0. At the time t = t_gate, the part of this phase originating from photons (= -nδ t) becomes an integer multiple of 2π, whereas the spin-dependent part (= (jα)^2 δ t) produces the relative phase required for |Φ_i⟩|n⟩ (i = 1, 2, 3, 4) in (<ref>) and (<ref>) if the condition (<ref>) is satisfied. If the cavity ac-Stark shift is present, it disturbs this phase evolution by inducing transitions between |j, n⟩⟩ and |j', n'⟩⟩ with |j - j'| = 1 (see <ref>).
Using expansions such as (<ref>) and (<ref>), η_i,n^(k) of any order k can be explicitly calculated. It can be seen that η_i,n^(1) is zero for i = 2 and 3 for any n:
η_2,n^(1) = η_3,n^(1) = 0.
This is because both (<ref>) and (<ref>) do not contain |0, m⟩⟩ for any m, whereas H_II consists only of off-diagonal terms of the form |0, m⟩⟩⟨⟨±1, n| and their conjugates (see (<ref>) – (<ref>)).
For i = 1 and 4, we get
η_1,n^(1) = (nχ/(2√2)) e^iδ n t ∑_m
[ ( e^-iE_1m t - e^-iE_0n t )/( E_0n - E_1m ) ]
× ( d_mn(α)^2 + d_mn(-α)^2 ),
η_4,n^(1) = -η_1,n^(1).
It is clear that both η_1,n^(1) and η_4,n^(1) become zero if n = 0. Therefore, if the cavity state is prepared in the vacuum (n = 0), the first-order term in (<ref>) is zero irrespective of the initial atomic states. This argument can be extended to higher odd orders, and one finds that η_i,0^(k) = 0 is satisfied for any i if k is an odd integer. If n ≠ 0, η_i,n^(k) is generally non-zero for i = 1 and 4. These observations endorse the relative resilience of the fidelity to the perturbation caused by H_AS in the case of n = 0, as previously illustrated in <ref>a.
Now we move on to the second order correction (<ref>) for the case that the cavity is initially prepared in the vacuum state.
By using (<ref>)– (<ref>) and (<ref>), we find
η_1,0^(2) = -(χ^2 e^-iπ/4/4) ∑_l,m,n l^2 e^-iE_1m t (-1)^m+n ( 1 + (-1)^l )
× d_m0(α) d_n0(α) d_lm(α) d_ln(α) Y_lmn(t),
η_2,0^(2) = -(χ^2 e^-iπ/4/4) ∑_l,m,n l^2 e^-iE_1m t (-1)^m+n ( 1 - (-1)^l )
× d_m0(α) d_n0(α) d_lm(α) d_ln(α) Y_lmn(t),
η_4,0^(2) = η_1,0^(2),
η_3,0^(2) = η_2,0^(2),
where
Y_lmn(t) = ∫_0^t ∫_0^t' e^-i(E_0l - E_1m)t' e^i(E_0l - E_1n)t'' dt' dt''
= [ 1/(E_1n - E_0l) ] ( ( e^i(E_1m - E_1n)t - 1 )/( E_1m - E_1n ) + ( e^-i(E_0l - E_1m)t - 1 )/( E_0l - E_1m ) ).
In order to assess the relative significance of η_i, 0^(2) (i = 1, 2, 3, 4) with respect to each other, we consider their functional dependences on α.
Since d_nm(α) = O(α^|n-m|), the product d_m0(α) d_n0(α) d_lm(α) d_ln(α) is O(α^m+n+|m-l|+|n-l|).
Hence, by considering possible combinations of l, m and n in (<ref>) and (<ref>), it can be found that at the lowest order in α,
η_1, 0^(2) = η_4, 0^(2) = O(α^4),
η_2, 0^(2) = η_3, 0^(2) = O(α^2).
The difference between (<ref>) and (<ref>) originates from the factors (1± (-1)^l) in (<ref>) and (<ref>). Since α = g_eff/δ < 1 (see (<ref>)), this difference indicates that the second order correction is smaller for the initial states |ϕ_1,4⟩|0⟩ than |ϕ_2,3⟩|0⟩ , which is also consistent with <ref>b.
Note that in <ref>b there is a small difference between the initial states |ϕ_1⟩|0⟩ (blue) and |ϕ_4⟩|0⟩ (red) whereas calculations based on (<ref>) and (<ref>) and even higher order terms in (<ref>) predict that they should exactly coincide. This is due to the partial trace of the photonic degree of freedom carried out in (<ref>) which is missing in the calculation of the state overlap (<ref>).
<ref> shows attainable maximal state fidelities as a function of χ for different initial states. For the blue and red solid lines we numerically calculated max(F_1,0(t)) and max(F_2,0(t)), respectively, by solving the Schrödinger equation, where the maximum is taken over time t for a given χ. On the other hand, the dashed lines are approximated fidelities using (<ref>) up to the second order. That is, max(|η^(0)_1,0(t)+η^(2)_1,0(t)|^2) for the blue dashed line and max(|η^(0)_2,0(t)+η^(2)_2,0(t)|^2) for the red dashed line.
They reproduce the behaviors of the exact solutions, namely the distinction between different initial states, well up to χ∼ 0.2 with an error in fidelity less than 10^-3, confirming the validity of our perturbative approach.
As the state fidelities to the target states differ among different initial sates in the presence of H_AS, the performance of the gate as a whole is quantified by the fidelity averaged over all possible initial states, called average gate fidelity. According to <cit.>, the average gate fidelity is calculated using the following formula.
F̅(t) = [ ∑_i,j tr( U_MS(t)(σ_i⊗σ_j)U_MS^†(t) E(σ_i⊗σ_j) ) + 16 ]/80 ,
where σ_i,j is one of the Pauli matrices or identity matrix and the sum runs through all possible combinations of those. E is a map between atomic density operators to represent the time evolution governed by H_eff':
E(ρ) = tr_photon( V(t)( ρ ⊗ |0⟩⟨0| ) V^†(t) ).
Taking the maximum of (<ref>) with respect to time t, <ref> shows the attainable average gate fidelities as a function of χ for different two-photon detunings δ. In the limit of δ/g_eff ≫ 1, the excitation of photons during the gate operation is increasingly suppressed as the radius of the trajectory (= g_eff/δ) in phase space reduces to zero. As a consequence, the perturbation by H_AS, proportional to a^† a, becomes negligible, such that the average gate fidelity remains close to unity for large values of δ (see the dash-dotted yellow trace in <ref>). However, this general trend does not necessarily apply to relatively small values of δ due to their oscillatory nature, as seen in <ref>. For example, the average gate fidelity for δ/g_eff = 4 is smaller than that for δ/g_eff = 2 up to χ ∼ 0.8.
§ CAVITY AND ATOMIC DECAY
A deviation from the ideal gate operation could arise from dissipative processes such as the cavity decay and atomic spontaneous emissions. In this section, the effects of these processes are studied while assuming χ = 0 for brevity.
The optical fields in cavity QED systems suffer inevitable scattering/absorption or transmission losses at the cavity mirrors. In order to incorporate these losses, we use a master equation for the time evolution:
dρ/dt = -i[H'_eff, ρ] + κ(2aρ a^†-a^† aρ- ρ a^† a),
where ρ is the density operator for the total system including the atomic and photonic degrees of freedom and κ is the amplitude dissipation rate of the cavity field. Since the field dissipation does not depend on the atomic states, there is no difference between the average gate fidelity and the fidelity for a specific initial atomic state.
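For reference, a minimal sketch of such a simulation is given below, written for the QuTiP library (an assumption on our part; any master-equation solver would do). The photon-space truncation Nph and the rates, quoted in units of g_eff, are illustrative. The collapse operator √(2κ)a reproduces the dissipator of the equation above, and the atomic part of the final state is compared against the ideal output for the input |gg⟩|0⟩.

```python
import numpy as np
import qutip as qt

Nph = 8                                  # photon-space truncation (assumption)
g_eff, kappa = 1.0, 0.1                  # illustrative, in units of g_eff
delta = 4.0 * g_eff                      # m = (delta/2g_eff)^2 = 4, a closure point

a  = qt.tensor(qt.destroy(Nph), qt.qeye(2), qt.qeye(2))
Sx = 0.5 * (qt.tensor(qt.qeye(Nph), qt.sigmax(), qt.qeye(2))
            + qt.tensor(qt.qeye(Nph), qt.qeye(2), qt.sigmax()))

# H_MS of the equation above with chi = 0; strings carry the time dependence
H = [[g_eff * a * Sx, 'exp(-1j*delta*t)'],
     [g_eff * a.dag() * Sx, 'exp(1j*delta*t)']]

g, e = qt.basis(2, 0), qt.basis(2, 1)            # qubit states |g>, |e>
psi0 = qt.tensor(qt.basis(Nph, 0), g, g)         # cavity vacuum, atoms in |gg>
t_gate = np.pi * delta / (2.0 * g_eff**2)

out = qt.mesolve(H, psi0, np.linspace(0.0, t_gate, 400),
                 c_ops=[np.sqrt(2.0 * kappa) * a], args={'delta': delta})

target = (qt.tensor(g, g) + 1j * qt.tensor(e, e)).unit()
rho_at = out.states[-1].ptrace([1, 2])           # trace out the cavity mode
print(qt.expect(target.proj(), rho_at))          # fidelity for the |gg> input
```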
<ref> shows attainable maximum fidelities for κ ≠ 0. As can be seen, the detrimental effect of the field dissipation can be mitigated by increasing the two-photon detuning δ, at the expense of increasing t_gate. This can be understood as follows: in the regime where the photonic excitation is very small, the probability P_κ that a photon loss occurs per unit time is proportional to the mean photon number ∼ O(g_eff^2/δ^2), which decreases quadratically with δ. On the other hand, t_gate only scales linearly with δ (see (<ref>)). Therefore the probability that a photon loss occurs within the gate time is P_κ t_gate ∼ O(1/δ), and hence increasing δ improves the achievable gate fidelity.
The detunings required for a gate fidelity ≥ 0.99 are δ/g_eff ≈ 20, 100 and 900 for κ/g_eff = 0.1, 1.0 and 10, respectively (see <ref>). Hence, in principle the gate can be implemented with high fidelity even in the regime where g_eff < κ. However, since a large detuning δ means a long gate time t_gate, increasing δ arbitrarily may not be an option if the system suffers from other decoherence mechanisms, such as the decay of atomic coherence, that cannot be mitigated by increasing δ.
By adiabatically eliminating the excited states, the atomic spontaneous emissions from the excited states |r_1⟩ and |r_2⟩ can be effectively modeled with Lindblad operators acting on |e⟩ and |g⟩ as in the following master equation <cit.>:
dρ/dt = -i[H'_eff, ρ] + ∑_i ( L(C_1g^(i), ρ) + L(C_1e^(i), ρ) + L(C_2g^(i), ρ) + L(C_2e^(i), ρ) ),
L(C, ρ) ≡ 2 C ρ C^† - C^† C ρ - ρ C^† C ,
C_1g^(i) ≡ √(γ_1g) ( (g/Δ_1) a|g⟩⟨g|_i + (Ω_1/2Δ_1) |g⟩⟨e|_i ),
C_1e^(i) ≡ √(γ_1e) ( (g/Δ_1) a|e⟩⟨g|_i + (Ω_1/2Δ_1) |e⟩⟨e|_i ),
C_2g^(i) ≡ √(γ_2g) ( (g/Δ_2) a|g⟩⟨e|_i + (Ω_2/2Δ_2) |g⟩⟨g|_i ),
C_2e^(i) ≡ √(γ_2e) ( (g/Δ_2) a|e⟩⟨e|_i + (Ω_2/2Δ_2) |e⟩⟨g|_i ),
and γ_jξ (j = 1, 2, ξ = e, g) is the spontaneous decay rate associated with the |r_j⟩→|ξ⟩ transition. These decay rates and their branching ratios (e.g. γ_1g/γ_1e) differ to a large extent among different physical systems depending on their specific energy structures.
Here we first consider the simplest case where γ_1g = γ_1e = γ_2g = γ_2e (≡γ) holds. In addition we assume that Δ_1 = Δ_2 (≡Δ) and Ω_1 = Ω_2 (≡Ω) hence χ = 0 is satisfied. <ref> shows numerical calculations of the average gate fidelity using this setting for different values of Δ and δ.
If the contributions from the terms proportional to the photon annihilation operator a in the collapse operators (<ref>)- (<ref>) are negligible due to the small photon excitation, the effective spontaneous decay rate γ_eff is given by
γ_eff ≈ γΩ^2/(4Δ^2).
Therefore the probability that a spontaneous emission occurs during the gate time is
P_spont ∼ γ_eff t_gate = πγδ/(8g^2) = π√(m) γΩ/(4gΔ).
This probability needs to be sufficiently small in order for the atomic decay to be negligible for the gate fidelity. In that regard, one can see that increasing δ is detrimental rather than beneficial, which can also be seen in <ref>.
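The competition between the two error channels can be made explicit by combining the scalings quoted above: the photon-loss probability during the gate behaves as P_κ t_gate ≈ πκ/δ (taking n̄ ≈ g_eff^2/δ^2 and t_gate = πδ/2g_eff^2; this prefactor is our rough estimate, not a result quoted above), while P_spont = πγδ/8g^2 grows linearly in δ. A short Python sketch minimizing the summed error over δ:

```python
import numpy as np

def gate_error(delta, kappa, gamma, g):
    # photon loss ~ pi*kappa/delta (rough estimate);
    # spontaneous emission P_spont = pi*gamma*delta/(8 g^2) as quoted above
    return np.pi * kappa / delta + np.pi * gamma * delta / (8.0 * g**2)

kappa, gamma, g = 0.1, 1.0, 60.0              # illustrative rates, same units
delta_opt = g * np.sqrt(8.0 * kappa / gamma)  # analytic minimum of the sum
deltas = np.linspace(1.0, 500.0, 5000)
errs = gate_error(deltas, kappa, gamma, g)
print(delta_opt, deltas[np.argmin(errs)], errs.min())
```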
In the general case where the cavity and atomic decays are both simultaneously present in the system, one needs to carefully choose experimental parameters depending on the relative magnitudes of κ and γ to minimize their total effect on the gate performance. As an example of such practical systems, we now consider single neutral ^87Rb atoms coupled to a high finesse optical cavity. Note that this particular system was theoretically and experimentally studied in <cit.> and <cit.> respectively. See Appendix <ref> for the details of modeling this system. First we assume a state-of-the-art conventional Fabry-Perot cavity with a length of 50 μ m and finesse of 10^6. This results in g/2π = 60 MHz and κ/2π = 1.5 MHz for a single atom.
With the other parameters listed in set 1 of Table <ref>, we obtain g_eff/2π = 123 kHz and χ/2π = -240 kHz, and a maximum average gate fidelity of F̄ = 0.844 at t = 260 μs (see <ref>).
In order to further improve the gate fidelity, one can use an ultra-high-Q micro-sphere/toroidal cavity, where a greater atom-cavity coupling is expected. Here we assume (g, κ)/2π = (200, 0.1) MHz <cit.> together with the parameters listed in set 2 of Table <ref>. Then we obtain g_eff/2π = 204 kHz, χ/2π = -331 kHz and F̄ = 0.986 at t = 98 μs (<ref>).
§ CONCLUSION
In this article we have proposed a novel entangling quantum logic gate for general cavity QED systems. This scheme is inspired by the Mølmer-Sørensen gate in ion traps, whose well-established physics enables us to have a clear picture of the basic gate dynamics and the effects of the imperfections. We have analyzed and evaluated possible adverse effects caused by the cavity-induced ac Stark shift, cavity field decay and atomic spontaneous emission. Using the ^87Rb cavity QED system as a practical example, we have demonstrated that a high-fidelity gate operation is possible with sufficiently large atom-cavity coupling. However, we believe that our scheme is applicable to a broader class of physical systems, not limited to atomic cavity QED, due to the fact that cavity interactions are ubiquitous in many different physical systems. In particular, nitrogen-vacancy centers in diamond and semiconductor quantum dots may serve as good qubit candidates in the solid state, due to the rapidly improving optical control of their electronic quantum states <cit.>.
In ion traps, there have been various proposals for extension of the MS-gate and entangling operations using a state-dependent force in general <cit.>. Using the framework that we presented in this article, potentially these schemes can be also translated to cavity QED and used to further improve the gate fidelity.
Furthermore, we would like to point out that the realization of the AJC Hamiltonian in cavity QED means that not only the MS-gate but many techniques developed in the context of ion trapping are transferable to optical cavity QED systems. Such examples can include engineered spin-spin interactions <cit.> and quantum state preparation of the bosonic modes <cit.>. Moreover, the open-system nature of cavity QED could add another interesting aspect to such realizations through the external driving and extraction of the optical field.
We gratefully acknowledge support from EPSRC through the UK Quantum Technology Hub: NQIT - Networked Quantum Information Technologies (EP/M013243/1 and
EP/J003670/1). H.T. and P.N.S thank Diego Porras for helpful discussions.
§ PERTURBATIVE EXPANSION OF THE PROPAGATOR IN AN INTERACTION PICTURE
Starting from Hamiltonian (<ref>) and moving on to an interaction picture defined by a unitary operator
U_I(t) = e^-iδ t a^† a ,
we get a new system Hamiltonian
H_I = χ a^† a S_z + δ a^† a + g_eff( a + a^† ) S_x .
We define a new Hamiltonian
H_MS' = δ a^† a + g_eff(a + a^†)S_x
such that H _I = H_AS + H_MS'.
It can be easily shown that the following states form a complete set of the energy eigenstates of H_MS' for N = 2.
|S=1, S_x=1⟩|n, α⟩,
|S=1, S_x=0⟩|n⟩,
|S=1, S_x=-1⟩|n, -α⟩,
|S=0, S_x=0⟩|n⟩
where |n, α⟩ = D(α)|n⟩ and D(α) is the displacement operator with amplitude α = -g_eff/δ. Their energy eigenvalues are a function of S_x and n but not of S and are given by
E_jn = δ(n-(jα)^2),
for |S_x=j⟩|n, jα⟩ (j = 0, ± 1).
Now we further move on to a second interaction picture with
U_II(t) = e^iH'_MS t ,
that leads to a new system Hamiltonian
H_II(t) = e^iH'_MS t H_AS e^-iH'_MS t .
A wave function in this second interaction picture |Ψ_II(t)⟩ is related to the one in the original picture |Ψ(t)⟩ by
|Ψ_II(t)⟩ = U_II(t)U_I(t)|Ψ(t)⟩.
On the other hand, the time evolution of |Ψ_II(t)⟩ is described by a propagator V_II(t).
|Ψ_II(t)⟩ = V_II(t)|Ψ(0)⟩
V_II(t) = 1- i∫_0^tH_II(t') dt'
+(-i)^2∫_0^t∫_0^t'H_II(t')H_II(t”) dt'dt”+
where we have used a Dyson series expansion for V_II(t) as H_II(t) is not commutative at different times. Note also that the initial state |Ψ(0)⟩ is identical among all the pictures. From (<ref>) and (<ref>), we get
|Ψ(t)⟩ = U_I^†(t)U_II^†(t)V_II(t)|Ψ(0)⟩.
From (<ref>)
V(t) = U_I^†(t)U_II^†(t)V_II(t).
Since H_II(t)|S=0, S_x=0⟩ = 0 and [H_II(t), S^2] = 0, non-zero matrix elements of H_II(t) are limited in the S=1 manifold.
In other words |S=0, S_x=0⟩|n⟩ with an arbitrary photon number n is an eigenstate of the perturbative Hamiltonian H_AS with an eigenvalue of 0. Therefore this state is not affected by the perturbation and the gate works for this state even in the presence of H_AS.
From now on we only consider the matrix elements of H_II(t) in the S=1 manifold, and we use the following double-bracket abbreviation for the state notation:
|j, n⟩⟩ ≡ |S=1, S_x=j⟩|n, jα⟩, j = -1, 0, 1.
The matrix elements of H_II(t) are calculated as follows:
⟨⟨1, m| H_II(t) |1, n⟩⟩ = 0,
⟨⟨-1, m| H_II(t) |1, n⟩⟩ = 0,
⟨⟨0, m| H_II(t) |1, n⟩⟩ = (mχ/√2) e^i(E_0m - E_1n)t ⟨m|D(α)|n⟩ = χ Λ_mn(t;α),
⟨⟨0, m| H_II(t) |-1, n⟩⟩ = (mχ/√2) e^i(E_0m - E_1n)t ⟨m|D(-α)|n⟩ = χ Λ_mn(t;-α).
Here we have defined
Λ_mn(t;α) = (m/√2) e^i(E_0m - E_1n)t ⟨m|D(α)|n⟩ .
With these matrix elements, H_II(t) can be expressed as
H_II(t) = χ ∑_m,n ( Λ_mn(t;α) |0, m⟩⟩⟨⟨1, n| + Λ_mn(t;-α) |0, m⟩⟩⟨⟨-1, n| ) + h.c.
Likewise
H_II(t)H_II(t') = χ^2 ∑_l,m,n ( Λ_ml(t;α)Λ_nl^∗(t';α) + Λ_ml(t;-α)Λ_nl^∗(t';-α) ) |0, m⟩⟩⟨⟨0, n|
+ χ^2 ∑_j,j'=±1 ∑_l,m,n Λ_lm(t;jα) Λ_ln^∗(t';j'α) |j, m⟩⟩⟨⟨j', n| .
Higher order integrands in (<ref>) can be as well calculated straightforwardly.
§ A MODEL FOR ^87RB ATOMS COUPLED TO AN OPTICAL CAVITY
We employ the D_1 transition between the 5^2S_1/2 and 5^2P_1/2 states of single ^87Rb atoms. Among the eight Zeeman sublevels in the 5^2S_1/2 manifold, we pick |F=2, m_F=-2⟩ and |F=1, m_F=-1⟩ as our qubit states |g⟩ and |e⟩, respectively. These qubit states can be coupled to each other via the upper 5^2P_1/2 states. Due to the relatively small hyperfine splitting in the 5^2P_1/2 manifold (=812 MHz), we take not only |F'=1, m_F=-1⟩ (=|r_2⟩) but also |F'=2, m_F=-1⟩ (=|r'_2⟩) into account as a mediating upper level for one of the cavity-assisted Raman transitions (see <ref>). The expressions for the effective parameters are obtained as follows <cit.>:
g_eff^(1) = gΩ_1/(√6 Δ_1),
g_eff^(2) = (gΩ_2/2√6)( 1/Δ_2 + 1/(Δ_2 + ω'_12) ),
χ = g^2( 1/[4(Δ_2 + ω'_12)] + 1/(12Δ_2) - 1/(3Δ_1) ).
Here ω'_12/2π = 812 MHz, and g_eff^(1) and g_eff^(2) are the effective coupling strengths of the Raman transitions corresponding to the JC and AJC terms, respectively. Note that the above expressions are modified from (<ref>) and (<ref>) due to the Clebsch-Gordan coefficients and the off-resonant coupling to |r'_2⟩. The condition g_eff^(1) = g_eff^(2) imposes a constraint on Ω_1 and Ω_2.
Each upper state can decay to the ground qubit states at a rate proportional to the Clebsch-Gordan coefficient for the relevant transition. In addition, they can also decay to other Zeeman sublevels in the 5^2S_1/2 manifold which are not shown in <ref>, effectively bringing the system out of the qubit subspace. In order to incorporate such decays into the model, we introduce a virtual auxiliary level |u⟩. For example, the decay rate to |u⟩ from |r_1⟩ is equal to the total sum of all the decay rates from |r_1⟩ to the ground states except for the ones for |r_1⟩→|g⟩ and |r_1⟩→|e⟩. The same applies to the decays from |r_2⟩ and |r'_2⟩ to |u⟩. In this way the decays to |u⟩ embody all the decays to the outside of the qubit subspace. Note that in the real system it is possible for the atomic population outside the qubit subspace to be pumped back into the subspace again. Here we ignore such processes, and the population in |u⟩ only accumulates in the simulation.
In the end we have nine different Lindblad operators per atom in the form of (<ref>) with the following collapse operators (here 2γ = 2π· 5.75 MHz):
C_1g^(i) = √(γ/3) ( (g/(√3 Δ_1)) a|g⟩⟨g|_i + (Ω_1/(2√2 Δ_1)) |g⟩⟨e|_i ),
C_1e^(i) = √(γ/2) ( (g/(√3 Δ_1)) a|e⟩⟨g|_i + (Ω_1/(2√2 Δ_1)) |e⟩⟨e|_i ),
C_2g^(i) = √(γ/2) ( (g/(2√3 Δ_2)) a|g⟩⟨e|_i + (Ω_2/(2√2 Δ_2)) |g⟩⟨g|_i ),
C_2e^(i) = (1/2)√(γ/3) ( (g/(2√3 Δ_2)) a|e⟩⟨e|_i + (Ω_2/(2√2 Δ_2)) |e⟩⟨g|_i ),
C_2'g^(i) = √(γ/6) ( (g/(2(Δ_2+ω'_12))) a|g⟩⟨e|_i + (Ω_2/(2√6 (Δ_2+ω'_12))) |g⟩⟨g|_i ),
C_2'e^(i) = (√γ/2) ( (g/(2(Δ_2+ω'_12))) a|e⟩⟨e|_i + (Ω_2/(2√6 (Δ_2+ω'_12))) |e⟩⟨g|_i ),
C_1u^(i) = √(5γ/6) ( (g/(2√3 Δ_1)) a|u⟩⟨g|_i + (Ω_1/(2√2 Δ_1)) |u⟩⟨e|_i ),
C_2u^(i) = √(17γ/12) ( (g/(2√3 Δ_2)) a|u⟩⟨e|_i + (Ω_2/(2√2 Δ_2)) |u⟩⟨g|_i ),
C_2'u^(i) = √(19γ/12) ( (g/(2(Δ_2+ω'_12))) a|u⟩⟨e|_i + (Ω_2/(2√6 (Δ_2+ω'_12))) |u⟩⟨g|_i ).
|
http://arxiv.org/abs/1701.07704v2 | 20170126135548 | First-principles insights into ultrashort laser spectroscopy of molecular nitrogen | ["Mohammad Reza Jangrouei", "S. Javad Hashemifar"] | physics.atm-clus | ["physics.atm-clus", "physics.chem-ph", "physics.optics"] |
hashemifar@cc.iut.ac.ir
Department of Physics, Isfahan University of Technology,
84156-83111 Isfahan, Iran
In this research, we employ accurate time-dependent density functional calculations for
ultrashort laser spectroscopy of nitrogen molecule.
Laser pulses with different frequencies, intensities, and durations are applied
to the molecule and the resulting photoelectron spectra are analyzed.
It is argued that relative orientation of the molecule in the laser pulse
significantly influence the orbital character of the emitted photoelectrons.
Moreover, the duration of the laser pulse is also found to be very effective in controlling
the orbital resolution and intensity of photoelectrons.
Angular resolved distribution of photoelectrons are computed at different
pulse frequencies and recording times.
By increasing the laser pulse intensity over several orders of magnitude, the theoretical threshold of
two-photon absorption in the nitrogen molecule is determined.
First-principles insights into ultrashort laser spectroscopy of molecular nitrogen
Mohammad Reza Jangrouei, S. Javad Hashemifar
December 30, 2023
==================================================================================
§ INTRODUCTION
The recent progress in the field of ultra-short laser pulses has provided
novel opportunities to capture fast dynamics of atoms and electrons in chemical reactions
and photo ionization phenomena <cit.>.
The studies of Ahmed Hassan Zewail on the dynamics of chemical reactions
using femtosecond spectroscopy
earned him the Nobel Prize in Chemistry in 1999 <cit.>.
In 2001, Ferenc Krausz succeeded in generating attosecond laser pulses <cit.>,
which provide invaluable abilities to investigate and even control electron dynamics
in photo ionization phenomena <cit.>.
Haessler and coworkers used a train of attosecond laser pulses in the presence of
a weak infrared field to ionize the nitrogen molecule <cit.>.
They identified two ionization channels in the system, corresponding to
the ground state and excited state of the ionized molecule. Kelkensberg and others applied
the same method to ionize hydrogen molecule and observe changes in charge distribution of
the system on attosecond time scales <cit.>.
Siu et al. found that the time delay between an attosecond pulse train and
a corresponding infrared field may be used to control the dissociative ionization
of oxygen molecules <cit.>.
Penka and others applied time dependent density functional theory in
the nonlinear nonperturbative regime to investigate laser induced photo ionization
in CO and H_2CO molecules <cit.>.
They found that the interplay between the ionization potential,
the orbital shape, and the laser polarization axis significantly
influence the ionization process.
In the present work, we employ time-dependent ab-initio calculations to study
photo ionization of N_2 molecule under irradiation of short laser pulses.
The effects of the frequency and intensity of the pulse on the photoionization will be investigated.
§ COMPUTATIONAL METHOD
Our calculations have been performed in the framework of time dependent Kohn-Sham (TDKS)
density functional theory <cit.>,
which provides a proper single-particle description of many-body
systems in the presence of time-dependent external potentials (e.g. an electromagnetic pulse).
The adiabatic local density approximation (ALDA) is adopted for the description of the time-dependent
exchange-correlation functional in this approach. It has already been argued to be
the proper functional for the description of atomic clusters under
intense electromagnetic fields <cit.>.
We used the Octopus package to solve the TDKS equations by employing
the norm-conserving pseudo potential technique <cit.>.
The KS orbitals are expanded on a real space grid defined inside geometrical boxes
around atoms or around whole system. Two general approaches are implemented in this package for
solving the TDKS equations: linear response and explicit real time propagation methods. In the
linear response regime, which is used to address the effects of a weak uniform white
electromagnetic noise, the absorption spectra and the character of electronic excitations
of the system are determined.
While in the presence of strong laser pulses, explicit propagation of KS
orbitals in real time domain is considered.
In order to calculate the emitted photoelectron spectra of a sample after strong laser irradiation,
a detector region is defined around the system and then the Wigner quasi-probability distribution
function in the phase space:
ω( R, p,t)=∫d s/2π^2e^i p· sρ( R+ s/2, R- s/2,t)
is used to integrate the photoelectrons in the detector region.
In the above equation, ρ( r, r',t) is a two body density matrix
and R and s are the center of mass and relative coordinates.
The momentum resolved photoelectron spectrum is then given by:
P( p)=lim_t→∞∫d R ω( R, p,t)
In this equation the integral is calculated in the detector region after
a sufficiently long time to ensure contribution of all photoelectrons.
In the Kohn-Sham approach, the two body density matrix is defined by the following
sum over occupied states:
ρ_KS( r, r', t) = ∑_i^occ. ψ_i( r, t) ψ_i^∗( r', t)
This procedure needs a calculation area of hundreds of Angstroms to give reliable
photoelectron spectra. In order to reduce the required calculation area,
a mask region is defined before the detector region <cit.>.
§ RESULTS AND DISCUSSIONS
First, we performed some static DFT calculations to identify the equilibrium properties
of the N_2 molecule. The equilibrium bond length and binding energy of nitrogen were found
to be about 1.09 Å and 8.89 eV, respectively,
which agree with the measured data (1.1 Å and 9.79 eV) <cit.>.
The energy gap between the highest occupied molecular orbital (HOMO) and
the lowest unoccupied molecular orbital (LUMO) was determined to
be about 8.2 eV (table <ref>).
Comparing this parameter with the experimental energy gap of N_2 is questionable
because of the frozen character of the orbitals in static DFT calculations,
while in practice electron excitation has a non-trivial influence on the orbital energy levels.
This problem is well resolved in time-dependent DFT, where orbitals are allowed
to relax during electronic excitations.
The obtained absorption spectra of N_2 within TDDFT, using the Casida
linear response and real time propagation approaches, are presented in Fig. <ref>.
The agreement between these two spectra is acceptable, especially at lower energies.
At higher energies, the accuracy of the linear response approach decreases and
hence the real time propagation approach is more reliable.
The obtained energy gap within both Casida and real time approaches is about 11.0 eV.
In order to compare the obtained absorption spectra with experiment,
we note the complex absorption spectrum of nitrogen molecule <cit.>,
which includes weak dipole-forbidden transitions from 6 to 12.4 eV
and strong dipole-allowed transitions from 12.4 to 18.8 eV <cit.>.
The first 20 electronic excitations, identified within the Casida approach,
are listed in table <ref>. Our results confirm that the
peaks below 11 eV have negligible dipole moment and very weak strength, while strong
dipole-allowed peaks occur above this threshold. Moreover, we may conclude that the observed
experimental transitions below 11 eV are likely non-electronic excitations
(rotational or vibrational transitions).
The ionization energy of the nitrogen molecule, defined as the difference between the minimized
energies of the neutral and ionized molecule, was found to be 16.02 eV,
which compares well with the measured value of 15.80 eV <cit.>.
For calculations of photoelectron spectra, we used spherical boxes around the molecule with an
optimum internal radius of 12 Å for region A, an external radius of 22 Å for
detector region and a sine-mask function.
The optimum grid spacing in the atomic spheres was found to be 0.18 Å while
the time step for real time evolution was set to 1 mħ/eV (∼0.66 as).
We used Gaussian envelope laser pulses with a length of 4 ħ/eV (2.63 fs).
The frequency of the extreme ultraviolet (xuv) laser pulses was set
to some specific odd (9-17^ th) multiples of a fundamental frequency of 1.565 eV.
These odd harmonics have been already produced by propagating intense laser pulses in a
gas jet and then used for photo ionization of nitrogen molecule <cit.>.
The 12^ th multiple was also considered for more accurate inspection.
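The pulse parameters above follow from simple unit conversions (ħ ≈ 0.6582 eV·fs); the short Python snippet below reproduces the harmonic photon energies and the quoted durations.

```python
HBAR_EV_FS = 0.6582               # hbar in eV*fs, so 1 hbar/eV = 0.6582 fs

fundamental = 1.565               # eV
harmonics = [9, 11, 12, 13, 15, 17]
print([round(n * fundamental, 2) for n in harmonics])
# -> [14.08, 17.22, 18.78, 20.34, 23.47, 26.6]

print(4 * HBAR_EV_FS)             # pulse length 4 hbar/eV -> ~2.63 fs
print(1e-3 * HBAR_EV_FS * 1e3)    # time step 1 m(hbar/eV) in attoseconds -> ~0.66
```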
The calculated photoelectron spectra at the desired pulse frequencies and two different
geometries are presented in Fig. <ref>.
In these geometries, the molecule is either parallel or perpendicular to
the direction of the laser pulse propagation.
First, we focus on the perpendicular geometry.
It is seen that at the lowest pulse frequency (14.08 eV), two peaks appear
in the spectra, indicating the emission of two different kinds of photoelectrons
from the system.
We will argue that these peaks are likely attributed to the two sigma molecular orbitals
of the nitrogen molecule.
Taking into account the calculated ionization energy of N_2 (16.02 eV),
it seems that a laser pulse with a frequency of 14.08 eV should not be able
to create any photoelectrons.
Therefore, the observed very weak ionization is due either to multi-photon absorption
or to electron tunneling in the strong laser field.
The low kinetic energy of photoelectrons (∼ 2 eV) rules out the multi-photon
absorption mechanism. The suppressed electrostatic potential of a nucleus in
a strong laser field is depicted in Fig. <ref>.
It is clearly seen that the laser field may reduce the electrostatic confinement
barrier of the nucleus and hence enhances electron tunneling.
Therefore tunneling ionization may happen at energies lower than
the normal ionization energy. The very low intensity of the photoelectron spectra
(Fig. <ref>) provides further evidence for the tunneling mechanism of ionization
in our system.
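The barrier-suppression picture of Fig. <ref> can be quantified with a simple one-electron, quasi-static estimate (a standard textbook argument, not part of our TDDFT calculations): for V(x) = -1/x - ϵx in atomic units the barrier maximum is V_max = -2√ϵ, so tunneling gives way to over-the-barrier ionization when V_max reaches -I_p, i.e. at ϵ_BSI = I_p^2/4. A minimal Python sketch, assuming a hydrogen-like Z = 1 core:

```python
AU_INTENSITY = 3.51e16   # W/cm^2 corresponding to a 1 a.u. electric field
HARTREE_EV = 27.211

def bsi_intensity(ip_ev):
    """Barrier-suppression intensity for a one-electron system (Z = 1),
    quasi-static estimate eps_BSI = Ip^2/4 in atomic units."""
    ip_au = ip_ev / HARTREE_EV
    eps_bsi = ip_au**2 / 4.0
    return AU_INTENSITY * eps_bsi**2

print(f"{bsi_intensity(16.02):.2e} W/cm^2")   # ~2.6e14 for our computed Ip
```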
At laser frequencies higher than 14.08 eV, the photoelectrons gain
more kinetic energy and hence the spectra shift to higher energies.
We observe that, in the perpendicular geometry, the two peaks occur at all frequencies,
although in some cases one of the peaks is weaker and hence appears
as a shoulder in the spectra.
On the other hand, in the parallel geometry, except for the first frequency,
in all other frequencies only one peak is visible in the spectra.
In order to understand this feature, we focus on the laser frequency of 18.78 eV
and identify the peak positions in the perpendicular and parallel geometries (Fig. <ref>).
In this regard, we have tried to deconvolute the photoelectron spectra in two
Gaussian functions. The center of the Gaussian peaks may be assigned to
the kinetic energy of the photoelectrons.
It is seen that (Fig. <ref>) the single peak of the parallel geometry
is decomposed into a major peak around 7.16 eV and a minor peak at 7.80 eV.
On the other hand, in the perpendicular geometry two peaks at 5.24 and 8.31 eV
are contributing to the spectra.
The major peak of the parallel geometry is located between the two peaks of
the perpendicular geometry. By subtracting the kinetic energy of photoelectrons
from the laser pulse energy, we may determine the binding energy of the emitted electrons.
Then, we consider the three highest occupied molecular orbitals of
N_2, one π molecular orbital located between two σ orbitals
(table <ref>, Fig. <ref>).
These orbital levels exhibit very good consistency with the obtained binding energy
of the photoelectrons.
This consistency enables a brief anatomy of the calculated photoelectron spectra.
The σ orbitals are mainly distributed along the molecular axis,
while in the case of π orbital, out of axis distribution may also play a significant role.
In the perpendicular geometry, the pulse electric field is along the molecular axis
and hence mainly σ and σ^* photoelectrons are emitted from the system.
In the parallel geometry, the pulse field is perpendicular to the molecular axis and hence mainly ionizes
the π orbital of the molecule, with a minor contribution from the σ orbital.
These arguments presumably explain occurrence of one (two) peaks in the photoelectron spectra
of the molecular nitrogen in the parallel (perpendicular) geometries.
The intensity of the photoelectron spectra exhibits different trends in the parallel and
perpendicular geometries (Fig. <ref>).
In the parallel geometry, we observe that by increasing the pulse frequency,
the photoelectron spectra intensity increases smoothly.
But in the perpendicular geometry, a more complicated trend is seen.
The photoelectron intensity decreases in the frequency range of 14.08 to 18.78 eV
and then increases from 20.34 to 26.60 eV.
In fact, the theoretical description of strong-field ionization has already shown that
the tunneling rate is a complicated function of the laser pulse frequency
and intensity <cit.>.
In order to investigate the feasibility of multi-photon absorption in molecular nitrogen,
we considered laser pulses with a frequency of 18.78 eV and five different intensities
from 10^13 to 10^18 W/cm^2.
The obtained results are presented in Fig. <ref>.
For intensities lower than 10^17 W/cm^2, the general features of the spectra
do not change with increasing pulse intensity.
The spectra have two peaks, and the photoelectron intensity scales linearly
with the pulse intensity.
At the pulse intensity of 10^17 W/cm^2, a third peak appears in the spectra
with a kinetic energy of about 25 eV.
Adding this value to the binding energy of σ orbital (10.47 eV) gives
a minimum required energy of about 35.5 eV for emission of the corresponding photoelectrons,
which is almost twice the energy of a single laser photon (18.78 eV).
Hence, we conclude that at this high intensity two photons absorption happens in the system.
At the pulse intensity of 10^18 W/cm^2, the intensity of the third peak
increases by about two orders of magnitude compared with the previous one.
Hence, the two-photon absorption intensity scales with the square of
the laser pulse intensity, in good agreement with the theoretical description of
this nonlinear optical phenomenon <cit.>.
In the presence of high-intensity laser pulses, the ponderomotive energy
may also influence the results.
In the case of long pulses the ponderomotive energy has a well-defined
constant value ϵ^2/(4ω^2), where
ϵ is the electric field amplitude <cit.>.
However, in the case of intense ultrashort pulses it is argued that
the ponderomotive energy is not constant and the calculation of this
parameter is not straightforward <cit.>.
Moreover, it is discussed that in this situation the Stark shift
may substantially compensate the ponderomotive shift <cit.>.
Hence, we ignore the consideration of a non-constant ponderomotive energy
in the current work.
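For completeness, the constant long-pulse value U_p = ϵ^2/(4ω^2) is easily evaluated in atomic units; the sketch below converts the pulse intensities used in this work into U_p at the photon energy of 18.78 eV (plain unit conversions, nothing specific to our TDDFT setup).

```python
AU_INTENSITY = 3.51e16   # W/cm^2 for a 1 a.u. field amplitude
HARTREE_EV = 27.211

def ponderomotive_ev(intensity_wcm2, photon_ev):
    """Long-pulse ponderomotive energy U_p = eps^2/(4 w^2) in atomic units,
    returned in eV."""
    eps2 = intensity_wcm2 / AU_INTENSITY       # eps^2 in a.u.
    w = photon_ev / HARTREE_EV                 # photon energy in a.u.
    return HARTREE_EV * eps2 / (4.0 * w**2)

for I in (1e13, 1e15, 1e17):
    print(f"I = {I:.0e} W/cm^2 -> U_p = {ponderomotive_ev(I, 18.78):.3g} eV")
```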
The angular distributions of the photoelectrons resulting from the laser pulse
frequencies of 14.08, 20.34 and 26.60 eV are presented in Fig. <ref>.
Obviously, increasing frequency of the incident pulse enhances the kinetic energy
of the photoelectrons and hence speeds up its propagation away from the molecule.
The laser pulse is propagating perpendicular to the molecule and the electric field
polarization is parallel to the molecule.
Hence, we observe that photoelectrons are propagating in the molecule direction.
We have also investigated time evolution of the photoelectron distribution.
At the laser pulse frequency of 23.47 eV, the distribution is recorded after three
propagation times (3, 4, and 5 ħ/eV).
We observe that before arriving at the detector wall, the photoelectron density
has a smooth distribution; however, after incidence on the detector wall,
many fluctuations appear in the distribution density.
Throughout this project, we have mainly used ultrashort laser pulses with
a duration of 4 ħ/eV (∼ 2.6 fs) to mimic the real experimental situations.
In order to see the effect of pulse duration on the photoelectron spectra,
a laser pulse with a four times longer duration (∼ 10.5 fs) was also considered in
our study (Fig. <ref>).
In general, the energy-time uncertainty principle, ΔE Δt ≳ ħ,
implies that increasing the pulse duration should decrease the energy spread.
As is obvious in the figure, the Fourier transform of the longer pulse
has a much narrower dispersion around the central frequency of 18.78 eV.
Therefore, the resulting photoelectron spectrum has a higher resolution in terms of
the orbital character of the emitted photoelectrons.
We observe that the two peaks of the corresponding spectrum are significantly sharper,
compared with the photoelectron spectrum of the shorter pulse.
Moreover, the frequency spread of the pulse has a nontrivial influence on
the intensity of the photoelectrons.
A laser pulse with sharper frequency distribution is clearly much more efficient
for electron emission from the sample.
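The bandwidth narrowing can be checked directly by Fourier-transforming a model Gaussian pulse; the sketch below (NumPy; the envelope widths are illustrative, not our actual pulse parameters) shows the spectral width scaling as the inverse of the pulse duration.

```python
import numpy as np

def spectral_width(sigma_t, w0_ev=18.78, hbar=0.6582):
    """Energy-domain standard deviation of |FFT|^2 of a Gaussian pulse of
    duration sigma_t (fs) with carrier photon energy w0_ev (eV).
    Analytically the width scales as hbar/sigma_t."""
    t = np.linspace(-40.0, 40.0, 2**14)                   # time grid, fs
    field = np.exp(-t**2 / (2 * sigma_t**2)) * np.cos(w0_ev / hbar * t)
    spec = np.abs(np.fft.rfft(field))**2
    E = 2 * np.pi * hbar * np.fft.rfftfreq(t.size, d=t[1] - t[0])  # eV
    mean = np.sum(E * spec) / np.sum(spec)
    return np.sqrt(np.sum((E - mean)**2 * spec) / np.sum(spec))

for s in (1.0, 4.0):              # pulse durations in fs (illustrative)
    print(s, spectral_width(s))   # 4x longer pulse -> ~4x narrower spectrum
```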
§ CONCLUSIONS
Real time propagation of the single particle Kohn-Sham orbitals within
adiabatic local density approximation was applied to study photoelectron spectra
of nitrogen molecule in short laser pulses.
It was argued that when the direction of the pulse propagation is perpendicular to the molecule,
σ photoelectrons are mainly emitted from the system,
while in the parallel geometry the highest occupied π orbital is more ionized.
It was seen that longer laser pulses, with lower frequency dispersion,
are more efficient for the creation of well orbital-resolved photoelectrons.
Angular resolved distributions were plotted to observe real space propagation of
photoelectrons at different pulse frequencies and propagation time.
It was argued that at pulse intensities of 10^17 W/cm^2 and higher,
new photoelectrons with much higher kinetic energy are emitted from the
molecule, which indicates the occurrence of the two-photon absorption phenomenon.
This work was supported by the Vice Chancellor for Research Affairs of
Isfahan University of Technology (IUT).
SJH acknowledges the Abdus Salam International Center for Theoretical Physics
(ICTP) for supporting his summer (2016) visit to ICTP.
[zewail2000] A. H. Zewail, Angewandte Chemie International Edition 39, 2586 (2000).
[hentschel2001] M. Hentschel, R. Kienberger, C. Spielmann, G. A. Reider, N. Milosevic, T. Brabec, P. Corkum, U. Heinzmann, M. Drescher, and F. Krausz, Nature 414, 509 (2001).
[krausz2009] F. Krausz and M. Ivanov, Reviews of Modern Physics 81, 163 (2009).
[lepine2014] F. Lépine, M. Y. Ivanov, and M. J. Vrakking, Nature Photonics 8, 195 (2014).
[pazourek2015] R. Pazourek, S. Nagele, and J. Burgdörfer, Reviews of Modern Physics 87, 765 (2015).
[haessler2009] S. Haessler, B. Fabre, J. Higuet, J. Caillat, T. Ruchon, P. Breger, B. Carré, E. Constant, A. Maquet, E. Mével, et al., Physical Review A 80, 011404 (2009).
[kelkensberg2011] F. Kelkensberg, W. Siu, J. Pérez-Torres, F. Morales, G. Gademann, A. Rouzee, P. Johnsson, M. Lucchini, F. Calegari, J. L. Sanz-Vicario, et al., Physical Review Letters 107, 043002 (2011).
[siu2011] W. Siu, F. Kelkensberg, G. Gademann, A. Rouzée, P. Johnsson, D. Dowek, M. Lucchini, F. Calegari, U. De Giovannini, A. Rubio, et al., Physical Review A 84, 063412 (2011).
[penka2014] E. F. Penka, E. Couture-Bienvenue, and A. D. Bandrauk, Physical Review A 89, 023414 (2014).
[ullrich2011] C. A. Ullrich, Time-Dependent Density-Functional Theory: Concepts and Applications (OUP Oxford, 2011).
[fratalocchi2011] A. Fratalocchi and G. Ruocco, Physical Review Letters 106, 105504 (2011).
[castro2006] A. Castro, H. Appel, M. Oliveira, C. A. Rozzi, X. Andrade, F. Lorenzen, M. Marques, E. Gross, and A. Rubio, phys. stat. sol. (b) 243, 2465 (2006).
[de2012] U. De Giovannini, D. Varsano, M. A. Marques, H. Appel, E. K. Gross, and A. Rubio, Physical Review A 85, 062515 (2012).
[lide2006] D. R. Lide (ed.), CRC Handbook of Chemistry and Physics, 87th ed. (CRC, Boca Raton, Fla., 2006).
[vieitez2008] M. Vieitez, T. Ivanov, W. Ubachs, B. Lewis, and C. de Lange, Journal of Molecular Liquids 141, 110 (2008).
[hudson1971] R. D. Hudson, Reviews of Geophysics 9, 305 (1971).
[radzig1986] A. Radzig and A. Smirnov, Applied Optics 25, 2763 (1986).
[popruzhenko2008] S. Popruzhenko, V. Mur, V. Popov, and D. Bauer, Physical Review Letters 101, 193003 (2008).
[shen1984] Y.-R. Shen, The Principles of Nonlinear Optics (Wiley-Interscience, New York, 1984).
[della2016] R. Della Picca, A. A. Gramajo, C. R. Garibotti, S. D. López, and D. Arbó, Physical Review A 93, 023419 (2016).
[fushitani2016] M. Fushitani, C.-N. Liu, A. Matsuda, T. Endo, Y. Toida, M. Nagasono, T. Togashi, M. Yabashi, T. Ishikawa, Y. Hikosaka, et al., Nature Photonics 10, 102 (2016).
|
http://arxiv.org/abs/1701.08099v1 | 20170127162300 | Cooper pairs in the Borromean nuclei $^6$He and $^{11}$Li using continuum single particle level density | [
"R. M. Id Betan"
] | nucl-th | [
"nucl-th"
] |
Instituto de Física Rosario (CONICET-UNR), Bv. 27 de Febrero 210 bis, S2000EZP Rosario. Argentina.
Facultad de Ciencias Exactas, Ingeniería y Agrimensura (UNR), Av. Pellegrini 250, S2000BTP Rosario. Argentina.
Instituto de Estudios Nucleares y Radiaciones Ionizantes (UNR), Riobamba y Berutti, S2000EKA Rosario. Argentina.
A Borromean nucleus is a bound three-body system which is pairwise unbound, because none of the two-body subsystem interactions is strong enough to bind the pairs. As a consequence, the single-particle spectrum of a neutron in the core of a Borromean nucleus is purely continuum, similarly to the spectrum of a free neutron, yet two valence neutrons are bound in such a core. Most of the usual approaches do not use the true continuum to solve the three-body problem but use a discrete basis, such as wave functions in a finite box. In this paper the proper continuum is used to solve the pairing Hamiltonian in the continuum energy spectrum by using the single particle level density devoid of the free gas. It is shown that the density defined in this way modulates the pairing in the continuum. The partial-wave occupation probabilities for the Borromean nuclei ^6He and ^11Li are calculated as a function of the pairing strength. While at the threshold strength the (s_1/2)^2 and (p_3/2)^2 configurations are equally important in ^6He, the (s_1/2)^2 configuration is the main one in ^11Li. For very small strength the (s_1/2)^2 configuration becomes dominant in both Borromean nuclei. At the physical strength, the calculated wave function amplitudes show good agreement with other methods and experimental data, which indicates that this simple model grasps the essence of the pairing in the continuum.
Keywords: Continuum pairing; Single particle density; Borromean nuclei
PACS: 21.10.Ma; 21.60.-n; 21.10.Gv
§ INTRODUCTION
A Borromean nucleus <cit.> is a bound three-body system in which any pair subsystem is unbound. This is so because neither the bare neutron-neutron interaction nor the core-neutron interaction is strong enough to keep any pair subsystem together. The nuclei ^6He and ^11Li have these characteristics; hence they are both Borromean nuclei, formed by a core plus two neutrons <cit.>. The properties of these nuclei have been studied in the two-body framework <cit.> as well as in the three-body framework <cit.> with effective interactions. More elaborate formalisms, such as the Faddeev equations <cit.> and ab initio calculations <cit.>, have also been used to scrutinize the properties of these exotic nuclei.
Pairing is a fundamental part of the residual interaction <cit.>. It is particularly important in Borromean nuclei, since the core-neutron system is unbound while the same core with two neutrons is bound. Even though the origin of the pairing enhancement is not fully understood, at least two different models provide possible mechanisms for it in the ^11Li nucleus. The first one is the interplay between pairing and collective vibrations <cit.>. This interpretation is consistent <cit.> with the experimental cross section of Ref. <cit.>. The second mechanism is provided by the tensor force <cit.>, which explains the observed Coulomb breakup strength and the charge radius of Ref. <cit.>. The simplest pairing interaction is the constant pairing <cit.>. It will be shown that this effective interaction, simple as it is, in conjunction with the single particle density, captures the essence of the correlations of the two neutrons in the Borromean nuclei ^6He and ^11Li.
This paper studies the neutron-neutron pairing correlations in the core of the Borromean nuclei ^6He and ^11Li. Due to the Borromean character of these two nuclei, correlations between continuum states are the only ones present. Usually these correlations are incorporated through quasibound states obtained by putting the system in a large spherical box. In Refs. <cit.> the scattering wave functions are used in order to treat the continuum explicitly; in this work, instead, the continuum energy spectrum is handled using the continuum single particle level density (CSPLD). This density is defined as the difference between the mean-field and free densities <cit.>. The use of the CSPLD was implemented earlier in many-body calculations in the Bardeen-Cooper-Schrieffer and Richardson solutions of the pairing interaction in Refs. <cit.>.
The paper is organized as follows. In Section <ref> the partial wave probability amplitudes are derived in terms of the CSPLD. In Section <ref> the single particle representation is defined, while the binding energy and partial wave amplitudes as functions of the strength are calculated in Section <ref>. The conclusions are given in Section <ref>. The paper contains an appendix (Appendix <ref>) which gives some details about the CSPLD, which modulates the pairing interaction in the continuum.
§ FORMALISM
The goal of this section is to express the partial wave probabilities in terms of the partial wave single particle level density. We find it clearer to formulate the problem by starting with continuum discretized wave functions obtained by putting the system in a spherical box (what we call the box representation) <cit.>. After the equations have been obtained, we take the formal limit of an infinitely large spherical box. We arrive at a dispersion equation similar to Eq. (4) for the two-electron system <cit.>, which includes the continuum single particle level density.
Let us consider the Borromean nucleus as a three-body system formed by an inert core plus two valence neutrons. The Hamiltonian which governs the system reads,
H= h(1) + h(2) + V
where h is the single-particle Hamiltonian (see Eq. (<ref>) in sect. <ref>) and V is the residual interaction between the two valence neutrons. The discrete eigenvalues of h are labeled by {a,m_a}={ n_a,l_a,j_a,m_a} <cit.>
h(r̅) ψ_am_a(r̅)=ε_a ψ_am_a(r̅) ,
with ε_a>0 for all a.
The eigenfunctions of h are used as the single particle representation to build the antisymmetrized and normalized two-neutron basis |a,b;JM ⟩. This basis is used to expand the normalized two-neutron wave function |Ψ⟩_JM in terms of the unknown amplitudes X^J_ab <cit.>
|Ψ⟩_JM = ∑_b≤ a X^J_ab | a,b;JM ⟩
with
∑_b≤ a (X^J_ab)^2 = 1
From the Schrödinger equation H |Ψ⟩_JM = E_J |Ψ⟩_JM we get the following eigenvalue equation for the three-body problem in the shell model framework
(E_J-ε_a-ε_b) X^J_ab
= ∑_d≤ c⟨ c,d;JM | V | a,b;JM ⟩ X^J_cd
We are going to consider as particle-particle effective interaction the constant pairing with matrix elements (m.e.) given by <cit.>
⟨ c,d;JM | V | a,b;JM ⟩
= -G/2√((2j_c+1)(2j_a+1))δ_J0δ_cdδ_ab
complemented with a partial wave cutoff l_max and an energy cutoff ε_max which will be specified in Section <ref>.
From the secular equation (<ref>) and the interaction m.e. (<ref>) we get the dispersion relation
1 = ∑_nlj(2j+1)/2G/2ε_nlj-E_0
which gives the J=0 correlated two-neutron energies. This expression shows that every state in the discretized continuum, no matter whether it represents a resonant or a non-resonant continuum state <cit.>, contributes with the same strength to the particle-particle correlation. This is an unphysical feature, since the expectation is that states in resonant configurations will have a greater probability to interact with each other, and with greater strength, than states in non-resonant continuum configurations. We will see below how this attribute of the constant pairing in the discretized continuum is modified in the continuum representation using the single particle level density.
From the secular equation (<ref>) and the normalization condition Eq. (<ref>) we get the two-particle wave function amplitudes
X^J_ab = N δ_ab√(2j_a+1)/2ε_a - E_0
with N the normalization coefficient. Then, the two-particle ground state reads |Ψ⟩ = ∑_nlj X_nlj | nlj,nlj;00 ⟩ with
X_nlj= N √(2j+1)/2ε_nlj - E_0 .
We define the partial wave amplitude by summing up the contribution of all positive energy states for each partial wave (l,j)
X_lj = N ∑_n √(2j+1)/2ε_nlj - E_0 ,
where the coefficient N is fixed by the normalization condition ∑_lj X^2_lj=1, and E_0 is obtained by solving the dispersion relation (<ref>).
In Appendix <ref> we show that what makes sense in the limit R→∞ of the size of the box is not lim_R →∞ f(k_n) but lim_R →∞ [f(k_n)-f(k_n^(0))], i.e., the difference between the correlated and the uncorrelated magnitudes <cit.>. This is a practical way to get rid of the density due to the free nucleons. A similar subtraction prescription was proposed by Bonche et al. <cit.> for the calculation of the contribution of unbound states in the nuclear Hartree-Fock framework at finite temperature. The probability amplitude X_lj in the continuum representation reads,
X_lj = √(2j+1) N ∫_0^ε_maxg_lj(ε)/(2ε - E_0) dε
with
g_lj(ε) = 1/πd δ_lj/dε
and δ_lj(ε) the partial wave phase shift (Appendix <ref>).
The probability amplitude X_lj might be negative for some partial wave if g_lj(ε) takes negative values, but the partial probability X^2_lj=(X_lj)^2,
X^2_lj = (2j+1) N^2 [ ∫_0^ε_maxg_lj(ε)/(2ε - E_0) dε]^2 ,
is a positive magnitude. The value of N is defined such that the normalization ∑^l_max_lj X^2_lj=1 is fulfilled.
Notice that if we had instead defined the partial wave probability as X^2_lj = ∑_n (X_nlj)^2, then, when the size of the box is taken to infinity, X^2_lj∝∫g_lj(ε)/(2ε - E_0)^2 dε could become negative if g_lj(ε) takes mainly negative values in the interval (0,ε_max).
By taking the box limit of Eq. (<ref>) we get the following dispersion relation
1 = ∑_lj ^l_max(2j+1)/2∫_0^ε_maxG g_lj(ε)/2ε-E_0 dε
This expression differs physically from (<ref>) not only because the limit of an infinite box has been taken, but mainly because the density without the free fermion gas is used (see Appendix and Ref. <cit.>). Now it is clear that resonant and non-resonant continuum states do not make the same contribution. In the applications it will be shown how the density affects the partial wave occupation probability. We will find that g_lj(ε) is intense around a resonance and small everywhere else. Hence, the above expression can be interpreted as saying that the CSPLD modulates the pairing interaction in the continuum.
In terms of the CSPLD g(ε) we get a dispersion equation similar to Eq. (4) of Cooper's system <cit.>,
2/G = ∫_0^ε_maxg(ε)/2ε-E_0 dε ,
where (see Appendix <ref>)
g(ε)=∑_lj^l_max (2j+1) g_lj(ε) .
Equations (<ref>) and (<ref>) give the energy and the probability occupation, respectively, for the two neutrons in Borromean nuclei interacting by a constant pairing force in the continuum representation through the partial wave CSPLD.
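As an illustration of how Eqs. (<ref>) and (<ref>) are used in practice, the following Python sketch solves the dispersion relation for E_0 with Gauss-Legendre quadrature, as done in Section <ref>. The CSPLD is replaced here by a toy Lorentzian for a single resonant partial wave with degeneracy 2j+1=4; all numbers are illustrative stand-ins, not the densities computed in this work.

```python
import numpy as np
from scipy.optimize import brentq

eps_max = 9.0  # MeV, energy cutoff

def g(eps, e_r=0.8, gamma=0.6, deg=4):
    # toy stand-in for the CSPLD: (2j+1) times a normalized Lorentzian resonance
    return deg * (gamma / (2 * np.pi)) / ((eps - e_r) ** 2 + gamma ** 2 / 4)

# Gauss-Legendre nodes and weights mapped onto (0, eps_max)
x, w = np.polynomial.legendre.leggauss(200)
eps = 0.5 * eps_max * (x + 1.0)
wts = 0.5 * eps_max * w

def dispersion(E0, G):
    # root of  int_0^eps_max g(eps)/(2 eps - E0) d eps - 2/G  for E0 < 0
    return np.sum(wts * g(eps) / (2.0 * eps - E0)) - 2.0 / G

G = 1.427  # MeV; illustrative value (the physical strength found below for 6He)
E0 = brentq(dispersion, -50.0, -1e-8, args=(G,))
print(f"correlated two-neutron energy E0 = {E0:.3f} MeV")
```

The same quadrature nodes can then be reused to evaluate the partial-wave amplitudes X_lj of Eq. (<ref>).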
§ APPLICATIONS
§.§ Single particle representation
The Woods-Saxon (WS) mean field places the neutron 0p_1/2 state below the 1s_1/2 state; this order is experimentally found in the nucleus ^5He but not in ^10Li. The ground state of ^10Li is the 1/2^+ state, which corresponds to the 1s_1/2 state in the shell model picture. In order to reproduce the experimental order in the nucleus ^10Li, it is usual to use parity-dependent parameters for the WS <cit.>. Alternatively, we add to the WS a deep Gaussian potential <cit.> which produces the same effect. The Gaussian parameters are chosen in such a way that it strongly (mildly) affects the s (p) state. In this way, the same mean field can be used for odd and even states to describe the neutron states in ^10Li.
The single particle Hamiltonian which determines the representation through the eigenfunction of Eq. (<ref>) is
h(r̅) = - ħ^2/2μ∇^2_r̅
+ V_WS(r)
+ V_g(r)
+ V_so(r) (l̅·s̅) ,
where r̅ denotes the coordinate of the nucleon and μ the reduced mass of the core-neutron system. The central and spin-orbit potentials in terms of r=|r̅| are given by the following expressions
V_WS(r) = - V_0/1+exp(r-R/a)
V_so(r) = - V_so/ra2/ħ^2exp(r-R/a)/[1+exp(r-R/a)]^2
V_g(r) = - V_g e^-r^2/a^2_g
with R=r_0A^1/3.
The mean-field parameters in (<ref>)-(<ref>) are adjusted using the code Gamow <cit.> in order to reproduce as well as possible the low-lying levels of the nuclei ^5He and ^10Li. Table <ref> gives the values of these parameters.
In Table <ref> we compare the calculated <cit.> and experimental energies. The real and imaginary parts of the complex energies of ^5He are very similar to the experimental resonance parameters. The ground state energy of ^10Li is real and negative, but this nucleus is unbound. This is an antibound state <cit.> with wave number k_0=-i 0.033 fm^-1. An antibound state is an unbound single particle state with negative real energy, which would be bound if the mean field were a bit stronger <cit.>. The p_1/2 energy in ^10Li was fitted to the average of the two known experimental values.
The energy cutoff ε_max is defined by using the expression which relates its value to the effective range r_nn=2.75 fm <cit.>, obtained for the three-dimensional delta interaction in Ref. <cit.>
ε_max = ħ^2/m( 4/π r_nn)^2 ,
which gives 8.884 MeV using mc^2=939.57 MeV and ħ c=197.327 MeV fm. In our calculations we use ε_max=9 MeV.
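The number quoted above is straightforward to reproduce; a short check in Python:

```python
import math

hbar_c = 197.327  # MeV fm
mc2 = 939.57      # MeV, neutron rest energy
r_nn = 2.75       # fm, neutron-neutron effective range

eps_max = (hbar_c ** 2 / mc2) * (4.0 / (math.pi * r_nn)) ** 2
print(f"eps_max = {eps_max:.3f} MeV")  # -> 8.884 MeV, rounded up to 9 MeV
```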
Using the mean field which reproduces the low-lying levels of Table <ref>, we calculate the neutron partial wave phase shifts δ_lj(ε) as a function of the energy (up to the energy cutoff ε_max) using the code ANTI <cit.>, and then we calculate each partial density with Eq. (<ref>). The CSPLD g(ε) of Eq. (<ref>), shown in Fig. <ref>, is calculated by summing up the partial wave densities up to the angular momentum cutoff l_max=4. Partial waves with larger angular momentum only mildly modify the density for energies ε<ε_max.
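Schematically, the step from tabulated phase shifts to the CSPLD amounts to a numerical derivative and a degeneracy-weighted sum. The sketch below illustrates this with a toy Breit-Wigner p_3/2 phase shift; the real input would be the phase shifts produced by a scattering code such as ANTI.

```python
import numpy as np

def cspld(eps, delta_lj, degeneracy):
    """g_lj = (1/pi) d(delta_lj)/d(eps) and the (2j+1)-weighted total."""
    g_lj = {lj: np.gradient(d, eps) / np.pi for lj, d in delta_lj.items()}
    g_tot = sum(degeneracy[lj] * g_lj[lj] for lj in g_lj)
    return g_lj, g_tot

eps = np.linspace(0.01, 9.0, 500)  # MeV grid up to eps_max
# toy resonant phase shift: rises through pi/2 at E_r = 0.8 MeV, width ~0.6 MeV
delta = {"p3/2": np.pi / 2 + np.arctan((eps - 0.8) / 0.3)}
deg = {"p3/2": 4}  # 2j + 1
g_lj, g_tot = cspld(eps, delta, deg)  # g_tot peaks at the resonance energy
```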
The complex energy poles of the S-matrix manifest themselves on the real energy axis by shaping the CSPLD. We found two resonances below 9 MeV for the ^5He nucleus (see Table <ref>). While the p_3/2 resonance appears as a bump centered around 800 keV in Fig. <ref>, there is no signal of the p_1/2 resonance because of its large width (∼ 5 MeV). The fingerprint of the ground state 1/2^+ of ^10Li appears as a very sharp structure (labeled as s_1/2 in Fig. <ref>) in the density, very close to the continuum threshold. This is consistent with the low-energy behavior of the scattering wave functions u_lj(k,r) <cit.> in the presence of a bound or antibound state with energy ε∝ k_0^2 ≲ 0 close to the threshold
u_lj(kr) ∝√( 2 k | k_0 | /k^2+|k_0|^2) u_lj(|k_0|r) ,
in ^10Li, | k_0 |=0.033 fm^-1. The density has another sharp structure around 200 keV corresponding to the first excited state (1/2^- state). The last structure observed in the ^10Li spectrum is due to the 5/2^+ resonance around 4.4 MeV.
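The size of this near-threshold enhancement is easy to quantify; a small sketch evaluating the scaling factor of the expression above for the ^10Li antibound-state wave number:

```python
import numpy as np

k0 = 0.033  # fm^-1, antibound-state wave number in 10Li
for k in (0.01, 0.033, 0.1, 0.3):  # fm^-1
    factor = np.sqrt(2 * k * k0 / (k ** 2 + k0 ** 2))
    print(f"k = {k:5.3f} fm^-1 -> factor = {factor:.2f}")
# the factor is maximal (= 1) at k = |k0| and falls off for k >> |k0|,
# concentrating strength just above the continuum threshold
```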
Notice that the scattering wave functions themselves are not used in the formulation, but rather the CSPLD, i.e. the information on the structure of the continuum is coded in g(ε) as defined in Eqs. (<ref>) and (<ref>). The single particle representation is formed by N_He=174 and N_Li=289 discretized real energy states for the Helium and Lithium systems, respectively. These numbers of mesh points are chosen so as to make the results stable. The positions of the mesh points are selected following the structure of the CSPLD, i.e. states around the resonant energies are favored. The narrower the resonance, the more mesh points are needed to describe the density smoothly; this explains why more states are used to describe Lithium than Helium up to the same energy cutoff.
§.§ Results
The ground-state energies of the ^6He and ^11Li nuclei as functions of the pairing strength G are calculated and shown in Fig. <ref>. They were obtained from Eq. (<ref>) using Gauss-Legendre quadrature for the integration. The experimental ground state energies E_Exp=-0.973 MeV <cit.> and E_Exp=-0.369 MeV <cit.> for ^6He and ^11Li, respectively, are marked by dotted horizontal lines. Let us call the physical strength G_ph the value of G for which the experimental energy is reproduced. For the ^6He system we get G_ph^He(9MeV)=1.427 MeV, while for ^11Li we get G_ph^Li(9MeV)=0.553 MeV.
Taking a different energy cutoff ε_max should only renormalize the pairing strength and not change the calculated properties of the system. In order to test this statement, we calculate the wave function amplitudes for a second model space with ε_max=18 MeV (using 90 extra mesh points to describe the density in the interval ε=(9,18) MeV). The evolution of the ground state energy for this second model space is also shown in Fig. <ref>.
For ^6He, the value of the physical strength in the second model space is G_ph^He(18MeV)=1.507 MeV. This figure is larger than for the smaller model space. The usual trend for the strength as a function of the energy cutoff is that the former decreases as the latter increases. The inversion in our model is due to the small negative values of the tail of the ^5He density (see Fig. <ref>). The value of the physical strength for ^11Li is G_ph^Li(18MeV)=0.542 MeV. This figure is very similar to the one for the first model space (ε_max=9 MeV), indicating that the Lithium system is less sensitive to the energy cutoff than the Helium one, probably due to the proximity of G_ph^Li to the continuum threshold.
The l=0, 1, and 2 components of the occupation probabilities for the two different model spaces (energy cutoffs) are given in Table <ref>. Since the results compare well with each other, we will adopt, for the purpose of comparison with other methods, the model space with ε_max=9 MeV obtained from Eq. (<ref>).
The occupation probabilities of the ground-state wave function of ^6He are compared with other methods in Table <ref>. The calculated (p_3/2)^2 contribution is not far from the one obtained using a density-dependent contact interaction with box basis functions in Refs. <cit.>. Our best agreement is with the result obtained using a contact-delta interaction in the basis of the continuum scattering s, p and d wave functions of Ref. <cit.>. Notice that the previous work <cit.> using only the p wave gives a much bigger value for the (p_3/2)^2 configuration. The simultaneous comparison of the (p_3/2)^2 and (p_1/2)^2 configurations shows a good agreement with Ref. <cit.>, which uses Gaussian basis functions in the complex scaling framework. A remarkable difference with all other methods is the large contribution of (s_1/2)^2 in our model.
It is experimentally well established that the two main configurations of the two neutrons in the Borromean nucleus ^11Li are (s_1/2)^2 and (p_1/2)^2. Ref. <cit.> shows that the contribution of the second configuration is (51± 6)%. In Table <ref> we compare our result with that of the cluster model of Ref. <cit.>, the density-dependent contact interaction of Refs. <cit.> and that of the Random Phase Approximation (RPA) of Ref. <cit.>. In general we observe a good agreement with all these methods. In particular, the calculated X^2_s contribution lies in between the results of the three-body and collective models, while X^2_p seems to agree better with the result of Ref. <cit.>.
As a last study of the properties of these two Borromean nuclei, we analyze how the ratio between X^2_s and X^2_p changes as the pairing strength is artificially decreased. Figure <ref> shows the result for the ^6He nucleus. At the physical strength, the ground state wave function is dominated by the (p_3/2)^2 configuration. As the strength is decreased, the p contribution decreases while the s one increases. The system becomes unbound (changes to positive energy, see Fig. <ref>) at G_th≃ 0.55 MeV, called the threshold strength (dashed vertical lines in Fig. <ref>). At this value of the strength both the (s_1/2)^2 and (p_3/2)^2 configurations are equally important. Figure <ref> shows the evolution of the two main components of the wave function in ^11Li. At the physical strength, both the (s_1/2)^2 and (p_1/2)^2 configurations are sizable in the ^11Li nucleus. As the strength is decreased, the s configuration becomes more and more important, being the dominant one at the threshold strength G_th≃ 0.005 MeV.
A common feature of the Lithium and Helium nuclei is that a small pairing strength favors the (s)^2 configuration to the detriment of (p)^2 in both Borromean nuclei. A difference between them is that at the threshold strength the wave function of the Lithium is almost 100% (s_1/2)^2, while the two neutrons in the Helium share their strength between the (s_1/2)^2 and (p_3/2)^2 configurations. Figures <ref> and <ref> show that both Borromean nuclei ^6He and ^11Li are unbound until the pairing force creates enough correlations to bind all three bodies together. For the three-body system n-n-^9Li this transition occurs very close to the continuum threshold; hence a very small correlation between the two neutrons in the ^9Li core is enough to bind the three-body system. This behavior of the two neutrons in the ^9Li core closely resembles that of the electron Cooper pair <cit.>, with the difference that in our finite system the threshold strength is not zero. The small value of G_th may be due to the presence of the antibound state close to the threshold in the n-^9Li system. The antibound state may also affect other observables, for example the dipole transition <cit.>.
§ CONCLUSIONS
The ground state energy and wave function of the Borromean nuclei ^6He and ^11Li have been studied as functions of the pairing strength using the single particle level density. The model consists of a three-body system with two valence neutrons outside the inert cores ^4He and ^9Li. The neutrons lie in the continuum of their respective mean fields and are correlated through a constant pairing interaction modulated by the continuum single particle level density. The single particle representation was defined using the continuum single particle level density obtained by the subtraction method <cit.>, while the cutoff energy was set using the neutron-neutron effective range <cit.>. In order to compare with other formalisms and experimental data, the pairing strength was fixed using the ground state energies of ^6He and ^11Li. Good agreement with other methods was found for both nuclei. Finally, even though the (s_1/2)^2 configuration becomes dominant as the strength is artificially decreased in both Borromean systems, a unique feature of the continuum s states in the ^11Li system appears due to the presence of the near-threshold antibound state: an extremely small (although finite) strength is enough to bind the two neutrons in the ^9Li core.
This simple model shows that the essence of the pairing in the continuum is captured through the continuum single particle level density.
This work was supported by the National Council of Research PIP-0625 (CONICET, Argentina).
§ PARTIAL WAVE SINGLE-PARTICLE LEVEL DENSITY
In this appendix we give details about the continuum single-particle level density (CSPLD) which is used in this work for the constant pairing interaction in the continuum energy representation. This density is derived from the box representation and is expressed in terms of the derivative of the partial wave phase shift. We closely follow the considerations made by Beth and Uhlenbeck in the calculation of the second virial coefficient <cit.>.
A partial-wave scattering state is characterized by the angular momentum l, the total angular momentum j and the continuum wave number k. This generalized eigenfunction of the single-particle Hamiltonian with continuum eigenvalue ε=ħ^2/2 μ k^2 (where μ is the reduced mass) is characterized asymptotically by the phase shift δ_lj(k) <cit.>,
u_lj(k,r) →sin[ k r -l π/2 + δ_lj(k) ]
when r →∞.
This asymptotic behavior, together with the condition that for a given partial wave the phase shift tends to zero as k →∞, determines δ_lj(k) within a multiple of π. An increase of the orbital angular momentum makes the single-particle mean field less important, and for this reason it makes sense to use an orbital angular momentum cutoff l_max.
One can discretize the energies ε of the continuum scattering states by putting the system into a large spherical box of radius R. The box boundary condition u_lj(k,R)=0 then forces the continuous spectrum to have discrete values ε_nlj=ħ^2/2 μ k_nlj^2. The parameter n denotes the number of nodes (counting the node at r=0) of the function u_nlj(r)=u_lj(k_nlj,r) in the interval [0,R). The relation between the number of nodes and the phase shift δ_lj can be obtained through the asymptotic expression (<ref>) and the boundary condition, giving
k_nlj R - l π/2 + δ_lj(k_nlj) = n_ljπ
If for fixed {l,j} one orders the states ε_nlj according to the number of nodes of u_nlj, then n_lj gives the number of levels (without counting the degeneracy) between the bottom of the single particle potential and the energy ε_nlj <cit.>. In the limit of the box going to infinity the spectrum ε_nlj becomes continuous and a magnitude like ∑_n f(k_n) changes to <cit.>
lim_R→∞∑_n f_lj(k_nlj) =
∫ dk ( dn_lj/dk) f_lj(k)
with dn_lj/dk=lim_Δ k → 0Δ n_lj/Δ k. Where Δ n_lj = n_lj(k+Δ k)-n_lj(k) gives the contribution of all states for which k lies between k and k+Δ k. Using the expression (<ref>) we get
dn_lj/dk = 1/π[
R + d δ_lj/dk]
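As a sanity check of this relation, for free s-wave states (δ_lj=0) the box momenta are k_n = nπ/R and the level density reduces to dn/dk = R/π; a few lines of Python confirm the counting:

```python
import numpy as np

R = 100.0  # fm, box radius
k_n = np.arange(1, 5000) * np.pi / R  # free s-wave box momenta (delta = 0)

k_lo, k_hi = 1.0, 1.5  # fm^-1 window
counted = np.sum((k_n >= k_lo) & (k_n < k_hi))
predicted = R * (k_hi - k_lo) / np.pi  # integral of dn/dk = R/pi
print(counted, round(predicted, 1))  # 16 vs 15.9
```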
The summation in (<ref>) includes negative-energy bound states and positive-energy discretized continuum states. Single particle energies in the core of Borromean systems are exclusively positive. Then, in the limit R→∞ we would have
lim_R →∞∑_n , ε_nlj>0 f_lj(k_nlj) = ∫_0^∞
g_lj^(total)(ε)
f_lj(ε)
dε
where we introduce the total partial wave energy density
g_lj^(total)(ε)
= lim_R →∞[
√(μ/2π^2ħ^2ε) R
+ 1/πdδ_lj/dε]
The first term, which diverges with the size of the box, corresponds to the density of free nucleons. This can be seen by carrying out an analogous analysis when the nuclear mean field is zero. In such a case, passing to the limit, we would have
lim_R →∞∑_n , ε^(0)_nlj>0 f_lj(k^(0)_nlj) = ∫_0^∞
g_lj^(free)(ε)
f_lj(ε)
dε
where ε_nlj^(0)=ħ^2/2 μ [k^(0)_nlj]^2 are the positive discrete eigenvalues (notice that the condition ε^(0)_nlj>0 is redundant for the free nucleons in the box) and
g_lj^(free)(ε) =
lim_R →∞√(μ/2π^2ħ^2ε) R
Taking advantage of the fact that g_lj^(total) and g_lj^(free) both have the same divergence as a function of the box radius, we use the following prescription in the limit of an infinite box (transition to the continuum) <cit.> for a fixed partial wave (lj),
lim_R →∞[
∑_n , ε_nlj>0 f_lj(k_nlj)
- ∑_n , ε^(0)_nlj>0 f_lj(k^(0)_nlj)
]
= ∫_0^∞ g_lj(ε) f_lj(ε) dε
where g_lj is the partial wave single particle level density with the free nucleon density subtracted
g_lj(ε) = 1/πd δ_lj/dε
i.e., the density so defined is the change in the density of single particle states at the energy ε due to the interaction <cit.>. With the usual convention lim_ε→∞δ_lj(ε)=0, the phase shift at zero energy is determined by the Levinson theorem as δ_lj(0)=n_ljπ <cit.>. The partial density g_lj(ε) may be either positive or negative depending on the sign of the derivative of the phase shift. For example, if for a specific { lj } there is no resonance and n_lj≠ 0, the phase shift will decrease monotonically from n_ljπ to zero <cit.> and the partial CSPLD will be negative for all energies up to infinity. The drawback that this "density" can be negative is compensated by the fact that it encodes the structure of the continuum, i.e. for a resonant partial wave g_lj(ε) is positive around the resonance energy and its amplitude is much bigger than for non-resonant partial waves.
The continuum single particle level density (CSPLD) results from the sum of each partial wave CSPLD g_lj,
g(ε) = ∑_lj (2j+1) g_lj(ε)
[1993Zhukov] M. V. Zhukov, B. V. Danilin, D. V. Fedorov, J. M. Bang, I. J. Thompson, J. S. Vaagen, Phys. Rep. 231 (1993) 151.
[2004Jensen] A. S. Jensen, K. Riisager, D. V. Fedorov, E. Garrido, Rev. Mod. Phys. 76 (2004) 215.
[2002Tilley] D. R. Tilley, C. M. Cheves, J. L. Godwin, G. M. Hale, H. M. Hofmann, J. H. Kelley, C. G. Sheu, H. R. Weller, Nucl. Phys. A 708 (2002) 3.
[2004Tilley] D. R. Tilley, J. H. Kelley, J. L. Godwin, D. J. Millener, J. Purcell, C. G. Sheu, H. R. Weller, Nucl. Phys. A 745 (2004) 155.
[1987Hansen] P. G. Hansen, B. Jonson, Europhys. Lett. 4 (1987) 409.
[1997Esbensen] H. Esbensen, G. F. Bertsch, K. Hencken, Phys. Rev. C 56 (1997) 3054.
[2005Hagino] K. Hagino, H. Sagawa, Phys. Rev. C 72 (2005) 044321.
[2014Fortunato] L. Fortunato, R. Chatterjee, J. Singh, A. Vitturi, Phys. Rev. C 90 (2014) 064301.
[2014Myo] T. Myo, Y. Kikuchi, H. Masui, K. Katō, Prog. Part. Nucl. Phys. 79 (2014) 1.
[2016Singh] J. Singh, L. Fortunato, A. Vitturi, R. Chatterjee, Eur. Phys. J. A 52 (2016) 209.
[1998Cobis] A. Cobis, D. V. Fedorov, A. S. Jensen, Phys. Rev. C 58 (1998) 1403.
[2014Redondo] C. Romero-Redondo, S. Quaglioni, P. Navrátil, G. Hupin, Phys. Rev. Lett. 113 (2014) 032503.
[2012Bacca] S. Bacca, N. Barnea, A. Schwenk, Phys. Rev. C 86 (2012) 034321.
[2009Forssen] C. Forssén, E. Caurier, P. Navrátil, Phys. Rev. C 79 (2009) 021303(R).
[1964Lane] A. M. Lane, Nuclear Theory. Pairing Force Correlations and Collective Motion (Benjamin, New York, 1964).
[2003Dean] D. J. Dean, M. Hjorth-Jensen, Rev. Mod. Phys. 75 (2003) 607.
[2001Barranco] F. Barranco, P. F. Bortignon, R. A. Broglia, G. Colò, E. Vigezzi, Eur. Phys. J. A 11 (2001) 385.
[2010Potel] G. Potel, F. Barranco, E. Vigezzi, R. A. Broglia, Phys. Rev. Lett. 105 (2010) 172502.
[2008Tanihata] I. Tanihata, M. Alcorta, D. Bandyopadhyay, et al., Phys. Rev. Lett. 100 (2008) 192502.
[2007Myo] T. Myo, K. Katō, H. Toki, K. Ikeda, Phys. Rev. C 76 (2007) 024305.
[2006Nakamura] T. Nakamura, A. M. Vinodkumar, T. Sugimoto, et al., Phys. Rev. Lett. 96 (2006) 252502.
[1937Beth] E. Beth, G. Uhlenbeck, Physica 4 (1937) 915.
[1998Kruppa] A. T. Kruppa, Phys. Lett. B 431 (1998) 273.
[2010Ono] T. Ono, Y. R. Shimizu, N. Tajima, S. Takahara, Phys. Rev. C 82 (2010) 034310.
[2012npaIdBetan] R. M. Id Betan, Nucl. Phys. A 879 (2012) 14.
[2012prcIdBetan] R. Id Betan, Phys. Rev. C 85 (2012) 064309.
[1980Maier] C. H. Maier, L. S. Cederbaum, W. Domcke, J. Phys. B 13 (1980) L119.
[1956Cooper] L. N. Cooper, Phys. Rev. 104 (1956) 1189.
[1980Lawson] R. D. Lawson, Theory of the Nuclear Shell Model (Clarendon Press, Oxford, 1980).
[2008IdBetan] R. M. Id Betan, G. G. Dussel, R. J. Liotta, Phys. Rev. C 78 (2008) 044325.
[1984Bonche] P. Bonche, S. Levit, D. Vautherin, Nucl. Phys. A 427 (1984) 278.
[2005IdBetan] R. Id Betan, R. J. Liotta, N. Sandulescu, T. Vertse, R. Wyss, Phys. Rev. C 72 (2005) 054322.
[1982Vertse] T. Vertse, K. F. Pál, Z. Balogh, Comput. Phys. Commun. 27 (1982) 309.
[1994Thompson] I. J. Thompson, M. V. Zhukov, Phys. Rev. C 49 (1994) 1904.
[1986Bohm] A. Bohm, Quantum Mechanics: Foundations and Applications (Springer-Verlag, New York, 1986).
[nndc] National Nuclear Data Center, http://www.nndc.gov/chart.
[tunl] TUNL Nuclear Data Evaluation Project, http://www.tunl.duke.edu/nucldata.
[1989Slaus] I. Slaus, Y. Akaishi, H. Tanaka, Phys. Rep. 173 (1989) 257.
[1995Ixaru] L. G. Ixaru, M. Rizea, T. Vertse, Comput. Phys. Commun. 85 (1995) 217.
[1996Liotta] R. J. Liotta, E. Maglione, N. Sandulescu, T. Vertse, Phys. Lett. B 367 (1996) 1.
[1972Migdal] A. B. Migdal, A. M. Perelomov, V. S. Popov, Sov. J. Nucl. Phys. 14 (1972) 488.
[2012Wang] M. Wang, G. Audi, A. H. Wapstra, F. G. Kondev, M. MacCormick, X. Xu, B. Pfeiffer, Chin. Phys. C 36 (2012) 1603.
[1997Aoi] N. Aoi, K. Yoneda, H. Miyatake, Y. Ogawa, H. Yamamoto, T. Ideguchi, E. Kishida, et al., Nucl. Phys. A 616 (1997) 181.
[2009Hagino] K. Hagino, H. Sagawa, T. Nakamura, S. Shimoura, Phys. Rev. C 80 (2009) 031301(R).
[1968Schiff] L. I. Schiff, Quantum Mechanics (McGraw-Hill, New York, 1968).
[2002Carvalho] C. A. A. de Carvalho, H. M. Nussenzveig, Phys. Rep. 364 (2002) 83.
[1982Newton] R. Newton, Scattering Theory of Waves and Particles (Springer, New York, 1982).
|
http://arxiv.org/abs/1701.07427v3 | 20170125185919 | Simplified dark matter models with two Higgs doublets: I. Pseudoscalar mediators | [
"Martin Bauer",
"Ulrich Haisch",
"Felix Kahlhoefer"
] | hep-ph | [
"hep-ph",
"hep-ex"
] |
Simplified dark matter models with two Higgs doublets: I. Pseudoscalar mediators

Martin Bauer^1, Ulrich Haisch^2,3 and Felix Kahlhoefer^4

^1 Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany
^2 Rudolf Peierls Centre for Theoretical Physics, University of Oxford, OX1 3NP Oxford, United Kingdom
^3 CERN, Theoretical Physics Department, CH-1211 Geneva 23, Switzerland
^4 DESY, Notkestraße 85, D-22607 Hamburg, Germany

bauer@thphys.uni-heidelberg.de, ulrich.haisch@physics.ox.ac.uk, felix.kahlhoefer@desy.de

Abstract: We study a new class of renormalisable simplified models for dark matter searches at the LHC that are based on two Higgs doublet models with an additional pseudoscalar mediator. In contrast to the spin-0 simplified models employed in analyses of Run I data these models are self-consistent, unitary and bounds from Higgs physics typically pose no constraints. Predictions for various missing transverse energy (E_T, miss) searches are discussed and the reach of the 13 TeV LHC is explored. It is found that the proposed models provide a rich spectrum of complementary observables that lead to non-trivial constraints. We emphasise in this context the sensitivity of the tt̅ + E_T, miss, mono-Z and mono-Higgs channels, which yield stronger limits than mono-jet searches in large parts of the parameter space. Constraints from spin-0 resonance searches, electroweak precision measurements and flavour observables are also derived and shown to provide further important handles to constrain and to test the considered dark matter models.

CERN-TH-2017-011, DESY-17-010
§ INTRODUCTION
Simplified models of dark matter (DM) and a single mediator overcome many of the shortcomings of DM effective field theories, but remain general enough to represent a large class of popular theories of DM (see the reviews <cit.> for a complete list of references). In particular, including contributions from on-shell production of the mediators allows to capture the full kinematics of DM production at colliders, making meaningful comparisons with bounds from direct and indirect detection experiments possible.
In simplified DM models the interactions between the mediators and the Standard Model (SM) fermions are usually written as gauge or Yukawa couplings of mass dimension four. In many cases these interactions are however only apparently renormalisable, because in a full SU(2)_L × U(1)_Y gauge-invariant theory they in fact arise from higher-dimensional operators or they signal the presence of additional particles or couplings that are needed to restore gauge invariance <cit.>. These features can lead to parameter regions which are theoretically inaccessible or to misleading/unphysical predictions often related to unitarity violation. Models in which the mediators mix with the SM bosons avoid such inconsistencies. The existing LEP and LHC measurements of the Z-boson and Higgs-boson properties however severely restrict the corresponding mixing angles, and as a result classic E_T, miss searches like mono-jets are typically not the leading collider constraints in this class of simplified DM models <cit.>.
In this article, we study a new class of simplified DM models for spin-0 mediators based on two Higgs doublet models (THDMs), which are an essential ingredient of many well-motivated theories beyond the SM. In contrast to inert THDMs, where the DM particle is the lightest neutral component of the second Higgs doublet and is stabilised by an ad-hoc Z_2 symmetry <cit.>, our focus is on the case where the DM candidate is a SM singlet fermion. To couple the DM particle to the SM, we introduce a new spin-0 mediator, which mixes dominantly with the scalar or pseudoscalar partners of the SM Higgs. In this way constraints from Higgs signal strength measurements <cit.> can be satisfied and one obtains a framework in which all operators are gauge invariant and renormalisable.
In what follows we will explore the phenomenology of pseudoscalar mediators, while scalar portals will be discussed in detail in an accompanying paper <cit.> (see also <cit.>). Pseudoscalar mediators have the obvious advantage of avoiding constraints from DM direct-detection experiments, so that the observed DM relic abundance can be reproduced in large regions of parameter space and LHC searches are particularly relevant to test these models. Similar investigations of THDM plus pseudoscalar simplified DM models have been presented in <cit.>. Whenever indicated we will highlight the similarities and differences between these and our work.
The mono-X phenomenology of the considered simplified pseudoscalar models turns out to be surprisingly rich. We examine the constraints from searches for j+ E_T, miss <cit.>, tt̅/b b̅ + E_T, miss <cit.>, Z+E_T, miss <cit.>, h+E_T, miss <cit.> and W+E_T, miss <cit.> and present projections for the 13 TeV LHC. In particular, we provide benchmark scenarios that are consistent with bounds from electroweak (EW) precision, flavour and Higgs observables including invisible decays <cit.>. For the simplified pseudoscalar model recommended by the ATLAS/CMS DM Forum (DMF) <cit.> constraints from mono-jet searches dominate throughout the parameter space <cit.>, whereas for the model considered here tt̅ + E_T, miss, mono-Z and mono-Higgs searches yield competitive bounds and often provide the leading constraints. See Figure <ref> for an illustration of the various E_T, miss processes that are of most interest in our simplified model. This complementarity of different searches is the result of the consistent treatment of the scalar sector, inducing gauge and trilinear scalar couplings of the mediator beyond the ones present in the DMF pseudoscalar model.
It is particularly appealing that the Z+ E_T, miss and h + E_T, miss signatures are strongest in the theoretically best motivated region of parameter space, where the couplings of the light Higgs are SM-like. In this region of parameter space, couplings of the new scalar states to SM gauge bosons are strongly suppressed and play no role in the phenomenology, leading to gluon-fusion dominated production and a very predictive pattern of branching ratios. In consequence, a complementary search strategy can be advised, with the exciting possibility to observe DM simultaneously in a number of different channels, some of which are not limited by systematic errors and can be improved by statistics even beyond 300 fb^-1 of luminosity. The importance of di-top resonance searches <cit.> to probe neutral spin-0 states with masses above the t t̅ threshold is also stressed, and it is pointed out that for model realisations with a light scalar partner of the SM Higgs, di-tau resonance searches should provide relevant constraints in the near future. We finally comment on the impact of bottom-quark (b b̅) initiated production.
This paper is structured as follows. In Section <ref> we describe the class of simplified DM models that we will study throughout our work, while Section <ref> contains a comprehensive review of the non-E_T, miss constraints that have to be satisfied in order to make a given model realisation phenomenologically viable. The partial decay widths and the branching ratios of the spin-0 particles arising in the considered simplified DM models are studied in Section <ref>. The most important features of the resulting E_T, miss phenomenology are described in Section <ref>. In Section <ref> we finally present the numerical results of our analyses providing summary plots of the mono-X constraints for several benchmark scenarios. The result-oriented reader might want to skip directly to this section. Our conclusions and a brief outlook are given in Section <ref>.
§ THDM PLUS PSEUDOSCALAR EXTENSIONS
In this section we describe the structure of the simplified DM model with a pseudoscalar mediator. We start with the scalar potential and then consider the Yukawa sector. In both cases we will point out which are the new parameters corresponding to the interactions in question.
§.§ Scalar potential
The tree-level THDM scalar potential that we will consider throughout this paper is given by the following expression (see for example <cit.>)
V_H = μ_1 H_1^† H_1 + μ_2 H_2^† H_2 + ( μ_3 H_1^† H_2 + h.c. ) + λ_1 ( H_1^† H_1 )^2 + λ_2 ( H_2^† H_2 )^2
+ λ_3 ( H_1^† H_1 ) ( H_2^† H_2 ) + λ_4 ( H_1^† H_2 ) ( H_2^† H_1 ) + [ λ_5 ( H_1^† H_2 )^2 + h.c. ] .
Here we have imposed a Z_2 symmetry under which H_1 → H_1 and H_2 → -H_2 to suppress flavour-changing neutral currents (FCNCs), but allowed for this discrete symmetry to be softly broken by the term μ_3 H_1^† H_2 + h.c. The vacuum expectation values (VEVs) of the Higgs doublets are given by ⟨ H_i ⟩ = (0,v_i/√(2))^T with v = √(v_1^2 + v_2^2)≃ 246 GeV and we define tanβ = v_2/v_1. To avoid possible issues with electric dipole moments, we assume that the mass-squared terms μ_j, the quartic couplings λ_k and the VEVs are all real and as a result the scalar potential as given in (<ref>) is CP conserving. The three physical neutral Higgses that emerge from V_H are in such a case both mass and CP eigenstates.
The most economic way to couple fermionic DM to the SM through pseudoscalar exchange is by mixing a CP-odd mediator P with the CP-odd Higgs that arises from (<ref>). This can be achieved by considering the following interaction terms
V_P = 1/2 m_P^2 P^2 + P ( i b_P H_1^† H_2 + h.c. ) + P^2 ( λ_P1 H_1^† H_1 + λ_P2 H_2^† H_2 ) ,
where m_P and b_P are parameters with dimensions of mass. We assume that V_P does not break CP and thus take b_P to be real in the following. In this case P does not develop a VEV and remains a pure CP eigenstate. Nevertheless, this term does lead to a soft breaking of the Z_2 symmetry. Notice that compared to <cit.> which include only the trilinear portal coupling b_P, we also allow for quartic portal interactions proportional to λ_P1 and λ_P2. A quartic self-coupling of the form P^4 has instead not been included in (<ref>), as it does not lead to any relevant effect in the observables studied in our paper.
The interactions in the scalar potential (<ref>) mix the neutral CP-even weak eigenstates and we denote the corresponding mixing angle by α. The portal coupling b_P appearing in (<ref>) instead mixes the two neutral CP-odd weak eigenstates with θ representing the associated mixing angle. The resulting CP-even mass eigenstates will be denoted by h and H, while in the CP-odd sector the states will be called A and a, where a denotes the extra degree of freedom not present in THDMs. The scalar spectrum also contains two charged mass eigenstates H^± of identical mass.
Diagonalising the mass-squared matrices of the scalar states leads to relations between the fundamental parameters entering V_H and V_P. These relations allow to trade the parameters m_P, μ_1, μ_2, μ_3, b_P, λ_1, λ_2, λ_4, λ_5 for sines and cosines of mixing angles, VEVs and the masses of the physical Higgses. This procedure ensures in addition that the scalar potential is positive definite and that the vacuum solution is an absolute minimum. In the broken EW phase the physics of (<ref>) and (<ref>) is hence fully captured by the angles α, β, θ, the EW VEV v, the quartic couplings λ_3, λ_P1, λ_P2 and the masses M_h, M_H, M_A, M_H^±, M_a. We will use these parameters as input in our analysis.
§.§ Yukawa sector
The couplings between the scalars and the SM fermions are restricted by the stringent experimental limits on flavour observables. A necessary and sufficient condition to avoid FCNCs associated to neutral Higgs tree-level exchange is that not more than one of the Higgs doublets couples to fermions of a given charge <cit.>. This so-called natural flavour conservation hypothesis is automatically enforced by the aforementioned Z_2 symmetry acting on the doublets, if the right-handed fermion singlets transform accordingly. The Yukawa couplings are explicitly given by
L_Y = - ∑_i=1,2 ( Q̅ Y_u^i H̃_i u_R + Q̅ Y_d^i H_i d_R + L̅ Y_ℓ^i H_i ℓ_R + h.c. ) .
Here Y_f^i are Yukawa matrices acting on the three fermion generations and we have suppressed flavour indices, Q and L are left-handed quark and lepton doublets, while u_R, d_R and ℓ_R are right-handed up-type quark, down-type quark and charged lepton singlets, respectively. Finally, H̃_i = ϵ H_i^∗ with ϵ denoting the two-dimensional antisymmetric tensor. The natural flavour conservation hypothesis can be satisfied by four discrete assignments, where by convention up-type quarks are always taken to couple to H_2:
Y_u^1 = Y_d^1 = Y_ℓ^1 =0 , (type I) ,
Y_u^1 = Y_d^2 = Y_ℓ^2 =0 , (type II) ,
Y_u^1 = Y_d^1 = Y_ℓ^2 =0 , (type III) ,
Y_u^1 = Y_d^2 = Y_ℓ^1 =0 , (type IV) .
The dependence of our results on the choice of the Yukawa sector will be discussed in some detail in the next section.
Taking DM to be a Dirac fermion χ a separate Z_2 symmetry under which χ→ -χ can be used to forbid a coupling of the form L̅H̃_1 χ_R + h.c. At the level of renormalisable operators this leaves
L_χ = - i y_χ P χ̅γ_5 χ ,
as the only possibility to couple the pseudoscalar mediator P to DM. In order to not violate CP we require the dark sector Yukawa coupling y_χ to be real. The parameter y_χ and the DM mass m_χ are further input parameters in our analysis.
§ ANATOMY OF THE PARAMETER SPACE
In this section we examine the anatomy of the parameter space of the model introduced above and discuss a number of important simplifications. We briefly explain the alignment/decoupling limit and describe the dependence of the predictions on the choice of Yukawa sector. The constraints on the mixing angles, quartic couplings and Higgs masses from spin-0 resonance searches, flavour physics, EW precision measurements, perturbativity and unitarity are also elucidated.
§.§ Alignment/decoupling limit
After EW symmetry breaking the kinetic terms of the Higgs fields H_i lead to interactions between the CP-even mass eigenstates and the massive EW gauge bosons. These interactions take the form
L⊃ ( sin (β - α ) h + cos (β - α ) H ) ( 2 M_W^2/v W_μ^+ W^- μ + M_Z^2/v Z_μ Z^μ ) .
In order to simplify the further analysis, we concentrate on the well-motivated alignment/decoupling limit of the THDM where α = β - π/2. In this case sin ( β - α ) = 1 meaning that the field h has SM-like EW gauge boson couplings. It can therefore be identified with the boson of mass M_h ≃ 125 GeV discovered at the LHC and the constraints from the Run I combination of the ATLAS and CMS measurements of the Higgs boson production and decay rates to SM final states <cit.> are readily fulfilled. Notice that in the alignment/decoupling limit the scalar H does not interact with W-boson or Z-boson pairs at tree level because in this limit one has cos ( β - α ) = 0.
§.§ Yukawa assignments
Working in the alignment/decoupling limit the fermion-scalar interactions most relevant for the further discussion are given by
L⊃ - y_t/√(2) t̅ [ h + ξ_t^ M H - i ξ_t^ M ( cosθ A - sinθ a ) γ_5 ] t
-∑_f=b,τy_f/√(2) f̅ [ h + ξ_f^ M H + i ξ_f^ M ( cosθ A - sinθ a ) γ_5 ] f
- y_t/√(2) V_tb ξ_t^ M H^+ t̅_R b_L + y_b/√(2) V_tb ξ_b^ M H^+ t̅_L b_R + h.c.
- i y_χ ( sinθ A + cosθ a ) χ̅γ_5 χ ,
where y_f = √(2) m_f/v denote the SM Yukawa couplings and V_ij are the elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. The couplings ξ_f^ M encode the dependence on the choice of Yukawa sector (<ref>). In terms of tanβ one has
ξ_t^ I = ξ_b^ I = ξ_τ^ I = -cotβ , (type I) ,
ξ_t^ II = -cotβ , ξ_b^ II = ξ_τ^ II = tanβ , (type II) ,
ξ_t^ III = ξ_b^ III = -cotβ , ξ_τ^ III = tanβ , (type III) ,
ξ_t^ IV = ξ_τ^ IV = -cotβ , ξ_b^ IV = tanβ , (type IV) .
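For quick numerical scans the four assignments above can be collected in a small helper; this is only an illustrative encoding of the list, with cot β written as 1/tan β:

```python
def xi(yukawa_type, tan_beta):
    """Coupling modifiers (xi_t, xi_b, xi_tau) for the four Yukawa types."""
    cot = 1.0 / tan_beta
    table = {
        "I":   (-cot, -cot, -cot),
        "II":  (-cot, tan_beta, tan_beta),
        "III": (-cot, -cot, tan_beta),
        "IV":  (-cot, tan_beta, -cot),
    }
    return table[yukawa_type]

print(xi("II", 0.5))  # small tan(beta): top coupling enhanced, b/tau suppressed
```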
Since the production of the pseudoscalar mediator a as well as pp → h, H, A is driven by top-quark loops that enter the gluon-fusion (gg) channel at the LHC (see for instance <cit.> for a discussion in the context of E_T, miss searches) we will in the following focus on the region of small tanβ. In this limit the couplings of H,A,a to down-type quarks and charged leptons in (<ref>) are strongly Yukawa suppressed irrespectively of the chosen Yukawa assignment (<ref>). As a result existing bounds on the neutral scalar masses from flavour observables such as B_s →μ^+ μ^- that are known to receive tanβ enhanced corrections <cit.> are within experimental limits <cit.> even for a light scalar spectrum.
§.§ Di-tau searches
In order to understand whether the existing LHC searches for heavy neutral Higgses in fermionic final states such as f f̅ = τ^+ τ^-, b b̅ pose constraints on the low tanβ region of our simplified model, it is important to realise that while the pseudoscalars A and a couple both to DM, the heavy scalar H does not, as can be seen from (<ref>). If the channels A/a →χχ̅ are open, the discovery potential for H → f f̅ is therefore generically larger than that for the corresponding pseudoscalar modes. In fact, the constraints from pp → H → f f̅ are most stringent for model realisations with M_H < 2 m_t and M_a > max (M_H - M_Z, M_H/2 ), so that the decays H → tt̅, H → a a and H → a Z are kinematically forbidden and in consequence H is forced to decay to light SM fermions (see Section <ref>).
The typical restrictions that result from LHC searches for heavy scalars can be illustrated by considering M_H = 300 GeV and employing the 95% confidence level (CL) limit σ ( p p → H ) BR ( H →τ^+ τ^- ) < 0.4 pb <cit.> that is based on 13 fb^-1 of 13 TeV data. Using the next-to-next-to-next-to-leading order results <cit.> for inclusive H production in gluon fusion, we then find that the current di-tau searches only exclude a narrow sliver of parameters in the M_a–tanβ plane with 0.55 ≲tanβ≲ 0.65 and M_a ≳ 210 GeV in the case of a Yukawa sector of type II. A reduction of the quoted upper limit on the production cross section times branching ratio to 0.2 pb would however improve the range of excluded tanβ values to 0.3 ≲tanβ≲ 1.2. As we will see in Section <ref>, such a constraint would be very valuable because probing models with tanβ = O (1) and M_H ≃ M_a ≃ 300 GeV turns out to be difficult by other means.
§.§ Di-top searches
Heavy scalar and pseudoscalar bosons decaying dominantly into top-quark pairs can be searched for by studying the resulting t t̅ invariant mass spectra m_t t̅. In contrast to di-top searches for spin-1 or spin-2 states, a peak in the m_t t̅ distribution that one generically expects in the narrow-width approximation (NWA) is however not the only signature of a spin-0 resonance in this case. Indeed, the gg → H/A signal will interfere with the QCD t t̅ background which at the LHC is mainly generated by the gluon-fusion channel gg → t t̅. The signal-background interference will depend on the CP nature of the intermediate spin-0 boson, its mass and its total decay width. The observed interference pattern can be either constructive or destructive, leading to a rather complex signature with a peak-dip structure in the m_t t̅ spectrum <cit.>. The pp → H/A → t t̅ channel provides hence an interesting but challenging opportunity for hadron colliders to search for additional spin-0 bosons (see for instance <cit.> for recent phenomenological discussions).
The first LHC analysis that takes into account interference effects between the signal process gg → H/A → t t̅ and the SM background gg → t t̅ is the ATLAS search <cit.>. It is based on 20.3 fb^-1 of 8 TeV LHC data and considers the m_t t̅ spectrum in final states with a single charged lepton (electron or muon), large E_T, miss and at least four jets. The search results are interpreted in the context of a pure THDM of type II for two different mass points and employ the alignment/decoupling limit, i.e. sin ( β - α ) = 1. For a neutral scalar H (pseudoscalar A) with a mass of 500 GeV, the ATLAS analysis excludes the parameter values tanβ < 0.45 (tanβ < 0.85) at the 95% CL, while for the 750 GeV mass point no meaningful constraint on tanβ can be set. Recasting these limits into bounds on the parameter space of spin-0 simplified DM models is straightforward <cit.> and we will analyse the resulting restrictions on our model in Section <ref>.
§.§ Flavour constraints
Indirect constraints on the charged Higgs-boson mass M_H^± arise from Z → b b̅ <cit.>, B → X_s γ <cit.> and B_q–B̅_q mixing <cit.> since the latter processes receive corrections from the H^+ t̅_R b_L + h.c. and H^+ t̅_L b_R + h.c. terms in (<ref>). We find that B → X_s γ provides the strongest indirect constraint on M_H^± for small tanβ values in models of type I and III at present, while B_s–B̅_s oscillations represent the leading indirect constraint in the other two cases. For M_H^± = 750 GeV we obtain the bound tanβ≳ 0.8 from a combination of B-meson physics observables irrespective of the choice of the Yukawa sector. A model-independent lower limit of tanβ≳ 0.3 can also be obtained from the requirement that the top-quark Yukawa coupling remains perturbative <cit.>. The latest LHC search limits on the charged Higgs mass in the pp → tbH^± (H^±→ tb) channel <cit.> are satisfied for tanβ≳ 0.2 if M_H^± = 750 GeV is assumed, and therefore provide no relevant constraint.
§.§ EW precision constraints
A scalar potential with two doublets such as the one introduced in (<ref>) leads to additional Higgs interactions compared to the SM, which can violate the custodial symmetry present in the SM Higgs sector. It can be shown <cit.> that the tree-level potential V_H is custodially invariant for M_A = M_H^± or M_H = M_H^±. Only in these two cases can H or A have a sizeable mass splitting from the rest of the Higgses without being in conflict with EW precision measurements, most importantly Δρ. Since the potential (<ref>) mixes the pseudoscalar degree of freedom in H_i with P, in the theory described by V_H + V_P there are however additional sources of custodial symmetry breaking compared to the case of the pure THDM. In the alignment/decoupling limit and taking M_A = M_H^±, we find that the extended scalar sector gives rise to the following one-loop correction
Δρ = 1/(4 π)^2 M_H^±^2/v^2 [ 1 + f (M_H, M_a, M_H^± ) + f (M_a, M_H, M_H^± ) ] sin^2 θ ,
with
f (m_1, m_2, m_3 ) = m_1^4 ( m_2^2 - m_3^2 )/m_3^2 (m_1^2 - m_2^2 ) ( m_1^2 - m_3^2 )ln ( m_1^2/m_3^2 ) .
Notice that Δρ→ 0 in the limit sinθ→ 0 in which the two CP-odd weak eigenstates are also mass eigenstates or if the scalar mass spectrum is fully degenerate. In the alignment/decoupling limit with M_H = M_H^±, the custodial symmetry is instead not broken by V_H + V_P and as a result one has Δρ = 0 at the one-loop level.
From the above discussion it follows that only cases with M_A = M_H^± are subject to the constraints from the EW precision measurements, while scenarios with M_H = M_H^± are not. In order to derive the resulting constraints in the former case, we employ the 95% CL bound
Δρ∈ [-1.2, 2.4 ] · 10^-3 ,
which corresponds to the value extracted in <cit.> from a simultaneous determination of the Peskin-Takeuchi parameters S, T and U. The fact that (<ref>) is proportional to the product of the mass differences M_H^± - M_H and M_H^± - M_a as well as to sin^2 θ implies that the existing EW precision data allow one to set stringent bounds on sinθ if the relevant mass splittings in the scalar sector are sizeable. Taking for instance M_H^± = 750 GeV and M_a = 65 GeV, we find that for M_H = 500 GeV (M_H = 300 GeV) the inequality sinθ < 0.35 (sinθ < 0.25) has to be satisfied in order to be compatible with (<ref>). We will see in Section <ref> that the restrictions on sinθ can have a visible impact on the decay pattern of the scalar H, which in turn affects the mono-Z phenomenology discussed in Section <ref>.
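The quoted restrictions are straightforward to reproduce numerically. The following minimal Python sketch evaluates (<ref>) and determines the largest mixing angle compatible with the upper end of (<ref>); the masses are those used in the example above, and v = 246 GeV denotes the EW vacuum expectation value.

import numpy as np

v = 246.0  # EW vacuum expectation value in GeV

def f(m1, m2, m3):
    # loop function entering the one-loop correction to Delta rho
    return (m1**4 * (m2**2 - m3**2)
            / (m3**2 * (m1**2 - m2**2) * (m1**2 - m3**2))
            * np.log(m1**2 / m3**2))

def delta_rho(MH, Ma, MHpm, sin_theta):
    # one-loop Delta rho in the alignment/decoupling limit with M_A = M_H+-
    return (MHpm**2 / (16 * np.pi**2 * v**2)
            * (1.0 + f(MH, Ma, MHpm) + f(Ma, MH, MHpm)) * sin_theta**2)

# largest sin(theta) compatible with Delta rho < 2.4e-3
for MH in (500.0, 300.0):
    sin_theta_max = np.sqrt(2.4e-3 / delta_rho(MH, 65.0, 750.0, 1.0))
    print(MH, round(float(sin_theta_max), 2))

Running the sketch returns sinθ ≤ 0.34 for M_H = 500 GeV and sinθ ≤ 0.25 for M_H = 300 GeV, in agreement with the numbers quoted above.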
§.§ Perturbativity and unitarity
Perturbativity <cit.> and unitarity <cit.> also put restrictions on the scalar masses and the magnitudes and signs of the quartic couplings. In our numerical analysis we will restrict our attention to the parameter space that satisfies M_H, M_A, M_a ≤ M_H^± = O ( 1 TeV) and always keep λ_3, λ_P1 and λ_P2 of O (1) or below. For such input parameter choices all constraints discussed in this section are satisfied if tanβ is not too far below 1. We also only consider parameters for which the total decay widths of H and A are sufficiently small so that the NWA applies, i.e. Γ_i≲ M_i/3 for i=H,A. This requirement sets an upper limit on the mass of the charged Higgs boson that is often stronger than bounds from perturbativity.
§ PARTIAL DECAY WIDTHS AND BRANCHING RATIOS
This section is devoted to the discussion of the partial decay widths and the branching ratios of the spin-0 particles arising in the simplified DM model introduced in Section <ref>. For concreteness we will focus on the alignment/decoupling limit of the theory. We will furthermore pay special attention to the parameter space with a light DM particle, small values of tanβ and scalar spectra where the new pseudoscalar a and the scalar h are the lightest degrees of freedom.
§.§ Lighter pseudoscalar a
As a result of CP conservation the field a has no couplings of the form aW^+W^-, aZZ and ahh. In contrast the ahZ vertex is allowed by CP symmetry but vanishes in the alignment/decoupling limit. At tree level the pseudoscalar a can thus only decay into DM particles and SM fermions. The corresponding partial decay widths are given by
Γ ( a →χχ̅ ) = y_χ^2/8π M_a β_χ/acos^2 θ ,
Γ ( a → f f̅ ) = N_c^f ( ξ_f^ M)^2/8πm_f^2/v^2 M_a β_f/asin^2 θ ,
where β_i/a = √(1 - τ_i/a) is the velocity of the particle i in the rest frame of the final-state pair and we have defined τ_i/a=4 m_i^2/M_a^2. Furthermore N_c^f = 3 (1) denotes the relevant colour factor for quarks (leptons) and the explicit expressions for the couplings ξ_f^ M can be found in (<ref>). At the loop level the pseudoscalar a can also decay to gauge bosons. The largest partial decay width is the one to gluon pairs. It takes the form
Γ (a → gg ) = α_s^2/32 π^3 v^2 M_a^3 | ∑_q=t,b,cξ_q^ M f(τ_q/a) |^2 sin^2 θ ,
with
f (τ) = τarctan^2 ( 1/√(τ -1 ) ) .
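For orientation, the partial widths given above can be evaluated with a few lines of Python. The sketch below is illustrative rather than exhaustive: it assumes a type II Yukawa assignment with ξ_t = ξ_c = cotβ and ξ_b = ξ_τ = tanβ (only (ξ_f^ M)^2 enters, so the signs are irrelevant), uses representative values for the fermion masses and α_s, and analytically continues the loop function f(τ) for τ < 1 in the standard way.

import numpy as np

v, alpha_s = 246.0, 0.118
mf = {'t': 173.0, 'b': 4.18, 'c': 1.27, 'tau': 1.777}
Nc = {'t': 3, 'b': 3, 'c': 3, 'tau': 1}

def f_loop(tau):
    # a -> gg loop function; analytic continuation for tau < 1 (M_a > 2 m_q)
    if tau >= 1.0:
        return tau * np.arctan(1.0 / np.sqrt(tau - 1.0))**2
    x = np.sqrt(1.0 - tau)
    return -tau / 4.0 * (np.log((1.0 + x) / (1.0 - x)) - 1j * np.pi)**2

def widths_a(Ma, tanb, sin_th, y_chi=1.0, m_chi=1.0):
    cos_th2 = 1.0 - sin_th**2
    # type II Yukawa assignment assumed: xi_t = xi_c = cot(beta), xi_b = xi_tau = tan(beta)
    xi = {'t': 1.0 / tanb, 'c': 1.0 / tanb, 'b': tanb, 'tau': tanb}
    beta = lambda m: np.sqrt(max(1.0 - 4.0 * m**2 / Ma**2, 0.0))
    G = {'chichi': y_chi**2 / (8 * np.pi) * Ma * beta(m_chi) * cos_th2}
    for f in mf:
        G[f] = (Nc[f] * xi[f]**2 / (8 * np.pi) * mf[f]**2 / v**2
                * Ma * beta(mf[f]) * sin_th**2)
    loop = sum(xi[q] * f_loop(4.0 * mf[q]**2 / Ma**2) for q in ('t', 'b', 'c'))
    G['gg'] = alpha_s**2 / (32 * np.pi**3 * v**2) * Ma**3 * abs(loop)**2 * sin_th**2
    return G

G = widths_a(Ma=400.0, tanb=1.0, sin_th=1.0 / np.sqrt(2))
tot = sum(G.values())
print({k: round(w / tot, 3) for k, w in G.items()})

For M_a = 400 GeV, tanβ = 1 and sinθ = 1/√2 the printed branching ratios are dominated by the χχ̅ and t t̅ channels, in line with the discussion that follows.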
For small tanβ and non-zero values of sinθ the couplings of a to DM and top quarks dominate over all other couplings. As a result, the decay pattern of a is in general very simple. This is illustrated in the panels of Figure <ref> for two different choices of parameter sets. The left panel shows the branching ratio of a for a very light DM particle with m_χ = 1 GeV. One observes that below the t t̅ threshold one has BR (a →χχ̅) = 100 % while for M_a > 2 m_t both decays to DM and top-quarks pairs are relevant. In fact, sufficiently far above the t t̅ threshold one obtains BR (a →χχ̅)/ BR (a → t t̅) ≃ 0.7 y_χ^2 tan^2 β/tan^2 θ independent of the specific realisation of the Yukawa sector. In the right panel we present our results for a DM state of m_χ = 100 GeV. In this case we see that below the χχ̅ threshold the pseudoscalar a decays dominantly into bottom-quark pairs but that also the branching ratios to taus and gluons exceed the percent level. Compared to the left plot one also observes that in the right plot the ratio BR (a →χχ̅)/ BR (a → t t̅) is significantly larger for M_a > 2 m_t due to the different choice of sinθ.
§.§ Lighter scalar h
For sufficiently heavy pseudoscalars a the decay pattern of h resembles that of the SM Higgs boson in the alignment/decoupling limit. For M_a < M_h/2 on the other hand decays to two on-shell a mediators are possible. The corresponding partial decay width reads
Γ ( h → a a ) = 1/32 π g_haa^2 M_h β_a/h ,
with
g_haa = 1/M_h v [ ( M_h^2 - 2 M_H^2 + 4 M_H^±^2 - 2 M_a^2 - 2 λ_3 v^2 ) sin^2 θ
- 2 ( λ_P1cos^2 β + λ_P2sin^2 β ) v^2 cos^2 θ ] .
Notice that the haa coupling contains terms proportional to both sin^2 θ and cos^2 θ. These contributions result from the trilinear and quartic couplings in the scalar potential (<ref>), respectively. In our THDM plus pseudoscalar extension, h → aa decays are even possible in the limit θ→ 0, which is not the case in the simplified model considered in <cit.>.
Since the total decay width of the SM Higgs is only about 4 MeV, three-body decays of h into final states with a single a can also be relevant in the mass range M_h/2 < M_a ≲ M_h. Phenomenologically the most important three-body decay is the one where the a is accompanied by a pair of DM particles, but decays into an a and a pair of SM fermions are possible as well. The corresponding partial decay widths are given by
Γ ( h → a χχ̅ ) = y_χ^2/32 π^3 g_haa^2 M_h β_χ/a g(τ_a/h) cos^2 θ ,
Γ ( h → a f f̅ ) = N_c^f ( ξ_f^ M)^2/32 π^3m_f^2/v^2 g_haa^2 M_h β_f/a g(τ_a/h) sin^2 θ ,
with <cit.>
g ( τ ) = 1/8 ( τ - 4 ) [ 4 - ln ( τ/4 ) ] - (5 τ - 4)/(4 √(τ-1)) [ arctan ( (τ-2)/(2 √(τ-1)) ) - arctan ( 1/√(τ-1) ) ] .
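A minimal transcription of the h → a a and h → a χχ̅ widths into Python reads as follows; the parameter point (M_H = M_H^± = 750 GeV, sinθ = 0.35, tanβ = 1, vanishing quartics) is an illustrative assumption, and the three-body expression is only used in its domain of validity M_h/2 < M_a < M_h.

import numpy as np

v, Mh = 246.0, 125.0

def g_haa(Ma, MH, MHpm, sin_th, tanb, lam3=0.0, lamP1=0.0, lamP2=0.0):
    # haa coupling as given above
    b = np.arctan(tanb)
    return ((Mh**2 - 2 * MH**2 + 4 * MHpm**2 - 2 * Ma**2 - 2 * lam3 * v**2) * sin_th**2
            - 2 * (lamP1 * np.cos(b)**2 + lamP2 * np.sin(b)**2) * v**2
            * (1.0 - sin_th**2)) / (Mh * v)

def g3(tau):
    # three-body phase-space function g(tau) quoted above (tau = 4 M_a^2 / M_h^2 > 1)
    s = np.sqrt(tau - 1.0)
    return (0.125 * (tau - 4.0) * (4.0 - np.log(tau / 4.0))
            - (5.0 * tau - 4.0) / (4.0 * s)
            * (np.arctan((tau - 2.0) / (2.0 * s)) - np.arctan(1.0 / s)))

def width_h_aa(Ma, **p):
    # two-body h -> aa width, open for M_a < M_h/2
    return g_haa(Ma, **p)**2 * Mh * np.sqrt(1.0 - 4.0 * Ma**2 / Mh**2) / (32.0 * np.pi)

def width_h_achichi(Ma, y_chi=1.0, m_chi=1.0, **p):
    # three-body h -> a chi chibar width for M_h/2 < M_a < M_h
    beta_chi = np.sqrt(1.0 - 4.0 * m_chi**2 / Ma**2)  # beta_{chi/a} as defined above
    return (y_chi**2 / (32.0 * np.pi**3) * g_haa(Ma, **p)**2 * Mh
            * beta_chi * g3(4.0 * Ma**2 / Mh**2) * (1.0 - p['sin_th']**2))

p = dict(MH=750.0, MHpm=750.0, sin_th=0.35, tanb=1.0)
print(width_h_aa(50.0, **p), width_h_achichi(80.0, **p))  # widths in GeV

With these inputs one finds Γ ( h → a a ) = O (10 GeV) at M_a = 50 GeV and Γ ( h → a χχ̅ ) ≃ 0.05 GeV at M_a = 80 GeV, i.e. far above the roughly 4 MeV width of the SM Higgs, which illustrates the statements made below about the total width and the invisible branching ratio of h.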
In Figure <ref> we show the branching ratios of the SM Higgs h for two different values of the DM mass. We observe that for a light pseudoscalar mediator a one has in both cases BR ( h → a χχ̅ ) = 100 %. In fact, the total decay width of the lighter scalar h exceeds 3 GeV for masses M_a ≲ 70 GeV. Such large values of Γ_h are in conflict with the model-independent upper limits on the total decay width of the Higgs as measured by both ATLAS and CMS in LHC Run I <cit.>. Notice that since the pseudoscalar a decays to DM pairs with a branching ratio of 100% for the considered values of m_χ one has BR ( h → a χχ̅ ) = BR ( h → 2 χ 2 χ̅ ). This implies that for light DM the simplified model presented in Section <ref> is subject to the constraints arising from invisible decays of the Higgs boson <cit.>. We will analyse the resulting restrictions on the parameter space in Section <ref>. The right panel finally illustrates that in cases where m_χ is close to a quarter of the SM Higgs mass decays such as h → a b b̅ with a →χχ̅ can also have branching ratios of a few percent (or more) for a narrow range of M_a values. Notice that for the choice tanβ = 1 used in the figure the result for BR ( h → a b b̅ ) does not depend on the particular Yukawa assignment.
§.§ Heavier scalar H
In the alignment/decoupling limit of the pseudoscalar extensions of the THDM the heavier scalar H does not couple to W^+ W^- and Z Z pairs. In addition the Hhh vertex vanishes. Under the assumption that M_H > M_a and taking A to be sufficiently heavy, the scalar H can hence decay only to SM fermions or the aa and aZ final states at tree level. The corresponding partial decay widths are
Γ ( H → f f̅ ) = N_c^f ( ξ_f^ M)^2/8πm_f^2/v^2 M_H β_f/H^3 ,
Γ ( H → a a ) = 1/32 π g_Haa^2 M_H β_a/H ,
Γ ( H → a Z ) = 1/16 πλ^3/2 (M_H, M_a, M_Z)/M_H^3 v^2 sin^2 θ ,
with
g_Haa = 1/M_H v [ cot ( 2 β ) ( 2 M_h^2 - 4 M_H^2 + 4 M_H^±^2 - 2 λ_3 v^2 ) sin^2 θ
+ sin ( 2 β ) (λ_P1-λ_P2 ) v^2 cos^2 θ ] ,
denoting the Haa coupling. We have furthermore introduced
λ (m_1, m_2, m_3) = ( m_1^2 - m_2^2 - m_3^2 )^2 - 4 m_2^2 m_3^2 ,
which characterises the two-body phase space for three massive particles. Notice that the appearance of λ_P1 and λ_P2 in the partial decay width Γ ( H → a a ) indicates again a qualitative difference between the scalar interactions considered in <cit.> and the more general potential (<ref>). At the one-loop level the heavier scalar H can in addition decay to gluons and other gauge bosons, but the associated branching ratios are very suppressed and thus have no impact on our numerical results.
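The three partial widths above can be checked numerically with the short sketch below. The parameter point (M_H = 500 GeV, M_H^± = 750 GeV, sinθ = 0.35, tanβ = 2, λ_P1 - λ_P2 = 1) and the type II choice ξ_t = cotβ are illustrative assumptions; loop-induced and lighter-fermion channels are neglected, as argued above.

import numpy as np

v, mt, MZ, Mh = 246.0, 173.0, 91.19, 125.0

def lam(m1, m2, m3):
    # two-body phase-space function lambda(m1, m2, m3) defined above
    return (m1**2 - m2**2 - m3**2)**2 - 4.0 * m2**2 * m3**2

def widths_H(MH, Ma, MHpm, sin_th, tanb, lam3=0.0, lamP1=0.0, lamP2=0.0):
    b = np.arctan(tanb)
    # Haa coupling with the cot(2 beta) factor from the expression above
    g_Haa = ((2.0 * Mh**2 - 4.0 * MH**2 + 4.0 * MHpm**2 - 2.0 * lam3 * v**2)
             / np.tan(2.0 * b) * sin_th**2
             + np.sin(2.0 * b) * (lamP1 - lamP2) * v**2 * (1.0 - sin_th**2)) / (MH * v)
    G = {}
    beta_t = np.sqrt(max(1.0 - 4.0 * mt**2 / MH**2, 0.0))
    G['tt'] = 3.0 / (8 * np.pi) * (1.0 / tanb)**2 * mt**2 / v**2 * MH * beta_t**3
    if 2.0 * Ma < MH:
        G['aa'] = g_Haa**2 * MH * np.sqrt(1.0 - 4.0 * Ma**2 / MH**2) / (32 * np.pi)
    if Ma + MZ < MH:
        G['aZ'] = lam(MH, Ma, MZ)**1.5 / (16 * np.pi * MH**3 * v**2) * sin_th**2
    return G

G = widths_H(MH=500.0, Ma=200.0, MHpm=750.0, sin_th=0.35, tanb=2.0, lamP1=1.0)
tot = sum(G.values())
print({k: round(w / tot, 2) for k, w in G.items()})

For these inputs the t t̅, aZ and aa channels all come out at a similar level (roughly 0.43, 0.39 and 0.17), exemplifying the interplay of the different terms discussed below.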
The dominant branching ratios of H as a function of M_a are displayed in Figure <ref> for two parameter sets. In the left panel the case of a scalar H with sinθ = 1/√(2) and M_H = 750 GeV is shown. One observes that for M_a ≲ 350 GeV the decay mode H → aZ has the largest branching ratio, while for heavier a the H → t t̅ channel represents the leading decay. Notice that for model realisations where the decay channel H → a Z dominates, interesting mono-Z signatures can be expected <cit.>. We will come back to this point in Section <ref>. The decay pattern of H is however strongly dependent on the mass of H since for M_H < M_H^± the mixing angle θ is constrained to be small by EW precision measurements (see Section <ref>). This behaviour is easy to understand from (<ref>), which in the limit of small sinθ, tanβ = O ( 1) and large M_H implies that Γ ( H → t t̅ ) ∝ m_t^2/(M_H tan^2 β), Γ ( H → a a ) ∝ v^4/M_H^3 ( λ_P1 - λ_P2 )^2 and Γ ( H → a Z ) ∝ M_H sin^2 θ. For M_H > 2 m_t the decay mode H → t t̅ can hence dominate over the whole M_a range of interest. This feature is illustrated on the right-hand side of the figure for sinθ = 0.35 and M_H = 500 GeV. One also sees from this panel that the branching ratio of H → aa can be relevant as it does not tend to zero in the sinθ→ 0 limit if the combination λ_P1 - λ_P2 of quartic couplings is non-zero. For tanβ≳ 2 and λ_P1 - λ_P2≳ 1, BR ( H → a a ) can even be the largest branching ratio for M_a < M_H/2. This happens because the terms proportional to sin^2 θ and cos^2 θ in (<ref>) both give a sizeable contribution to the Haa coupling, while the H t t̅ coupling is suppressed by 1/tan^2 β.
§.§ Heavier pseudoscalar A
For M_A > M_a and assuming that decays to H are kinematically inaccessible, the pseudoscalar A can only decay to DM, SM fermions and the ah final state at tree level. In the alignment/decoupling limit the corresponding partial decay widths take the form
Γ ( A →χχ̅ ) = y_χ^2/8π M_A β_χ/Asin^2 θ ,
Γ ( A → f f̅ ) = N_c^f ( ξ_f^ M)^2/8πm_f^2/v^2 M_A β_f/Acos^2 θ ,
Γ ( A → a h ) = 1/16 πλ^1/2 (M_A, M_a, M_h)/M_A g_Aah^2 ,
with
g_Aah = 1/M_A v [ M_h^2 - 2 M_H^2 - M_A^2 + 4 M_H^±^2 - M_a^2 - 2 λ_3 v^2
+ 2 ( λ_P1cos^2 β + λ_P2sin^2 β ) v^2 ] sinθcosθ ,
denoting the Aah coupling, and the analytic expression for the two-body phase-space function λ (m_1, m_2, m_3) can be found in (<ref>). Like in the case of H, loop-induced decays of the heavier pseudoscalar A can be neglected for all practical purposes.
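As a numerical illustration, the sketch below evaluates the three partial widths of A quoted above for M_A = 750 GeV; the choices M_H = M_H^± = 750 GeV, vanishing quartic couplings and a type II ξ_t = cotβ are assumptions made for definiteness.

import numpy as np

v, mt, Mh = 246.0, 173.0, 125.0

def lam(m1, m2, m3):
    # two-body phase-space function lambda(m1, m2, m3) defined in the text
    return (m1**2 - m2**2 - m3**2)**2 - 4.0 * m2**2 * m3**2

def widths_A(MA, Ma, MH, MHpm, sin_th, tanb, y_chi=1.0, m_chi=1.0,
             lam3=0.0, lamP1=0.0, lamP2=0.0):
    b = np.arctan(tanb)
    cos_th2 = 1.0 - sin_th**2
    G = {'chichi': y_chi**2 / (8 * np.pi) * MA
                   * np.sqrt(1.0 - 4.0 * m_chi**2 / MA**2) * sin_th**2}
    beta_t = np.sqrt(max(1.0 - 4.0 * mt**2 / MA**2, 0.0))
    G['tt'] = 3.0 / (8 * np.pi) * (1.0 / tanb)**2 * mt**2 / v**2 * MA * beta_t * cos_th2
    if Ma + Mh < MA:
        g_Aah = ((Mh**2 - 2.0 * MH**2 - MA**2 + 4.0 * MHpm**2 - Ma**2 - 2.0 * lam3 * v**2
                  + 2.0 * (lamP1 * np.cos(b)**2 + lamP2 * np.sin(b)**2) * v**2)
                 * sin_th * np.sqrt(cos_th2) / (MA * v))
        G['ah'] = np.sqrt(lam(MA, Ma, Mh)) / (16 * np.pi * MA) * g_Aah**2
    return G

G = widths_A(MA=750.0, Ma=200.0, MH=750.0, MHpm=750.0, sin_th=1.0 / np.sqrt(2), tanb=1.0)
tot = sum(G.values())
print({k: round(w / tot, 2) for k, w in G.items()})

The printed numbers, approximately 0.45, 0.31 and 0.24, reproduce the hierarchy BR ( A → ah ) > BR ( A → t t̅ ) > BR ( A →χχ̅ ) with all branching ratios above 10%, as discussed next.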
In Figure <ref> we present our results for the branching ratios of the pseudoscalar A as a function of M_a for two different parameter choices. The left panel illustrates the case M_A = 750 GeV and one sees that for such an A the branching ratios are all above 10% and the hierarchy BR ( A → ah ) > BR ( A → t t̅ ) > BR ( A →χχ̅ ) is observed for M_a ≲ 200 GeV. As shown on the right-hand side of the figure, this hierarchy not only remains intact but is even more pronounced for a moderately heavy A until the threshold M_a = M_A - M_h is reached. For larger M_a values only decays to χχ̅ and t t̅ final states matter and the ratio of their branching ratios is approximately given by BR (A →χχ̅)/ BR (A → t t̅) ≃ 0.9 y_χ^2 tan^2 βtan^2 θ irrespective of the particular Yukawa assignment. Notice that a sizeable A → ah branching ratio is a generic prediction in the THDM plus pseudoscalar extensions with small tanβ, since the charged Higgs has to be quite heavy in this case in order to avoid the bounds from B → X_s γ and/or B_s-meson mixing. Since a →χχ̅ is typically the dominant decay mode of the lighter pseudoscalar a, appreciable mono-Higgs signals are hence a firm prediction in a certain region of parameter space of our simplified model. This point will be further explained in Section <ref>.
§.§ Charged scalar H^±
Since in the alignment/decoupling limit the H^+ h W^+ vertex vanishes, the partial decay widths of the charged scalar H^+ that are relevant in the small tanβ regime read
Γ ( H^+ → t b̅ ) = N_c^t |V_tb|^2 ( ξ_t^ M)^2/8πm_t^2/v^2 M_H^± ( 1 - m_t^2/ M_H^±^2 )^2 ,
Γ ( H^+ → H W^+ ) = 1/16πλ^3/2 (M_H^±, M_H, M_W)/M_H^±^3 v^2 ,
Γ ( H^+ → A W^+ ) = 1/16πλ^3/2 (M_H^±, M_A, M_W)/M_H^±^3 v^2 cos^2 θ ,
Γ ( H^+ → a W^+ ) = 1/16πλ^3/2 (M_H^±, M_a, M_W)/M_H^±^3 v^2 sin^2 θ ,
where in the case of H^+ → t b̅ we have neglected terms of O (m_b^2/M_H^±^2) in the expression for the partial decay width. Notice that in THDMs of type II and III also the decay H^+ →τ^+ ν_τ can be important if tanβ≫ 1. The result for Γ (H^+ →τ^+ ν_τ ) can be obtained from the expression given above for Γ ( H^+ → t b̅ ) by obvious replacements.
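The corresponding numerics are again simple; the following sketch evaluates the four partial widths for an illustrative parameter point with M_A < M_H^± (so that sinθ is unconstrained by Δρ) and |V_tb| = 1.

import numpy as np

v, mt, MW = 246.0, 173.0, 80.4

def lam(m1, m2, m3):
    # two-body phase-space function lambda(m1, m2, m3) defined in the text
    return (m1**2 - m2**2 - m3**2)**2 - 4.0 * m2**2 * m3**2

def widths_Hp(MHpm, MH, MA, Ma, sin_th, tanb, Vtb=1.0):
    # common factor of the H+ -> (H, A, a) W+ widths
    pref = lambda m: lam(MHpm, m, MW)**1.5 / (16 * np.pi * MHpm**3 * v**2)
    G = {'tb': 3.0 * Vtb**2 / (8 * np.pi) * (1.0 / tanb)**2 * mt**2 / v**2
               * MHpm * (1.0 - mt**2 / MHpm**2)**2}      # type II xi_t = cot(beta)
    if MH + MW < MHpm:
        G['HW'] = pref(MH)
    if MA + MW < MHpm:
        G['AW'] = pref(MA) * (1.0 - sin_th**2)
    if Ma + MW < MHpm:
        G['aW'] = pref(Ma) * sin_th**2
    return G

G = widths_Hp(MHpm=750.0, MH=750.0, MA=500.0, Ma=200.0, sin_th=1.0 / np.sqrt(2), tanb=1.0)
tot = sum(G.values())
print({k: round(w / tot, 2) for k, w in G.items()})

For M_H^± = 750 GeV, M_A = 500 GeV, M_a = 200 GeV, sinθ = 1/√2 and tanβ = 1 one obtains BR (H^+ → aW^+ ) ≃ 0.52, BR (H^+ → t b̅ ) ≃ 0.39 and BR (H^+ → AW^+ ) ≃ 0.10, in line with the discussion of Figure <ref> below.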
The main branching ratios of the charged Higgs H^+ are displayed in Figure <ref>. On the left-hand side of the figure the case of sinθ = 0.35 and M_H = 500 GeV is displayed and one observes that BR (H^+ → t b̅ ) > BR (H^+ → HW^+ ) > BR (H^+ → aW^+ ) for the shown values of M_a. Notice that for scenarios with M_H < M_H^± the hierarchy BR (H^+ → HW^+ ) > BR (H^+ → aW^+ ) is a rather model-independent prediction since in such cases EW precision measurements require sinθ to be small and Γ ( H^+ → a W^+ )/ Γ ( H^+ → H W^+ ) ∝sin^2 θ. The same is not true for the hierarchy between BR (H^+ → t b̅ ) and BR (H^+ → HW^+ ) which depends sensitively on the choice of tanβ since Γ ( H^+ → t b̅ )/ Γ ( H^+ → H W^+ ) ∝ 1/tan^2 β. It follows that for values of tanβ > 1 the H^+ → H W^+ channel can also be the dominant decay mode. In model realisations with M_A < M_H^± there are no constraints from Δρ on sinθ and in turn the H^+ → aW^+ branching ratio can dominate for sufficiently large mixing in the pseudoscalar sector. This feature is illustrated by the right panel in the figure using sinθ = 1/√(2) and M_A= 500 GeV. For this choice of input parameters we find that BR (H^+ → aW^+ ) > BR (H^+ → t b̅ ) for masses M_a ≲ 300 GeV. Since the pseudoscalar a predominantly decays via a →χχ̅ it follows that THDM plus pseudoscalar extensions with M_A < M_H^± can lead to a resonant mono-W signal. We will discuss the LHC prospects for the detection of such a E_T, miss signature in Section <ref>.
§ ANATOMY OF MONO-X SIGNATURES
In this section we will discuss the most important features of the mono-X phenomenology of the pseudoscalar extensions of the THDM. We examine the mono-jet, the t t̅ + E_T, miss, the mono-Z and the mono-Higgs signature. The b b̅ + E_T, miss and mono-W channels are also briefly considered. Our numerical analysis of the mono-X signals is postponed to Section <ref>.
§.§ Mono-jet channel
A first possibility to search for pseudoscalar interactions of the form (<ref>) consists in looking for a mono-jet signal, where the mediators that pair produce DM are radiated from heavy-quark loops <cit.>. Representative examples of the possible one-loop Feynman diagrams are shown in Figure <ref>.
For M_a > 2 m_χ and M_A≫ M_a only graphs involving the exchange of the light pseudoscalar a will contribute to the j + E_T, miss signal. As a result the normalised kinematic distributions of the mono-jet signal in the pseudoscalar extensions of the THDM are identical to those of the DMF pseudoscalar model. Working in the NWA and assuming that tanβ is small, the ratio of the fiducial cross sections in the two models is thus approximately given by the simple expression
σ (pp → j + E_T, miss )/σ (pp → j + E_T, miss )_ DMF≃ ( y_χsinθ/g_χ g_q tanβ )^2 .
Here g_χ (g_q) denotes the DM-mediator (universal quark-mediator) coupling in the corresponding DMF spin-0 simplified model. Notice that the above relation is largely independent of the choice of Yukawa sector as long as tanβ = O (1) since bottom-quark loops have only an effect of a few percent on the j + E_T, miss distributions (see for instance <cit.> for a related discussion in the context of Higgs physics). Using the approximation (<ref>) it is straightforward to recast existing mono-jet results on the DMF pseudoscalar model such as those given in <cit.> into the THDM plus pseudoscalar model space. The numerical results presented in the next section however do not employ any approximation since they are based on a calculation of the j + E_T, miss cross sections including both top-quark and bottom-quark loops as well as the exchange of both a and A mediators.
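For recasting purposes the relation (<ref>) can be wrapped into a one-line helper; the default reference couplings g_χ = g_q = 1 correspond to a common DMF benchmark choice and are an assumption here.

def sigma_mono_jet(sigma_dmf, y_chi, sin_th, tanb, g_chi=1.0, g_q=1.0):
    # NWA rescaling of a DMF pseudoscalar mono-jet cross section; valid for
    # M_a > 2 m_chi, M_A >> M_a and small tan(beta), cf. the relation above
    return sigma_dmf * (y_chi * sin_th / (g_chi * g_q * tanb))**2

For instance, for y_χ = 1, sinθ = 1/√2 and tanβ = 1 a DMF cross section is simply rescaled by sin^2 θ = 0.5.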
§.§ t t̅/b b̅ + E_T, miss channels
A second channel that is known to be a sensitive probe of top-philic pseudoscalars with large invisible decay widths is associated production of DM and t t̅ pairs <cit.>. Figure <ref> displays examples of tree-level diagrams that give rise to a t t̅ + E_T, miss signature in the pseudoscalar extensions of the THDM.
In the case that A is again much heavier than a, the signal strength for t t̅ + E_T, miss in our simplified model can be obtained from the prediction in the DMF pseudoscalar scenario from a rescaling relation analogous to the one shown in (<ref>). Using such a simple recasting procedure we find that the most recent ATLAS <cit.> and CMS searches for t t̅+ E_T, miss <cit.> that are based on 13.2 fb^-1 and 2.2 fb^-1 of 13 TeV LHC data, respectively, only allow one to set very weak bounds on tanβ. For instance for M_a = 100 GeV, y_χ =1 and m_χ = 1 GeV a lower limit of tanβ≳ 0.2 is obtained. The t t̅+ E_T, miss constraints on the parameter space of the pseudoscalar extensions of the THDM are however expected to improve notably at forthcoming LHC runs. The numerical results that will be presented in Section <ref> are based on the search strategy developed recently in <cit.>, which employs a shape fit to the difference in pseudorapidity of the two charged leptons in the di-leptonic channel of t t̅+ E_T, miss.
Besides t t̅ + E_T, miss also b b̅ + E_T, miss production <cit.> has been advocated as a sensitive probe of spin-0 portal couplings to heavy quarks. Recasting the most recent 13 TeV LHC b b̅ + E_T, miss searches <cit.> by means of a simple rescaling similar to (<ref>) we find that no relevant bound on the parameter space of our simplified model can be derived unless the a b b̅ coupling is significantly enhanced. From (<ref>) we see that such an enhancement can only arise in THDMs of type II and IV, while it is not possible for the other Yukawa assignments. Since in the limit of large tanβ also direct searches for the light pseudoscalar a in final states containing bottom quarks or charged leptons are relevant (and naively even provide the leading constraints) we do not consider the b b̅ + E_T, miss channel in what follows, restricting our numerical analysis to the parameter space with small tanβ.
§.§ Mono-Z channel
A mono-X signal that is strongly suppressed in the case of the spin-0 DMF models <cit.> but will turn out to be relevant in our simplified DM scenario is the mono-Z channel <cit.>. A sample of one-loop diagrams that lead to such a signature is displayed in Figure <ref>. Notice that the left diagram in the figure allows for resonant Z + χχ̅ production through an HaZ vertex for a sufficiently heavy scalar H. Unlike the graph on the right-hand side it has no counterpart in the spin-0 DMF simplified models.
As first emphasised in <cit.> the appearance of the contribution with virtual H and a exchange not only enhances the mono-Z cross section compared to the spin-0 DMF models, but also leads to quite different kinematics in Z + χχ̅ production. In fact, for masses M_H > M_a + M_Z the predicted E_T , miss spectrum turns out to be peaked at
E_T, miss^ max≃λ^1/2 (M_H, M_a, M_Z)/2 M_H ,
where the two-body phase-space function λ(m_1, m_2, m_3) has been defined in (<ref>). Denoting the lower experimental requirement on E_T, miss in a given mono-Z search by E_T, miss^ cut the latter result can be used to derive a simple bound on M_H for which a significant fraction of the total cross section will pass the cut. We obtain the inequality
M_H ≳ M_a + √(M_Z^2+ ( E_T, miss^ cut)^2) .
Given that in the latest mono-Z analyses <cit.> selection cuts of E_T, miss^ cut≃ 100 GeV are imposed it follows that the scalar H has to have a mass of M_H ≃ 500 GeV if one wants to be sensitive to pseudoscalars a with masses up to the t t̅ threshold M_a ≃ 350 GeV.
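Both the peak position (<ref>) and the mass bound (<ref>) amount to a few lines of arithmetic; the same helper applies to the mono-Higgs case discussed below upon replacing M_Z → M_h.

import numpy as np

MZ, Mh = 91.19, 125.0

def lam(m1, m2, m3):
    # two-body phase-space function lambda(m1, m2, m3) defined in (<ref>)
    return (m1**2 - m2**2 - m3**2)**2 - 4.0 * m2**2 * m3**2

def met_peak(M_res, Ma, m_x):
    # position of the E_T,miss peak for a resonant (heavy spin-0) -> a + X decay
    return np.sqrt(lam(M_res, Ma, m_x)) / (2.0 * M_res)

def min_res_mass(Ma, m_x, met_cut):
    # minimal resonance mass for which the peak passes a given E_T,miss cut
    return Ma + np.sqrt(m_x**2 + met_cut**2)

print(met_peak(500.0, 350.0, MZ))      # ~101 GeV: peak for M_H = 500 GeV, M_a = 350 GeV
print(min_res_mass(350.0, MZ, 100.0))  # ~485 GeV: minimal M_H for a 100 GeV cut (mono-Z)
print(min_res_mass(330.0, Mh, 100.0))  # ~490 GeV: mono-Higgs analogue (M_Z -> M_h)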
Our detailed Monte Carlo (MC) simulations of the Z + E_T, miss signal in Section <ref> however reveal that the above kinematical argument alone is insufficient to understand the shape of the mono-Z exclusion in the M_a–tanβ plane in all instances. The reason for this is twofold. First, in cases where sinθ is small H → aZ is often not the dominant H decay mode and as a result the Z+ E_T, miss measurements already lose sensitivity for masses M_a below the bound implied by the estimate (<ref>). Second, Z + χχ̅ production in gg → a Z and gg → A Z is also possible through box diagrams, and the interference between triangle and box graphs turns out to be very relevant in models that have a light scalar H or pseudoscalar A with a mass below the t t̅ threshold. We add that for tanβ > O (10) also resonant mono-Z production via b b̅→ aZ and b b̅→ AZ can be relevant in models of type II and IV. In the context of the pure THDM such effects have been studied for instance in <cit.>.
§.§ Mono-Higgs channel
In certain regions of parameter space another possible smoking gun signature of the pseudoscalar extensions of the THDM turns out to be mono-Higgs production. As illustrated in Figure <ref> this signal can arise from two different types of one-loop diagrams. For M_A > M_a + M_h the triangle graph with an Aah vertex depicted on the left-hand side allows for resonant mono-Higgs production and thus dominates over the contribution of the box diagram displayed on the right. In consequence the mono-Higgs production cross sections in the THDM plus pseudoscalar extensions can exceed by far the small spin-0 DMF model rates for the h + E_T, miss signal <cit.>.
Like in the case of the mono-Z signal the presence of triangle diagrams with a trilinear scalar coupling also leads to a peak in the E_T, miss distribution of h + χχ̅ production if the intermediate heavy pseudoscalar A can be resonantly produced. The peak position in the mono-Higgs case is obtained from <cit.>
E_T, miss^ max≃λ^1/2 (M_A, M_a, M_h)/2 M_A .
It follows that in order for events to pass the E_T, miss cut necessary for a background suppression in mono-Higgs searches, the relation
M_A ≳ M_a + √(M_h^2 + ( E_T, miss^ cut)^2) ,
has to be fulfilled. A lesson to learn from (<ref>) is that mono-Higgs searches in the h → b b̅ channel <cit.> are less suited to constrain the parameter space of our simplified model than those that focus on h →γγ<cit.>, because the minimal E_T, miss requirements in the former analyses are always stricter than those in the latter. To give a relevant numerical example let us consider E_T, miss^ cut≃ 100 GeV, which represents a typical E_T, miss cut imposed in the most recent h + χχ̅ (h →γγ) searches. From (<ref>) one sees that in such a case mono-Higgs analyses are very sensitive to masses up to M_a ≃ 330 GeV for M_A ≃ 500 GeV.
Like in the mono-Z case the above kinematical argument however allows only for a qualitative understanding of the numerical results for the pp → h + χχ̅ (h →γγ) exclusions, since interference effects can be important in scenarios with a pseudoscalar A of mass M_A <2 m_t. Notice that if M_a > M_A + M_h the role of A and a is interchanged and the h + E_T, miss signal can receive large corrections from resonant a exchanges, as we will see explicitly in Section <ref>. Finally in type II and IV models resonant mono-Higgs production from b b̅ initial states can also be important if tanβ is sufficiently large.
§.§ Mono-W channel
The last E_T, miss signal that we consider is the mono-W channel <cit.>. Two representative Feynman graphs that lead to a resonant W + E_T, miss signature in the pseudoscalar extension of the THDM are shown in Figure <ref>. These diagrams describe the single production of a charged Higgs H^± via the annihilation of light quarks followed by H^±→ a W^± (a →χχ̅). One way to assess the prospects for detecting a mono-W signature consists in comparing the production cross sections of H^± to those of H and A. Using for instance tanβ = 1, we find σ ( pp → H^+ ) ≃ 1.0 fb for M_H^± = 500 GeV and σ ( pp → H^+ ) ≃ 0.2 fb for M_H^± = 750 GeV at the 13 TeV LHC. The corresponding cross sections in the case of the heavy neutral spin-0 resonances read σ ( pp → H ) ≃ 1.4 pb and σ ( pp → A ) ≃ 3.1 pb and σ ( pp → H ) ≃ 0.2 pb and σ ( pp → A ) ≃ 0.3 pb, respectively. These numbers strongly suggest that an observation of a mono-W signal is much less probable than that of a mono-Z or mono-Higgs signature. We thus do not consider the W + E_T, miss channel any further.
Let us finally add that besides a simple mono-W signature also Wt + E_T, miss and Wtb+E_T, miss signals can appear in the DM model introduced in Section <ref>. For the relevant charged Higgs production cross sections we find at 13 TeV the results σ ( g b̅→ H^+ t̅ ) ≃ 0.17 pb (σ ( g b̅→ H^+ t̅ ) ≃ 0.04 pb) and σ ( g g → H^+ t̅ b ) ≃ 0.10 pb (σ ( gg → H^+ t̅ b ) ≃ 0.02 pb) using tanβ =1 and M_H^± = 500 GeV (M_H^± = 750 GeV). Given the small H^± production cross section in gb and gg fusion, we expect that searches for a Wt + E_T, miss or a Wtb + E_T, miss signal will in practice provide no relevant constraint in the small tanβ regime.
§ NUMERICAL RESULTS
The numerical results of our mono-X analyses are presented in this section. After a brief description of the signal generation and the background estimates, we first study the impact of interference effects between the a and A contributions to the j + χχ̅ and t t̅ + χχ̅ channels. Then the constraints on the parameter space of the THDM plus pseudoscalar extensions are derived for several well-motivated benchmark scenarios. In the case of the mono-Z and mono-Higgs searches we also discuss the LHC Run II reach in some detail.
§.§ Signal generation
The starting point of our MC simulations is a UFO implementation <cit.> of the simplified model as described in Section <ref>. This implementation has been obtained by means of the FeynRules 2 <cit.> and NLOCT <cit.> packages. The generation of the j + E_T, miss, Z + E_T, miss (Z →ℓ^+ ℓ^-) and h + E_T, miss (h →γγ) signal samples is performed at leading order (LO) with MadGraph5_aMC@NLO <cit.> using PYTHIA 8.2<cit.> for showering and NNPDF2.3 <cit.> as parton distribution functions. The whole MC chain is steered with CheckMATE 2 <cit.> which itself employs FastJet <cit.> to reconstruct hadronic jets and Delphes 3 <cit.> as a fast-detector simulation. The results of the CheckMATE 2 analyses have been validated against MadAnalysis 5 <cit.>. The selection requirements imposed in our analyses resemble those used in the recent LHC mono-jet <cit.>, mono-Z <cit.> and mono-Higgs <cit.> search, respectively. For what concerns our t t̅ + E_T, miss (t →ℓ b ν) recast we rely on the results of the sensitivity study <cit.>. In this analysis the DM signal has been simulated at next-to-leading order (NLO) with MadGraph5_aMC@NLO and PYTHIA 8.2 using a FxFx NLO jet matching prescription <cit.> and the final-state top quarks have been decayed with MadSpin <cit.>.
§.§ Background estimates
For the j + E_T, miss, Z + E_T, miss (Z →ℓ^+ ℓ^-) and h + E_T, miss (h →γγ) recasts our background estimates rely on the background predictions obtained in the 13 TeV LHC analyses <cit.>, <cit.> and <cit.>, respectively. The given background numbers correspond to 3.2 fb^-1, 13.3 fb^-1, 2.3 fb^-1 and we extrapolate them to 40 fb^-1 of integrated luminosity to be able to assess the near-term reach of the different mono-X channels. Our extrapolations assume that while the relative systematic uncertainties remain the same, the relative statistical errors scale as 1/√(L) with luminosity L. Depending on the signal region the relative systematic uncertainties amount to around 4% to 9% in the case of the mono-jet search, about 7% for the mono-Z analysis and approximately 20% for the mono-Higgs channel.
Since the j + E_T, miss search is already systematics limited at 40 fb^-1 its constraining power will depend sensitively on the assumption about the future systematic uncertainty on the associated SM background. This should be kept in mind when comparing the different exclusions presented below, because a better understanding of the backgrounds can have a visible impact on the obtained results. Since the t t̅ + E_T, miss (t →ℓ b ν) search will still be statistically limited for 40 fb^-1, we base our forecast in this case on a data set of 300 fb^-1 assuming that the relevant SM background is known to 20%. In the mono-Z and mono-Higgs cases we will present below, besides 40 fb^-1 projections, results for 100 fb^-1 and 300 fb^-1 of data. From these results one can assess if the existing Z + E_T, miss and h + E_T, miss search strategies will at some point become systematics limited in LHC Run II.
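The luminosity scaling just described corresponds to the following simple combination; treating the statistical and systematic components as uncorrelated and adding them in quadrature is an assumption of this sketch, and the 10% statistical uncertainty is purely illustrative.

import numpy as np

def extrapolated_unc(rel_stat, rel_sys, L0, L):
    # relative statistical error scales as 1/sqrt(L), relative systematics are
    # held fixed; the two components are combined in quadrature (an assumption)
    return np.hypot(rel_stat * np.sqrt(L0 / L), rel_sys)

# e.g. 10% statistical and 7% systematic uncertainty at 13.3 fb^-1, pushed to 40 fb^-1
print(extrapolated_unc(0.10, 0.07, 13.3, 40.0))  # ~0.091: systematics start to dominate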
§.§ Interference effects
Our simplified model contains two pseudoscalar mediators a and A that are admixtures of the neutral CP-odd weak eigenstates entering (<ref>) and (<ref>). In mono-jet production the two contributions interfere and the resulting LO matrix element takes the following schematic form
M ( p p → j + χχ̅ ) ∝ 1/(m_χχ̅^2 - M_a^2 - i M_a Γ_a) - 1/(m_χχ̅^2 - M_A^2 - i M_A Γ_A) ,
where m_χχ̅ denotes the invariant mass of the DM pair and Γ_a and Γ_A are the total decay widths of the two pseudoscalar mass eigenstates.
The same result holds for instance also in the case of the pp → t t̅ + χχ̅ amplitude. Notice that the contributions from virtual a and A exchange have opposite signs in (<ref>), resulting from the transformation from the weak to the mass eigenstate basis. Such a destructive interference of two contributions also appears in fermion scalar singlet models with Higgs mixing and has there been shown to be phenomenologically relevant <cit.>.
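The relative sign in (<ref>) is easily made explicit. The sketch below evaluates the schematic propagator combination for illustrative masses and total widths (M_a = 200 GeV, Γ_a = 5 GeV, M_A = 500 GeV, Γ_A = 25 GeV are assumptions) and compares it to the a-only result:

import numpy as np

def amp(s, Ma, Ga, MA, GA):
    # schematic a/A propagator combination of the LO matrix element above
    return 1.0 / (s - Ma**2 - 1j * Ma * Ga) - 1.0 / (s - MA**2 - 1j * MA * GA)

Ma, Ga, MA, GA = 200.0, 5.0, 500.0, 25.0   # illustrative masses and widths in GeV
for m in (250.0, 800.0):                   # invariant masses of the DM pair
    s = m**2
    full = abs(amp(s, Ma, Ga, MA, GA))**2
    a_only = abs(1.0 / (s - Ma**2 - 1j * Ma * Ga))**2
    print(m, full / a_only)  # ~1.25 at 250 GeV (constructive), ~0.29 at 800 GeV (destructive)

The ratio exceeds unity between the two poles (constructive interference) and drops below unity above both of them (destructive interference), illustrating the interference pattern discussed below.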
The impact of interference effects on the predictions of the mono-jet and t t̅ + E_T, miss cross sections is illustrated in Figure <ref> for three different values of the mass of the pseudoscalar A. Both plots display partonic LO results at 13 TeV LHC energies. In the left panel the basic selection requirements E_T, miss > 250 GeV and |η_j| < 2.4 are imposed, with η_j denoting the pseudorapidity of the jet, while in the right figure only the cut E_T, miss > 150 GeV is applied. Focusing first on the cross sections for M_A = 750 GeV (red curves), one observes that in this case interference effects do not play any role since the pseudoscalar A is too heavy and effectively decouples. One also sees that at M_a ≃ 350 GeV the cross sections of both mono-jet and t t̅ + E_T, miss production are enhanced due to t t̅ threshold effects. Notice furthermore that the enhancement is more pronounced for the j + E_T, miss signal because the top-quark loops develop an imaginary part once the internal tops can go on-shell.
The results for M_A = 500 GeV (green curves) resemble closely those for M_A = 750 GeV until M_a ≃ M_A - M_h ≃ 375 GeV at which point one observes an enhancement of the rates compared to the case of very heavy A. This feature is a consequence of the fact that for M_a < M_A - M_h the A → ah channel is the dominant decay mode of A, as can be seen from the right plot in Figure <ref>. For larger masses of a the phase space of A → ah closes and in turn BR (A →χχ̅ ) increases. This leads to constructive interference between the two terms in (<ref>) until M_a ≃ M_A = 500 GeV where the interference becomes destructive. Notice furthermore that the same qualitative explanations apply to the case of M_A = 300 GeV (blue curves) where the constructive and destructive interference takes place at M_a ≃ M_A - M_h ≃ 175 GeV and M_a ≃ M_A = 300 GeV, respectively. Comparing the left and right panel of Figure <ref>, one finally sees that the observed interference pattern is at the qualitative level independent of the choice of sinθ.
§.§ Summary plots
Below we study four different benchmark scenarios that exemplify the rich E_T, miss phenomenology of the simplified DM model introduced in Section <ref>. Throughout our analysis we work in the alignment/decoupling limit, adopting the parameters M_H^± = 750 GeV, λ_3 = λ_P1 = λ_P2 = 0, y_χ = 1 and m_χ= 1 GeV and consider a Yukawa sector of type II. The shown results however also hold in the case of the other Yukawa sectors (<ref>) since for tanβ = O (1) effects of bottom-quark loops in mono-jet, mono-Z and mono-Higgs production amount to corrections of a few percent only. The model-dependent contributions from b b̅-initiated production also turn out to be small for such values of tanβ. The constraints on all benchmark scenarios will be presented in the M_a–tanβ plane, in which the parameter regions that are excluded at 95% CL by the various searches will be indicated.
§.§.§ Benchmark scenario 1: sinθ = 0.35, M_H = 500 GeV
In the first benchmark scenario we choose sinθ = 0.35, M_H = 500 GeV and M_A = 750 GeV, where the choice of sinθ guarantees that EW precision measurements are satisfied for all values of M_a that we consider (see Section <ref>). The upper left panel in Figure <ref> summarises the various 95% CL exclusions. One first observes that the constraint from invisible decays of the Higgs (pink region) excludes all shown values of tanβ for mediator masses of M_a ≲ 100 GeV. This constraint has been obtained by imposing the 95% CL limit BR ( h → invisible ) < 25% set by ATLAS <cit.>. Notice that in the THDM plus pseudoscalar extensions one has BR ( h → invisible ) ≃ BR ( h → 2 χ 2 χ̅ ) ≃ 100% for a DM mass of m_χ = 1 GeV largely independent of sinθ, M_H and M_A, and as a result the h → invisible constraint is roughly the same in all of our benchmark scenarios. One furthermore sees that taken together the existing limits from flavour physics (dotted black line) and di-top searches (dashed black curve) exclude the parameter region with tanβ≲ 0.8. Here the di-top constraint is obtained from the results <cit.> by rescaling the limit quoted by ATLAS using the t t̅ branching ratio of the heavy scalar mediator H (see Section <ref>).
Turning one's attention to the constraints that arise from DM searches, one observes that even with an integrated luminosity of 300 fb^-1, t t̅ + E_T, miss measurements (green region) should be able to exclude only a small part of the M_a–tanβ plane. For pseudoscalar masses M_a around the EW scale values of tanβ≲ 0.6 can be tested, while t t̅ + E_T, miss searches have essentially no sensitivity to the parameter region with M_a ≳ 2 m_t since the decay channel a → t t̅ opens up. The weakness of the t t̅ + E_T, miss constraint is expected (see (<ref>)) since the t t̅ + a production cross section is suppressed by sin^2 θ≃ 0.1 in our first benchmark. This suppression is also the reason for our finding that with 40 fb^-1 of 13 TeV data, mono-jet searches will not lead to any relevant restriction on tanβ, if one assumes that these near-future measurements are plagued by systematic uncertainties at the 5% level in the low-E_T, miss signal regions.
The hypothetical mono-Z search (blue region) based on 40 fb^-1 of data provides the strongest constraint for M_a ≲ 250 GeV, excluding tanβ values slightly above 2 for light mediators a. This strong bound is a result of the resonant enhancement of Z + χχ̅ production in our first benchmark scenario. Notice furthermore the sharp cut-off of the Z + E_T, miss exclusion at M_a ≃ 260 GeV. For larger pseudoscalar masses M_a one finds that BR (H → aZ ) ≲ 10% (see the right panel in Figure <ref>) and as a result mono-Z production through triangle graphs is strongly reduced. This explains why the Z + E_T, miss search already loses sensitivity before M_a ≃ 350 GeV, the value one would naively expect from (<ref>) for the E_T, miss^ cut = 120 GeV high-mass signal region requirement imposed in <cit.>. We finally see that with 40 fb^-1 of integrated luminosity mono-Higgs searches (orange region) can cover only a small part of the parameter space compared to mono-Z measurements.
§.§.§ Benchmark scenario 2: sinθ = 0.25, M_H = 300 GeV
In our second benchmark scenario, the sine of the mixing angle is sinθ =0.25 and the masses of H and A are taken to be M_H = 300 GeV and M_A = 750 GeV. The corresponding exclusion contours are depicted in the upper right panel of Figure <ref>. The constraints from h → invisible (pink region) and flavour physics (dotted black line) resemble the exclusions that apply in the first benchmark case. The recent ATLAS di-top search instead does not lead to a constraint since, on the one hand, t t̅ decays of the scalar H are kinematically forbidden, and on the other hand, the ATLAS sensitivity to very heavy pseudoscalars A is not sufficient to set a bound on tanβ.
Given the smallness of sinθ, we find that our hypothetical t t̅ + E_T, miss search only probes the parameter region with M_a ≲ 2m_t and tanβ≲ 0.4. Mono-jet measurements are expected to provide even weaker restrictions and in consequence we do not show the corresponding bounds. As in the case of the first benchmark scenario, the mono-Z exclusion (blue region) is the most stringent constraint for a large range of M_a values, excluding values of tanβ≲ 1.5 for M_a ≃ M_h. The dip of the exclusion limit at M_a ≃ 170 GeV coincides with the bound derived in (<ref>) if the low-mass signal region requirement E_T, miss^ cut = 90 GeV <cit.> is imposed. One also observes that for larger mediator masses the mono-Z exclusion strengthens until the point where M_a ≃ 220 GeV. This is a result of the constructive interference between triangle and box graphs (see Figure <ref>). The bound that follows from our 40 fb^-1 mono-Higgs projection (orange region) is again rather weak compared to the mono-Z exclusion.
§.§.§ Benchmark scenario 3: sinθ = 1/√(2), M_A = 500 GeV
Our third benchmark scenario employs sinθ = 1/√(2), M_A = 500 GeV and M_H = 750 GeV. Notice that for M_H = M_H^± the mixing in the pseudoscalar sector can be large since there are no constraints on sinθ and M_a from Δρ. The constraints on the M_a–tanβ plane corresponding to these parameter choices are presented in the lower left panel of Figure <ref>. The bounds from h → invisible decays (pink region) and flavour physics (dotted black line) are essentially unchanged with respect to the previous benchmarks. The shown di-top constraint (dashed black curve) differs from the one displayed in the upper left panel since it follows from the bound provided in <cit.> for a pseudoscalar with a mass of 500 GeV.
In the case of the mono-jet constraint (red region) one sees that it should now be possible to exclude values of tanβ≲ 0.4 for M_a ≲ 350 GeV. One furthermore observes that future t t̅ + E_T, miss searches (green region) are expected to extend the parameter space excluded by the non-E_T, miss constraints to tanβ values above 1 for M_a ≲ 200 GeV. Although the scalar H is very heavy, we find that the mono-Z projection (blue region) still provides relevant constraints in the M_a–tanβ plane for masses below the a → t t̅ threshold, because the mixing angle θ is maximal in our third benchmark. The strongest E_T, miss constraint is however provided by the mono-Higgs search (orange region), which should be able to exclude values of tanβ≲ 2 for pseudoscalars a with masses at the EW scale. Notice that the mono-Higgs exclusion has a sharp cut-off at M_a ≃ 350 GeV, as expected from the inequality (<ref>) for E_T, miss^ cut = 105 GeV <cit.>.
§.§.§ Benchmark scenario 4: sinθ = 1/√(2), M_A = 300 GeV
In the fourth benchmark we consider the parameters sinθ = 1/√(2), M_A = 300 GeV and M_H = 750 GeV. As can be seen from the lower right panel of Figure <ref>, the regions excluded by Higgs to invisible decays (pink region) and flavour physics (dotted black line) are close to identical to those arising in all the other scenarios. In contrast, di-top searches do not lead to a restriction because the pseudoscalar A is too light to decay to two on-shell top quarks, while the ATLAS search <cit.> is not yet sensitive to very heavy scalars H.
The shapes of the exclusions from the j + E_T, miss (red region) and t t̅ + E_T, miss (green region) measurements display an interference pattern that is very similar to the one seen in Figure <ref>. In turn future mono-jet (t t̅ + E_T, miss) searches are expected to be able to exclude values of tanβ≲ 0.4 (tanβ≲ 1) for mediator masses M_a above the t t̅ threshold. Focusing our attention on the mono-Z projection (blue region) we observe that the corresponding exclusion curve has a pronounced dip at M_a ≃ 180 GeV. It originates from the interference of triangle diagrams with box graphs that correspond to gg → AZ → Z+χχ̅ (see Figure <ref>). This interference is destructive and maximal when the decay channel A → aZ starts to close, leading to BR (A →χχ̅ ) ≃ 100% for the considered value of M_A.
Like in the third benchmark the mono-Higgs search (orange region) is again the most powerful E_T, miss constraint, as it allows one to exclude values of tanβ≲ 3.7 for M_a ≃ 100 GeV. We also note that the mono-Higgs search maintains sensitivity for M_a values well above the estimate presented in (<ref>). The reason is that for sufficiently light pseudoscalars A, triangle diagrams with resonant a exchange (see Figure <ref>) can provide a sizeable contribution to mono-Higgs production. This resonant enhancement allows one to probe values of tanβ above 1 for M_a ≳ 300 GeV. Notice finally that at M_a ≃ M_A = 300 GeV the a and A contributions interfere destructively, leading to a visible dip in the h + E_T, miss exclusion.
§.§ LHC Run II reach
The future prospects of the mono-Z (blue regions) and mono-Higgs (orange regions) constraints are illustrated in Figure <ref> for our four benchmark scenarios. We find that by collecting more data the reach of the Z + E_T, miss measurements is expected to strengthen, but that the actual improvement depends sensitively on the assumption about the systematic uncertainty on the irreducible SM backgrounds. Assuming a systematic error of 7%, we observe that the limits on tanβ will improve by a mere 10% when going from 40 fb^-1 to 300 fb^-1 of data. In order to further exploit the potential of mono-Z searches, advances in the modelling of ZZ production within the SM would hence be very welcome.
In contrast to mono-Z searches it turns out that in the case of the h + E_T, miss measurements systematic uncertainties will not be a limiting factor even at the end of LHC Run II. By increasing the amount of data to 100 fb^-1 and 300 fb^-1, we anticipate that it should be possible to improve the 40 fb^-1 mono-Higgs limits on tanβ by typically 25% and 50%, respectively. Notice that larger data sets will be most beneficial in our first and second benchmark scenario in which sinθ is small. In these cases the resulting h + E_T, miss (h →γγ) event rates are so low that the sensitivity in the mono-Higgs channel is limited largely by statistics for 40 fb^-1 of luminosity.
As explained earlier in Section <ref>, we expect that forthcoming searches for spin-0 resonances in the τ^+ τ^- final state should make it possible to set relevant constraints on tanβ in model realisations with a light scalar H of mass M_H< 2 m_t. In the case of our second benchmark scenario this means that it should be possible to test and to exclude the parameter space with tanβ≲ O (1) and M_a ≳ 210 GeV at LHC Run II. Such an exclusion would indeed be valuable because, as illustrated by the upper right panel of Figure <ref>, this part of the M_a–tanβ plane is notoriously difficult to constrain through E_T, miss searches.
Finally, let us comment on an effect already mentioned briefly in Sections <ref> and <ref>. In pseudoscalar extensions of the THDM that feature a tanβ enhancement of the bottom-quark Yukawa coupling it is possible in principle to obtain relevant contributions to mono-X signals not only from the gg → Z/h + E_T, miss transitions but also from the b b̅→ Z/h + E_T, miss channels. In Figure <ref> only the model-independent contribution from gg production was taken into account, because the exclusion bounds remain essentially unchanged if also the b b̅-initiated channels are included.
With 300 fb^-1 of integrated luminosity this situation is however expected to change. Searches for mono-Z signals, for example, should be able to exclude values of tanβ≳ O (8) for certain ranges of M_a in all four benchmarks. In the third and fourth benchmark scenario there are particularly relevant changes to the projected sensitivity of mono-Higgs searches, as illustrated in Figure <ref>. For M_A = 500 GeV (left panel) we observe that, after including both gg and b b̅ initiated production, model realisations with tanβ≳ 10 for M_a ≲ 220 GeV are excluded. The impact of b b̅→ h + E_T, miss is even more pronounced for a light A with M_A = 300 GeV (right panel). In this case we see that it should be possible to exclude masses M_a ≲ 170 GeV for any value of tanβ. The results displayed in Figure <ref> have been obtained in the context of a Yukawa sector of type II. Almost identical sensitivities are found in models of type IV, while in pseudoscalar THDM extensions of type I and III bottom-quark initiated contributions are irrelevant, since they are tanβ suppressed (see (<ref>)).
§ CONCLUSIONS
We have proposed a new framework of renormalisable simplified models for dark matter searches at the LHC, namely single-mediator extensions of two Higgs doublet models containing a fermionic dark matter candidate. The mediator can have either scalar or pseudoscalar quantum numbers and all amplitudes are unitary as long as the mediator couplings are perturbative. Constraints from Higgs coupling measurements are averted by mixing the mediator with the heavy scalar or pseudoscalar partners of the Standard Model Higgs. This framework unifies previously established simplified spin-0 models, while avoiding their shortcomings, and can reproduce several of their features in the appropriate limit.
In this work we have focused on the case of a pseudoscalar mediator a. We have considered the alignment/decoupling limit, in which some of the Higgs partners have masses close to the TeV scale, while either the neutral scalar H or pseudoscalar A is lighter with a mass as low as 300 GeV. For the mass of the new pseudoscalar mediator we have considered the range from half the Higgs-boson mass to 500 GeV. These parameter choices are well motivated by Higgs physics, LHC searches for additional spin-0 states, electroweak precision measurements and quark-flavour bounds such as those arising from B → X_s γ and B-meson mixing. Limits on the quartic couplings that arise from perturbativity, unitarity and the requirement that the total decay widths of H and A are sufficiently small for the narrow-width approximation to be valid have also been taken into account in our analysis.
By studying the partial decay widths and branching ratios of the spin-0 particles, we have found that the total decay width of the heavier scalar H can be dominated by the H → a Z channel, while the heavier pseudoscalar A generically decays with large probability through A → a h. In consequence, the production cross sections for mono-Z and mono-Higgs final states are resonantly enhanced and the obtained limits are competitive with mono-jet searches and even impose the dominant constraints for most of the parameter space at 40 fb^-1 of 13 TeV LHC data. This surprising result is a consequence of a consistent implementation of the scalar sector and is therefore not predicted by previously considered simplified models (such as the ATLAS/CMS Dark Matter Forum pseudoscalar model). Our findings underline the importance of a complementary approach to searches for dark matter at the LHC and are in qualitative agreement with the conclusions drawn in <cit.>.
We have furthermore emphasised that searches for associated production of dark matter with a tt̅ pair will profit from improved statistics, unlike the mono-jet search, for which the reach seems systematics limited. We have therefore extrapolated the corresponding constraints to a dataset of 300 fb^-1, where t t̅ + E_T, miss searches are expected to be more powerful than j + E_T, miss measurements for large parts of the parameter space.
The rich structure of the two Higgs doublet plus pseudoscalar models has been exemplified by an analysis of four different parameter scenarios. The specific benchmarks have been chosen to capture different aspects of the mono-X phenomenology that are of interest for future LHC searches. The results for all scenarios are presented in the form of M_a–tanβ planes, in which the parameter regions that are excluded at 95% confidence level by the various E_T, miss and non-E_T, miss searches have been indicated (see Figure <ref>). We found that the constraining power of mono-Z and mono-Higgs searches depends sensitively on the mass hierarchies between M_a and M_A or M_H, while the sensitivity to the other model parameters such as the amount of mixing in the CP-odd sector is less pronounced. It has also been shown that as a result of the interference of a and A contributions the bounds in the M_a–tanβ plane that result from the j + E_T, miss and t t̅ + E_T, miss channels strengthen above the threshold M_a ≃ M_A in model realisations with a light pseudoscalar A. In addition the reach of the 13 TeV LHC in the mono-Z and mono-Higgs channel has been explored (see Figure <ref>). While mono-Higgs searches are found not to be limited by systematic uncertainties even at the end of LHC Run II, in the case of the mono-Z measurements the systematic error can become a limiting factor. This feature makes the h + E_T, miss signal particularly interesting in the context of two Higgs doublet plus pseudoscalar extensions.
It has moreover been pointed out that constraints from di-top resonance searches and flavour observables provide further important handles to test the considered simplified dark matter models. Because the former signature allows one to look for neutral spin-0 states with masses above the t t̅ threshold, to which E_T, miss searches have only limited access if the dark matter mediators are top-philic, the development of more sophisticated strategies to search for heavy neutral Higgses in t t̅ events seems particularly timely. We have also highlighted the possibility to constrain benchmark scenarios featuring a light scalar H by forthcoming searches for heavy spin-0 states in the τ^+ τ^- final state, and finally illustrated the impact of bottom-quark initiated production in the case of h + E_T, miss (see Figure <ref>).
To conclude, we stress that meaningful bounds from LHC searches for dark matter can only be extracted if the underlying models are free from theoretical inconsistencies, such as non-unitary scattering amplitudes or couplings that implicitly violate gauge symmetries. Future ATLAS and CMS analyses of spin-0 mediator scenarios should therefore be based on consistent embeddings of the established ATLAS/CMS Dark Matter Forum simplified models. For any effort in this direction, a standalone UFO implementation of the dark matter models discussed in this article can be obtained from the authors on request.
We thank all participants of the fourth LHC Dark Matter Working Group public meeting, in particular Nicole Bell, Giorgio Busoni and Jose Miguel No, for interesting discussions. We are grateful to Stefan Liebler for pointing out the potential relevance of bottom-quark initiated production processes. UH acknowledges partial support by the ERC Consolidator Grant HICCUP (No. 614577) and thanks the CERN Theoretical Physics Department for hospitality.
Abdallah:2014hon
J. Abdallah et al.,
arXiv:1409.2893 [hep-ph].
Abdallah:2015ter
J. Abdallah et al.,
Phys. Dark Univ. 9-10, 8 (2015)
[arXiv:1506.03116 [hep-ph]].
Abercrombie:2015wmb
D. Abercrombie et al.,
arXiv:1507.00966 [hep-ex].
Chala:2015ama
M. Chala, F. Kahlhoefer, M. McCullough, G. Nardini and K. Schmidt-Hoberg,
JHEP 1507, 089 (2015)
[arXiv:1503.05916 [hep-ph]].
Bell:2015sza
N. F. Bell, Y. Cai, J. B. Dent, R. K. Leane and T. J. Weiler,
Phys. Rev. D 92, no. 5, 053008 (2015)
[arXiv:1503.07874 [hep-ph]].
Kahlhoefer:2015bea
F. Kahlhoefer, K. Schmidt-Hoberg, T. Schwetz and S. Vogl,
JHEP 1602, 016 (2016)
[arXiv:1510.02110 [hep-ph]].
Bell:2015rdw
N. F. Bell, Y. Cai and R. K. Leane,
JCAP 1601, no. 01, 051 (2016)
[arXiv:1512.00476 [hep-ph]].
Haisch:2016usn
U. Haisch, F. Kahlhoefer and T. M. P. Tait,
Phys. Lett. B 760, 207 (2016)
[arXiv:1603.01267 [hep-ph]].
Englert:2016joy
C. Englert, M. McCullough and M. Spannowsky,
Phys. Dark Univ. 14, 48 (2016)
[arXiv:1604.07975 [hep-ph]].
Duerr:2016tmh
M. Duerr, F. Kahlhoefer, K. Schmidt-Hoberg, T. Schwetz and S. Vogl,
JHEP 1609, 042 (2016)
[arXiv:1606.07609 [hep-ph]].
Bauer:2016gys
A. Albert et al.,
Phys. Dark Univ. 16, 49 (2017)
[arXiv:1607.06680 [hep-ex]].
Deshpande:1977rw
N. G. Deshpande and E. Ma,
Phys. Rev. D 18, 2574 (1978).
Ma:2006km
E. Ma,
Phys. Rev. D 73, 077301 (2006)
[hep-ph/0601225].
Barbieri:2006dq
R. Barbieri, L. J. Hall and V. S. Rychkov,
Phys. Rev. D 74, 015007 (2006)
[hep-ph/0603188].
LopezHonorez:2006gr
L. Lopez Honorez, E. Nezri, J. F. Oliver and M. H. G. Tytgat,
JCAP 0702 (2007) 028
[hep-ph/0612275].
ATLAS-CONF-2015-044
The ATLAS and CMS Collaborations,
https://cds.cern.ch/record/2052552/files/ATLAS-CONF-2015-044.pdfATLAS-CONF-2015-044.
partII
M. Bauer, U. Haisch and F. Kahlhoefer, in preparation.
Bell:2016ekl
N. F. Bell, G. Busoni and I. W. Sanderson,
JCAP 1703, no. 03, 015 (2017)
[arXiv:1612.03475 [hep-ph]].
Ipek:2014gua
S. Ipek, D. McKeen and A. E. Nelson,
Phys. Rev. D 90, no. 5, 055021 (2014)
[arXiv:1404.3716 [hep-ph]].
No:2015xqa
J. M. No,
Phys. Rev. D 93, no. 3, 031701 (2016)
[arXiv:1509.01110 [hep-ph]].
Goncalves:2016iyg
D. Goncalves, P. A. N. Machado and J. M. No,
Phys. Rev. D 95, no. 5, 055027 (2017)
[arXiv:1611.04593 [hep-ph]].
Aaboud:2016tnv
M. Aaboud et al. [ATLAS Collaboration],
Phys. Rev. D 94, no. 3, 032005 (2016)
[arXiv:1604.07773 [hep-ex]].
CMS:2016pod
The CMS Collaboration,
https://cds.cern.ch/record/2205746/files/EXO-16-037-pas.pdfCMS-PAS-EXO-16-037.
ATLAS:2016ljb
The ATLAS Collaboration,
https://cds.cern.ch/record/2206132/files/ATLAS-CONF-2016-050.pdfATLAS-CONF-2016-050.
CMS:2016mxc
The CMS Collaboration,
https://cds.cern.ch/record/2204933/files/EXO-16-005-pas.pdfCMS-PAS-EXO-16-005.
ATLAS-CONF-2016-086
The ATLAS Collaboration,
http://cds.cern.ch/record/2206279/files/ATLAS-CONF-2016-086.pdfATLAS-CONF-2016-086.
CMS:2016uxr
The CMS Collaboration,
http://cds.cern.ch/record/2138506/files/B2G-15-007-pas.pdfCMS-PAS-B2G-15-007.
ATLAS:2016bza
The ATLAS Collaboration,
https://cds.cern.ch/record/2206138/files/ATLAS-CONF-2016-056.pdfATLAS-CONF-2016-056.
CMS:2016hmx
The CMS Collaboration,
https://inspirehep.net/record/1479665/files/EXO-16-038-pas.pdfCMS-PAS-EXO-16-038.
Sirunyan:2017onm
A. M. Sirunyan et al. [CMS Collaboration],
JHEP 1703, 061 (2017)
[arXiv:1701.02042 [hep-ex]].
Aaboud:2016obm
M. Aaboud et al. [ATLAS Collaboration],
Phys. Lett. B 765, 11 (2017)
[arXiv:1609.04572 [hep-ex]].
CMS:2016mjh
The CMS Collaboration,
https://cds.cern.ch/record/2202804/files/EXO-16-012-pas.pdf CMS-PAS-EXO-16-012.
ATLAS-CONF-2016-011
The ATLAS Collaboration,
http://cds.cern.ch/record/2139812/files/ATLAS-CONF-2016-011.pdfATLAS-CONF-2016-011.
CMS:2016xok
The CMS Collaboration,
https://cds.cern.ch/record/2204916/files/EXO-16-011-pas.pdfCMS-PAS-EXO-16-011.
Aaboud:2016zkn
M. Aaboud et al. [ATLAS Collaboration],
Phys. Lett. B 762, 334 (2016)
[arXiv:1606.03977 [hep-ex]].
Khachatryan:2016jww
V. Khachatryan et al. [CMS Collaboration],
arXiv:1612.09274 [hep-ex].
Aad:2015pla
G. Aad et al. [ATLAS Collaboration],
JHEP 1511, 206 (2015)
[arXiv:1509.00672 [hep-ex]].
Khachatryan:2016whc
V. Khachatryan et al. [CMS Collaboration],
arXiv:1610.09218 [hep-ex].
Haisch:2015ioa
U. Haisch and E. Re,
JHEP 1506, 078 (2015)
[arXiv:1503.00691 [hep-ph]].
ATLAS:2016pyq
The ATLAS Collaboration,
http://cds.cern.ch/record/2206229/files/ATLAS-CONF-2016-073.pdfATLAS-CONF-2016-073.
HaischTeVPA16
U. Haisch,
https://indico.cern.ch/event/469963/contributions/2277614/attachments/1334210/2008314/TeVPA16_Uli.pdftalk “Dark matter at the LHC: Effective field theories, simplified models
& beyond” at TeVPA, 2016.
Gunion:1989we
J. F. Gunion, H. E. Haber, G. L. Kane and S. Dawson,
Front. Phys. 80, 1 (2000).
Branco:2011iw
G. C. Branco, P. M. Ferreira, L. Lavoura, M. N. Rebelo, M. Sher and J. P. Silva,
Phys. Rept. 516, 1 (2012)
[arXiv:1106.0034 [hep-ph]].
Glashow:1976nt
S. L. Glashow and S. Weinberg,
Phys. Rev. D 15, 1958 (1977).
Paschos:1976ay
E. A. Paschos,
Phys. Rev. D 15, 1966 (1977).
Haisch:2012kf
U. Haisch, F. Kahlhoefer and J. Unwin,
JHEP 1307, 125 (2013)
[arXiv:1208.4605 [hep-ph]].
ATLAS-CONF-2016-085
The ATLAS Collaboration,
http://cds.cern.ch/record/2206278/files/ATLAS-CONF-2016-085.pdfATLAS-CONF-2016-085.
CMS:2016rjp
The CMS Collaboration,
https://cds.cern.ch/record/2231507/files/HIG-16-037-pas.pdfCMS-PAS-HIG-16-037.
Bobeth:2001sq
C. Bobeth, T. Ewerth, F. Krüger and J. Urban,
Phys. Rev. D 64, 074014 (2001)
[hep-ph/0104284].
CMS:2014xfa
V. Khachatryan et al. [CMS and LHCb Collaborations],
Nature 522, 68 (2015)
[arXiv:1411.4413 [hep-ex]].
Anastasiou:2016hlm
C. Anastasiou, C. Duhr, F. Dulat, E. Furlan, T. Gehrmann, F. Herzog, A. Lazopoulos and B. Mistlberger,
JHEP 1609, 037 (2016)
[arXiv:1605.05761 [hep-ph]].
Dicus:1994bm
D. Dicus, A. Stange and S. Willenbrock,
Phys. Lett. B 333, 126 (1994)
[hep-ph/9404359].
Frederix:2007gi
R. Frederix and F. Maltoni,
JHEP 0901, 047 (2009)
[arXiv:0712.2355 [hep-ph]].
Djouadi:2015jea
A. Djouadi, L. Maiani, A. Polosa, J. Quevillon and V. Riquer,
JHEP 1506, 168 (2015)
[arXiv:1502.05653 [hep-ph]].
Craig:2015jba
N. Craig, F. D'Eramo, P. Draper, S. Thomas and H. Zhang,
JHEP 1506, 137 (2015)
[arXiv:1504.04630 [hep-ph]].
Denner:1991ie
A. Denner, R. J. Guth, W. Hollik and J. H. Kühn,
Z. Phys. C 51, 695 (1991).
Haisch:2007ia
U. Haisch and A. Weiler,
Phys. Rev. D 76, 074027 (2007)
[arXiv:0706.2054 [hep-ph]].
Freitas:2012sy
A. Freitas and Y. C. Huang,
JHEP 1208, 050 (2012)
Erratum: [JHEP 1305, 074 (2013)]
Erratum: [JHEP 1310, 044 (2013)]
[arXiv:1205.0299 [hep-ph]].
Hermann:2012fc
T. Hermann, M. Misiak and M. Steinhauser,
JHEP 1211, 036 (2012)
[arXiv:1208.2788 [hep-ph]].
Misiak:2015xwa
M. Misiak et al.,
Phys. Rev. Lett. 114, no. 22, 221801 (2015)
[arXiv:1503.01789 [hep-ph]].
Czakon:2015exa
M. Czakon, P. Fiedler, T. Huber, M. Misiak, T. Schutzmeier and M. Steinhauser,
JHEP 1504, 168 (2015)
[arXiv:1503.01791 [hep-ph]].
Abbott:1979dt
L. F. Abbott, P. Sikivie and M. B. Wise,
Phys. Rev. D 21, 1393 (1980).
Geng:1988bq
C. Q. Geng and J. N. Ng,
Phys. Rev. D 38, 2857 (1988)
Erratum: [Phys. Rev. D 41, 1715 (1990)].
Buras:1989ui
A. J. Buras, P. Krawczyk, M. E. Lautenbacher and C. Salazar,
Nucl. Phys. B 337, 284 (1990).
Eberhardt:2013uba
O. Eberhardt, U. Nierste and M. Wiebusch,
JHEP 1307, 118 (2013)
[arXiv:1305.1649 [hep-ph]].
Khachatryan:2015qxa
V. Khachatryan et al. [CMS Collaboration],
JHEP 1511, 018 (2015)
[arXiv:1508.07774 [hep-ex]].
ATLAS:2016qiq
The ATLAS Collaboration,
https://inspirehep.net/record/1480448/files/ATLAS-CONF-2016-089.pdfATLAS-CONF-2016-089.
Haber:1992py
H. E. Haber and A. Pomarol,
Phys. Lett. B 302, 435 (1993)
[hep-ph/9207267].
Pomarol:1993mu
A. Pomarol and R. Vega,
Nucl. Phys. B 413, 3 (1994)
[hep-ph/9305272].
Gerard:2007kn
J.-M. Gerard and M. Herquet,
Phys. Rev. Lett. 98, 251802 (2007)
[hep-ph/0703051].
Grzadkowski:2010dj
B. Grzadkowski, M. Maniatis and J. Wudka,
JHEP 1111, 030 (2011)
[arXiv:1011.5228 [hep-ph]].
Haber:2010bw
H. E. Haber and D. O'Neil,
Phys. Rev. D 83, 055017 (2011)
[arXiv:1011.6188 [hep-ph]].
Olive:2016xmw
C. Patrignani et al. [Particle Data Group Collaboration],
Chin. Phys. C 40, no. 10, 100001 (2016).
Gunion:2002zf
J. F. Gunion and H. E. Haber,
Phys. Rev. D 67, 075019 (2003)
[hep-ph/0207010].
Barroso:2013awa
A. Barroso, P. M. Ferreira, I. P. Ivanov and R. Santos,
JHEP 1306, 045 (2013)
[arXiv:1303.5098 [hep-ph]].
Kanemura:1993hm
S. Kanemura, T. Kubota and E. Takasugi,
Phys. Lett. B 313, 155 (1993)
[hep-ph/9303263].
Akeroyd:2000wc
A. G. Akeroyd, A. Arhrib and E. M. Naimi,
Phys. Lett. B 490, 119 (2000)
[hep-ph/0006035].
Ginzburg:2005dt
I. F. Ginzburg and I. P. Ivanov,
Phys. Rev. D 72, 115010 (2005)
[hep-ph/0508020].
Grinstein:2015rtl
B. Grinstein, C. W. Murphy and P. Uttayarat,
JHEP 1606, 070 (2016)
[arXiv:1512.04567 [hep-ph]].
Djouadi:1995gv
A. Djouadi, J. Kalinowski and P. M. Zerwas,
Z. Phys. C 70, 435 (1996)
[hep-ph/9511342].
Aad:2014aba
G. Aad et al. [ATLAS Collaboration],
Phys. Rev. D 90, no. 5, 052004 (2014)
[arXiv:1406.3827 [hep-ex]].
Khachatryan:2014jba
V. Khachatryan et al. [CMS Collaboration],
Eur. Phys. J. C 75, no. 5, 212 (2015)
[arXiv:1412.8662 [hep-ex]].
Fox:2012ru
P. J. Fox and C. Williams,
Phys. Rev. D 87, no. 5, 054030 (2013)
[arXiv:1211.6390 [hep-ph]].
Haisch:2013ata
U. Haisch, F. Kahlhoefer and E. Re,
JHEP 1312, 007 (2013)
[arXiv:1310.4491 [hep-ph]].
Haisch:2013fla
U. Haisch, A. Hibbs and E. Re,
Phys. Rev. D 89, 034009 (2014)
[arXiv:1311.7131 [hep-ph]].
Buckley:2014fba
M. R. Buckley, D. Feld and D. Goncalves,
Phys. Rev. D 91, 015017 (2015)
[arXiv:1410.6497 [hep-ph]].
Harris:2014hga
P. Harris, V. V. Khoze, M. Spannowsky and C. Williams,
Phys. Rev. D 91, 055009 (2015)
[arXiv:1411.0535 [hep-ph]].
Mattelaer:2015haa
O. Mattelaer and E. Vryonidou,
Eur. Phys. J. C 75, no. 9, 436 (2015)
[arXiv:1508.00564 [hep-ph]].
Arina:2016cqj
C. Arina et al.,
JHEP 1611, 111 (2016)
[arXiv:1605.09242 [hep-ph]].
Bishara:2016jga
F. Bishara, U. Haisch, P. F. Monni and E. Re,
Phys. Rev. Lett. 118, no. 12, 121801 (2017)
[arXiv:1606.09253 [hep-ph]].
Lin:2013sca
T. Lin, E. W. Kolb and L. T. Wang,
Phys. Rev. D 88, no. 6, 063510 (2013)
[arXiv:1303.6638 [hep-ph]].
Artoni:2013zba
G. Artoni, T. Lin, B. Penning, G. Sciolla and A. Venturini,
arXiv:1307.7834 [hep-ex].
Backovic:2015soa
M. Backovic, M. Krämer, F. Maltoni, A. Martini, K. Mawatari and M. Pellen,
Eur. Phys. J. C 75, no. 10, 482 (2015)
[arXiv:1508.05327 [hep-ph]].
Haisch:2016gry
U. Haisch, P. Pani and G. Polesello,
JHEP 1702, 131 (2017)
[arXiv:1611.09841 [hep-ph]].
Harlander:2013mla
R. V. Harlander, S. Liebler and T. Zirke,
JHEP 1402, 023 (2014)
[arXiv:1307.8122 [hep-ph]].
Degrande:2011ua
C. Degrande, C. Duhr, B. Fuks, D. Grellscheid, O. Mattelaer and T. Reiter,
Comput. Phys. Commun. 183, 1201 (2012)
[arXiv:1108.2040 [hep-ph]].
Alloul:2013bka
A. Alloul, N. D. Christensen, C. Degrande, C. Duhr and B. Fuks,
Comput. Phys. Commun. 185, 2250 (2014)
[arXiv:1310.1921 [hep-ph]].
Degrande:2014vpa
C. Degrande,
Comput. Phys. Commun. 197, 239 (2015)
[arXiv:1406.3030 [hep-ph]].
Alwall:2014hca
J. Alwall et al.,
JHEP 1407, 079 (2014)
[arXiv:1405.0301 [hep-ph]].
Sjostrand:2014zea
T. Sjöstrand et al.,
Comput. Phys. Commun. 191, 159 (2015)
[arXiv:1410.3012 [hep-ph]].
Ball:2012cx
R. D. Ball et al.,
Nucl. Phys. B 867, 244 (2013)
[arXiv:1207.1303 [hep-ph]].
Dercks:2016npn
D. Dercks, N. Desai, J. S. Kim, K. Rolbiecki, J. Tattersall and T. Weber,
arXiv:1611.09856 [hep-ph].
Cacciari:2011ma
M. Cacciari, G. P. Salam and G. Soyez,
Eur. Phys. J. C 72, 1896 (2012)
[arXiv:1111.6097 [hep-ph]].
deFavereau:2013fsa
J. de Favereau et al. [DELPHES 3 Collaboration],
JHEP 1402, 057 (2014)
[arXiv:1307.6346 [hep-ex]].
Conte:2012fm
E. Conte, B. Fuks and G. Serret,
Comput. Phys. Commun. 184, 222 (2013)
[arXiv:1206.1599 [hep-ph]].
Conte:2014zja
E. Conte, B. Dumont, B. Fuks and C. Wymant,
Eur. Phys. J. C 74, no. 10, 3103 (2014)
[arXiv:1405.3982 [hep-ph]].
Frederix:2012ps
R. Frederix and S. Frixione,
JHEP 1212, 061 (2012)
[arXiv:1209.6215 [hep-ph]].
Artoisenet:2012st
P. Artoisenet, R. Frederix, O. Mattelaer and R. Rietkerk,
JHEP 1303, 015 (2013)
[arXiv:1212.3460 [hep-ph]].
Kim:2008pp
Y. G. Kim, K. Y. Lee and S. Shin,
JHEP 0805, 100 (2008)
[arXiv:0803.2932 [hep-ph]].
Baek:2011aa
S. Baek, P. Ko and W. I. Park,
JHEP 1202, 047 (2012)
[arXiv:1112.1847 [hep-ph]].
LopezHonorez:2012kv
L. Lopez-Honorez, T. Schwetz and J. Zupan,
Phys. Lett. B 716, 179 (2012)
[arXiv:1203.2064 [hep-ph]].
Ko:2016ybp
P. Ko and J. Li,
Phys. Lett. B 765, 53 (2017)
[arXiv:1610.03997 [hep-ph]].
Baek:2017vzd
S. Baek, P. Ko and J. Li,
Phys. Rev. D 95, no. 7, 075011 (2017)
[arXiv:1701.04131 [hep-ph]].
|
http://arxiv.org/abs/1701.07638v2 | 20170126101707 | The impact of stochastic lead times on the bullwhip effect under correlated demand and moving average forecasts | [
"Zbigniew Michna",
"Stephen M. Disney",
"Peter Nielsen"
] | stat.AP | [
"stat.AP"
] |
The impact of stochastic lead times on the bullwhip effect under correlated demand and moving average forecasts
Zbigniew Michna, Stephen M. Disney and Peter Nielsen
December 30, 2023
==================================================
We quantify the bullwhip effect (which measures how the variance of replenishment orders is amplified as the orders move up the supply chain) when random demands and random lead times are estimated using the industrially popular moving average forecasting method. We assume that the lead times constitute a sequence of independent identically distributed random variables and correlated demands are described by a first-order autoregressive process.
We obtain an expression that reveals the impact of demand and lead time forecasting on the bullwhip effect. We draw a number of conclusions on the bullwhip behaviour with respect to the demand auto-correlation and the number of past lead times and demands used in the forecasts. Furthermore, we find the maxima and minima in the bullwhip measure as a function of the demand auto-correlation.
Keywords: supply chain, bullwhip effect,
order-up-to replenishment policy, AR(1) demand, stochastic lead time, moving average forecasting method.
§ INTRODUCTION
The variability of replenishment orders often increases as they flow upstream in supply chains. This phenomenon is known as the bullwhip effect and has been discussed in the economics and operations management literature for 100 and 50 years, respectively – see Mitchell <cit.> and Forrester <cit.>. The celebrated works of Lee et al., <cit.> and <cit.> promoted this problem to the forefront of the supply chain and operations management field. Wang and Disney <cit.> provide a recent literature review of the bullwhip field, categorising contributions according to the five causes of bullwhip of Lee et al.: demand forecasting, non-zero lead time, supply shortage, order batching and price fluctuation. Of particular importance to this paper are the results of Chen et al., <cit.>, <cit.> and Dejonckheere et al., <cit.>. These contributions investigate the bullwhip consequences of using the moving average forecasting method inside the order-up-to (OUT) replenishment policy.
Recently Michna and Nielsen <cit.> identified another critical cause of the bullwhip – the forecasting of lead times. While the issue of stochastic lead times in bullwhip studies has not been intensively investigated, Michna and Nielsen <cit.> and Michna et al., <cit.> provide a recent literature review of this problem. Of particular importance is the work of Duc et al., <cit.> and Kim et al., <cit.> where the impact of stochastic lead times on bullwhip is quantified. These works characterise the impact of random lead times on the bullwhip effect via mean values and variances. However, they do not consider the consequences of having to estimate the lead time distribution (a.k.a. lead time forecasting). As identified by Michna and Nielsen <cit.> and Michna et al., <cit.> this can be a significant cause of the bullwhip effect. In Duc et al., <cit.> lead times are assumed to be stochastic and drawn from a known distribution and thus are not forecasted when placing an order. Kim et al., <cit.> used the moving average technique to forecast lead time demand, as did Michna et al., <cit.>.
The influence of stochastic lead time on inventory is a more established field and we refer to the work of Bagchi et al., <cit.>, Chaharsooghi et al., <cit.>, Song <cit.> and <cit.>, and Zipkin <cit.>. Stochastic lead time inventory research can be classified into two general streams: those with order crossovers and those without crossovers. An order crossover happens when replenishments are received in a different sequence from which the orders were placed (see e.g. Bischak et al., <cit.>, Bradley and Robinson <cit.>, Disney et al., <cit.> and Wang and Disney <cit.>). Disney et al., <cit.> consider the safety stock and inventory cost consequences of using the OUT and proportional order-up-to (POUT) replenishment policies under i.i.d. demand. They show that the POUT policy is always more economical than the OUT policy when order-crossover is present. Wang and Disney <cit.> show that the POUT policy outperforms the OUT policy in the presence of order crossovers in the sense of minimizing inventory variance when demand is an Auto-Regressive, Moving Average process with p auto-regressive terms and q moving average terms, ARMA(p,q).
The papers of Boute et al. <cit.>, <cit.>, <cit.> investigate endogenous lead times in supply chains. Endogenous lead times are dependent on the state of the system as they are a function of the previous orders. Here the supplier is modelled as a queue and orders are processed on a first come, first served basis, hence there is no order-crossover. However, as the sojourn time in the queue increases in the variance of the demand placed on the manufacturer, a lead time reduction can be obtained by smoothing the replenishment orders. This lead time reduction can potentially reduce safety stock requirements. Hum and Parlar <cit.> also model lead times using queueing theory, analyzing the proportion of demand that can be met within a specific lead time.
We have observed that stochastic lead times and order-crossovers are quite common within factories (see Fig. <ref>). The data represents a single, high volume, product from a supplier of industrial measuring and testing equipment. The distribution of the lead times is discrete and aggregated into weekly buckets to reflect the actual practice of creating weekly production plans using the OUT policy (for more information on why this is so, we refer to the assumptions and modelling choices discussed later in this section). Fig. <ref> also highlights the number of queue positions each production batch gained or lost between the two lists of date sorted production releases and production completions. As this manufacturer manually moved totes of products between process steps within its job shop, a large number of order-crossovers is present. Disney et al., <cit.> present similar findings in global supply chains (see Figs. <ref> and <ref> of <cit.>), where stochastic lead times and order crossovers could be observed in global shipping lanes. Here containers could also gain or lose positions in the date ordered list of dispatches and receipts. We also observe differences in quoted (at the time of shipping) and actual (realised when the container arrives) lead times in global shipping lanes (see Fig. <ref>).
We consider a model where a supply chain member (who could be a retailer, manufacturer, or supplier for example, but we call a manufacturer for convenience) observes both random demands from his customer and random lead times from his supplier which we assume to be exogenous (that is, they are independent of all other system states). The manufacturer generates replenishment orders to maintain inventory levels by projecting his customers' future demands over his supplier's lead time, accounting for both the available inventory and the open orders in the replenishment pipeline.
This research differs from previous research in several ways. Most importantly we show that lead time forecasting is a major cause of bullwhip when demands are auto-correlated. This confirms and extends the results of Michna and Nielsen <cit.>. We also quantify the impact of the stochastic auto-correlated demands and stochastic lead times on the bullwhip effect under the assumption that demands and lead times are forecasted separately using moving averages. Furthermore, we investigate the bullwhip effect as a function of the demand auto-correlation, the characteristics of the lead time distribution and the number of past demands and the delay parameter in the moving average lead time forecasts. The bullwhip conclusions differ depending on how the parameters are combined. We find maxima and minima in the bullwhip metric as a function of the demand auto-correlation.
Moreover our main result contains, as special cases, the bullwhip formulas of Chen et al., <cit.> (a constant lead time) and Th. 1 in Michna and Nielsen <cit.> (mutually independent demands). The formulation presented in this research involves more parameters, is more general, and allows us to understand more intricate supply chain settings.
Our major assumptions and modelling choices are summarised as follows:
a) The supply chain consists of two stages – a manufacturer who receives client's demands and deliveries from a supplier (or manufacturing process).
b) A periodic replenishment system exists where the demands, D_t, are satisfied and previous orders placed are received during a time period, indexed by the subscript t. At the end of the period, the inventory level, demand and lead times of received orders are observed and a new replenishment order, q_t, is placed. The length of the period could be an hour, day, week or month, but in our experience it is often a week in manufacturing contexts. Note that the receipt of an order is observed only at the end of the period and the lead time is a non-negative integer. An order with zero lead time would be received instantaneously after the order was placed, but its receipt would only be incorporated into the order made at the end of the next period due to the sequence of events delay.
c) The demand constitutes an autoregressive model of order one, AR(1). We have elected to use the AR(1) model as it is the simplest demand process with autocorrelation, a feature commonly observed in real demand patterns, Lee et al., <cit.>. It is also a frequently adopted assumption in the bullwhip literature (e.g. in Chen et al. <cit.> and <cit.>, Duc et al., <cit.> and Lee et al., <cit.>), allowing comparison of our new results to established theory.
d) The lead times L_t ∈ℕ_0 constitute a sequence of independent identically distributed (iid) random variables which are independent of all system states, including the manufacturer's demand. Moreover we assume that lead times are bounded (e.g. L_t ≤ L^+ periods) and that the lead time forecasts are based on lead time information that is at least L^+ periods old. This allows us to create lead time forecasts that are unbiased. For example, if we based our lead time forecasts on the most recent lead time information (which we observe when we receive orders), some of the orders placed would still be open (not yet received) and our lead time estimates would only be based on those orders with short lead times. Basing our lead time estimates on data that is at least L^+ periods old is possible as lead times are assumed to be temporally independent and thus constitute a valid dataset for forecasting all future lead times. Practically this approach has the desirable characteristic that we can base our lead time estimates on realised lead times, rather than quoted lead times from the supplier or shipper, see Fig. <ref>. Furthermore, for ease of data organisation (and modelling) we can retrospectively assign the lead time of an order to the period the order was generated in our database (simulation).
e) The OUT policy is used to generate the orders placed onto the supplier. The OUT policy is industrially popular as it is commonly available native in many ERP/MRP systems. It has also been studied extensively in the academic literature (see e.g. Bischak et al. <cit.>, Chen et al. <cit.> and <cit.>, Dejonckheere et al. <cit.> and <cit.>, Duc et al. <cit.> and Kim et al. <cit.>). The OUT policy is also the optimal linear replenishment policy for minimizing inventory holding and backlog costs if orders do not cross (see Kaplan <cit.> and Wang and Disney <cit.>).
f) The manufacturer predicts the future demands over future lead times based on predictions generated using the moving average forecasts of past demand and observations of the lead times of previously received orders. Thus, the forecast of lead time demand is as follows:
D̂_t^L=∑_{i=0}^{L̂_t-1}D̂_{t+i} ,
where L̂_t is the forecast of the lead time of the next order made at the beginning of period t and
D̂_{t+i} denotes the forecast of the demand for period t+i made at the beginning of period t.
As in Michna and Nielsen <cit.>, the novel aspect of our approach is the last point f), which differs from much of the previous literature. For example, Duc et al., <cit.> assume the lead time of the order placed at time t is known when placing the order, leading to
D̂_t^L=∑_{i=0}^{L_t-1}D̂_{t+i} .
However, we assume the manufacturer would not know the value of L_t until that order has been completed (arrived, received).
In Kim et al., <cit.> the lead time demand is predicted with
D̂_t^L=(1/n)∑_{i=1}^n D_{t-i}^L ,
where D_{t-i}^L is the past known (realized) lead time demand.
A different approach was taken by Bradley and Robinson <cit.> and Disney et al., <cit.> where it is assumed beforehand that the lead time distribution is known. That is, the lead time distribution can be observed from previous realisations of the lead time.
In our approach we show that the bullwhip effect measure contains new components depending on the lead time forecasting parameter, and the correlation coefficient between demands. This was not quantified in Michna and Nielsen <cit.>, neither was it included in the study of ARMA(p,q) demand in Wang and Disney <cit.>. These new terms amplify the value of the bullwhip measure and are evidence that lead time estimation in itself is a significant cause of the bullwhip effect, perhaps equally as important as demand forecasting.
§ SUPPLY CHAIN MODEL
We want to consider temporally dependent demands and the simplest way to achieve this is to model a manufacturer observing periodic customer demands, D_t, constituting a stationary first-order autoregressive, AR(1), process,
D_t=μ_D+ρ(D_{t-1}-μ_D)+ϵ_t ,
where |ρ|<1 and {ϵ_t}_{t=-∞}^∞ is a sequence of independent
identically distributed random variables such that E(ϵ_t)=0 and Var(ϵ_t)=σ^2.
Under the stationarity assumption it can easily be found that E(D_t)=μ_D, Var(D_t)=σ^2_D=σ^2/(1-ρ^2) and Corr(D_t, D_{t-k})=ρ^k (see for example, Chen et al., <cit.> and Duc et al., <cit.>). The distribution of D can be arbitrary but its second moment must be finite.
A random lead time L_t is assigned to each order at the beginning of time t. It is observed and used to predict future lead times when the order is received. The random lead times {L_t}_{t=-∞}^∞ are mutually iid random variables, which is also assumed in Duc et al., <cit.>, Kim et al., <cit.>, Robinson et al., <cit.> and Disney et al., <cit.>. The expected value of the discrete lead times is E(L_t)=∑_{i=0}^{L^+} i p_i=μ_L, where p_i is the probability that the lead time is i periods long, and the variance is Var(L_t)=∑_{i=0}^{L^+} p_i (i-μ_L)^2=σ_L^2.
We do not impose any assumptions on the distribution of L other than that its second moment is finite and L is non-negative. The sequences {D_t}_{t=-∞}^∞ and {L_t}_{t=-∞}^∞ are mutually independent.
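For concreteness, such a demand stream can be generated as follows (a minimal Python sketch; the Gaussian innovations and the default parameter values are illustrative assumptions, since the model only requires finite second moments):

import numpy as np

def simulate_ar1_demand(T, mu_D=20.0, sigma_D=4.0, rho=0.5, seed=0):
    # Stationary AR(1): D_t = mu_D + rho*(D_{t-1} - mu_D) + eps_t, with
    # Var(eps_t) = sigma^2 = sigma_D^2*(1 - rho^2) so that Var(D_t) = sigma_D^2.
    rng = np.random.default_rng(seed)
    sigma = sigma_D * np.sqrt(1.0 - rho**2)
    D = np.empty(T)
    D[0] = mu_D + sigma_D * rng.standard_normal()   # start in stationarity
    for t in range(1, T):
        D[t] = mu_D + rho * (D[t-1] - mu_D) + sigma * rng.standard_normal()
    return D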
The lead time demand at the beginning of period t is defined as follows:
D_t^L=D_t+D_{t+1}+…+D_{t+L_t-1}=
∑_{i=0}^{L_t-1}D_{t+i} ,
which reflects the demand over the lead time.
At the beginning of period t the manufacturer does not know the value of L_t, so he must forecast it before calculating his replenishment order (see (<ref>)).
Let us notice that there is a dependency between D̂_t^L and L̂_t due to (<ref>); that is, the lead time demand forecast is a function of past lead times. Employing the moving average forecast method with the delay parameter n≥ 1 for demand forecasting we get
D̂_{t+j}=(1/n)∑_{i=1}^n D_{t-i} ,
where j=0,1,… and D_{t-i}, i=1,2,…, n, are previous demands which have been observed at the
beginning of period t. Here we use a simple moving average method. Thus the j-period-ahead forecast of demand is a moving average of previous demands. Note that all future forecasts, regardless of j, are straight-line predictions of the current forecast. Clearly this is not an optimal, minimum mean squared error, forecast for AR(1) demand. However, it does reflect common industrial practice as the moving average forecast is available in many commercial ERP systems and can be readily incorporated into spreadsheets by analysts. It has also been studied from a theoretical basis (see Chen et al., <cit.>, Dejonckheere et al., <cit.>, Kim and Ryan <cit.>, Chatfield et al., <cit.> and Chatfield and Hayya <cit.>).
The manufacturer also predicts the lead time, but here he has to be careful because the previous orders cannot be completely observed. Precisely, using the moving average forecast method with m≥ 1 for lead time forecasting we obtain
L̂_t=(1/m)∑_{i=1}^m L_{t-i-L^+} ,
where L_{t-i-L^+}, i=1,2,…, m, are lead times which are guaranteed to have been observed by the manufacturer at the beginning of period t-i (or earlier) as they are at least L^+ periods old (see item d) of our discussion of assumptions in Section 1). Knowing the average lead time (in practice estimating it) we are able to find the average unrealized orders (see Robinson et al., <cit.> and Disney et al., <cit.>).
Moreover, our procedure of collecting lead times avoids the bias resulting from open orders with long lead times that may not have been received when we make the lead time forecast.
Thus by (<ref>), (<ref>) and (<ref>) we propose the following forecast for the lead time demand (see also Michna and Nielsen <cit.>):
D̂_t^L=L̂_tD̂_t=
(1/(mn))∑_{i=1}^n D_{t-i}∑_{i=1}^m L_{t-i-L^+} .
It is easy to notice that (<ref>) is a slight modification of (<ref>) when demands and lead times are predicted using the moving average method.
The motivation for the lead time demand forecast given in (<ref>) is also the fact
that E(D_t^L)=E(L)E(D) (see (<ref>)) under the assumption that demands and lead times are mutually independent; employing the natural estimators of E(L) and E(D) we arrive at (<ref>).
Eq. (<ref>) has previously been used by Chatfield et al., <cit.>
in a simulation study that highlighted the relationship between lead time forecasting and the bullwhip effect.
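The three forecasts above translate directly into code. The sketch below (Python; the helper names are ours, not from the cited papers) makes explicit that the lead time forecast at time t uses only lead times that are at least L^+ periods old and hence guaranteed to have been realized:

def ma_demand_forecast(D, t, n):
    # hat D_{t+j} = (1/n)*(D_{t-1} + ... + D_{t-n}) for every j >= 0;
    # D is an array of demands indexed by period.
    return D[t-n:t].mean()

def ma_lead_time_forecast(L, t, m, L_plus):
    # hat L_t = (1/m)*(L_{t-1-L^+} + ... + L_{t-m-L^+}); every lead time
    # used is at least L^+ periods old, so it has been fully observed.
    return L[t-m-L_plus:t-L_plus].mean()

def ma_lead_time_demand_forecast(D, L, t, n, m, L_plus):
    # hat D_t^L = hat L_t * hat D_t
    return ma_lead_time_forecast(L, t, m, L_plus) * ma_demand_forecast(D, t, n)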
We assume that the manufacturer uses the OUT policy. Let S_t be the desired inventory position at the beginning of period t,
S_t=D̂_t^L+TNS ,
where TNS is a constant, time-invariant, target net stock (safety stock), set to achieve a desired level of availability or to minimize a set of unit inventory holding (h) and unit backlog (b) costs via the newsvendor principle, Silver et al., <cit.>. It is often assumed, in constant lead time scenarios, that the demand and the inventory levels are normally distributed and thus
TNS=zσ_t, z=Φ^{-1}(b/(b+h))
holds, where Φ^{-1}(·) is the inverse cumulative distribution function (cdf) of the standard normal distribution and
σ_t^2=Var(D_t^L-D̂_t^L)
is the variance of the forecast error for the lead time demand. In some articles (for example Chen et al., <cit.>) σ_t^2 is defined more practically: instead of the variance, one takes the sample variance of D_t^L-D̂_t^L. This complicates the theoretical calculations somewhat, and the estimation of σ_t^2 increases the bullwhip effect, which can be deduced from the fact that Chen et al.'s <cit.> formula is a lower bound for the bullwhip measure whereas we get an equality.
Note, however, that in our setting, even when demand is normally distributed, neither the inventory levels nor the orders are normally distributed. Rather, the stochastic lead times create a multi-modal inventory distribution (as they did in Disney et al., <cit.>) and the lead time forecasting mechanism creates a multi-modal order distribution (which was not present in the setting considered by Disney et al., <cit.>, as the lead time distribution there was assumed to be known beforehand). Thus in our case the TNS must be set with
TNS=F^{-1}(b/(b+h)) ,
where F^{-1}(·) is the inverse cdf of the inventory levels (arbitrary distribution).
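Since no closed form for F is available here, TNS can in practice be set from an empirical quantile of simulated (or historical) net stock levels; a minimal sketch under that assumption:

import numpy as np

def target_net_stock(net_stock_samples, b, h):
    # TNS = F^{-1}(b/(b+h)): the newsvendor fractile of the (possibly
    # multi-modal) empirical inventory distribution observed with TNS = 0.
    return np.quantile(net_stock_samples, b / (b + h))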
Thus the order quantity q_t placed
at the beginning of period t by the OUT policy is
q_t=S_t-S_{t-1}+D_{t-1} .
Note that by (<ref>), (<ref>) and (<ref>) the quantity of the order placed by the manufacturer to the supplier depends upon the supplier's lead time.
Our main purpose is to find Var(q_t) and then to calculate the following bullwhip ratio:
BM=Var(q_t)/Var(D_t) .
This is one of the typical supply chain performance measurements (see e.g. Towill et al., <cit.>).
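Before turning to the closed form, BM can be estimated by direct Monte Carlo simulation of the above ordering rule, reusing the sketches given earlier (the uniform integer lead time distribution on {0,…,L^+} is an illustrative assumption):

import numpy as np

def empirical_BM(T=200_000, n=5, m=2, rho=0.5, L_plus=20, seed=1):
    rng = np.random.default_rng(seed)
    D = simulate_ar1_demand(T, rho=rho, seed=seed)
    L = rng.integers(0, L_plus + 1, size=T)    # iid lead times in periods
    t0 = n + m + L_plus + 1                    # warm-up so all forecasts exist
    q = np.empty(T - t0)
    for t in range(t0, T):
        # q_t = hat D_t^L - hat D_{t-1}^L + D_{t-1}
        q[t - t0] = (ma_lead_time_demand_forecast(D, L, t, n, m, L_plus)
                     - ma_lead_time_demand_forecast(D, L, t-1, n, m, L_plus)
                     + D[t-1])
    return q.var() / D[t0:].var()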
Lemma 1. The variance of the forecast error for the lead time demand does not depend on t; that is, σ_t^2 is constant in t.
Proof. The variance of the forecast error is the expected value of a function of
D_{t-n},…, D_{t-1}, D_t, D_{t+1},… and L_{t-m-L^+}, L_{t-m+1-L^+},…, L_{t-1-L^+}, L_t whose distribution is independent of t. The stationarity of the sequences {D_t}_{t=-∞}^∞ and {L_t}_{t=-∞}^∞ and their mutual independence yield the assertion.
Since the variance of the forecast error for the lead time demand is independent of t, we get from (<ref>) and (<ref>) that
q_t=D̂_t^L-D̂_{t-1}^L+D_{t-1} ,
allowing us to derive the exact bullwhip expression.
Theorem 1. The measure of the bullwhip effect has the following form:
BM = Var(q_t)/Var(D_t)
=(2σ_L^2/(n^2m^2))(m(1-ρ^n)+n(1+ρ)/(1-ρ)-(1+ρ^2)(1-ρ^n)/(1-ρ)^2)+2σ^2_Lμ^2_D/(σ^2_D m^2)+(2μ_L^2/n^2+2μ_L/n)(1-ρ^n)+1 .
The proof of Theorem 1 is given in Appendix 1.
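The closed form is straightforward to implement and to check against a Monte Carlo estimate such as the sketch above (the term grouping mirrors the theorem and requires |ρ|<1):

def BM_theorem(n, m, rho, mu_D, sigma_D, mu_L, sigma_L):
    s1 = (2*sigma_L**2 / (n**2 * m**2)) * (
        m*(1 - rho**n)
        + n*(1 + rho)/(1 - rho)
        - (1 + rho**2)*(1 - rho**n)/(1 - rho)**2)
    s2 = 2*sigma_L**2*mu_D**2 / (sigma_D**2 * m**2)
    s3 = (2*mu_L**2/n**2 + 2*mu_L/n)*(1 - rho**n)
    return s1 + s2 + s3 + 1.0

For instance, with the illustrative uniform lead times used earlier (μ_L=10, σ_L^2=110/3), BM_theorem(5, 2, 0.5, 20.0, 4.0, 10.0, (110/3)**0.5) should agree with empirical_BM(n=5, m=2, rho=0.5) up to sampling error, and setting σ_L=0 recovers the constant lead time formula given below.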
Remarks on Theorem 1
The first summand in (<ref>) describes the impact of
lead time variability, demand and lead time forecasting, and the demand correlation. The second summand shows the impact of lead time forecasting, the demand mean and variance, and the lead time variance on the bullwhip effect. The first two summands are not present in the constant lead time case. The third term gives the amplification of the variance by demand forecasting, the demand correlation and the mean lead time.
If lead times are deterministic, that is L_t=L=const., then the bullwhip effect is described by
BM_{L=const}=
(2L^2/n^2+2L/n)(1-ρ^n)+1 ,
which coincides with Eq. 5 in Chen et al., <cit.>. Note that Duc et al., <cit.> also obtained the result of Chen et al., <cit.> in a special case and
as an exact value (not a lower bound). Chen et al., <cit.> obtain this expression as a lower bound because they define the error σ_t (see (<ref>)) as the sample variance of D_t^L-D̂_t^L, indicating that the estimation of the variance of D_t^L-D̂_t^L amplifies the bullwhip effect.
The following limits exist:
lim_{n→∞} BM =1+2μ_D^2σ_L^2/(m^2σ_D^2) ,
lim_{m→∞} BM =1+(1-ρ^n)(2μ_L^2/n^2+2μ_L/n) ,
lim_{n,m→∞} BM =1 .
It is easy to see from (<ref>) that bullwhip is strictly decreasing in m, but this is not true for n as there is an odd-even effect in n for negative ρ. When n=1, BM is a linear function of ρ:
BM_{n=1}=(2ρσ_D^4(σ_L^2-m(mμ_L(μ_L+1)+σ_L^2))+2μ_D^2σ_D^2σ_L^2+mσ_D^4(2mμ_L(μ_L+1)+2σ_L^2+m))/(m^2σ_D^4) ,
which always has a negative gradient in ρ (unless μ_L=0 and m=1, in which case the gradient is zero).
For i.i.d. demand the following bullwhip measure exists:
BM_iid =1+2μ_L^2/n^2+2μ_L/n+2μ_D^2σ_L^2/(m^2σ_D^2)-2σ_L^2/(m^2n^2)+2σ_L^2/(m^2n)+2σ_L^2/(mn^2)
=1+2μ_D^2σ_L^2/(m^2σ_D^2)+2σ_L^2(m+n-1)/(m^2n^2)+2μ_L(μ_L+n)/n^2 ,
which is strictly decreasing in n and m; this result is consistent with Michna and Nielsen <cit.>. The derivative of the bullwhip measure in (<ref>) at ρ=0 is
dBM/dρ|_{ρ=0}=4(n-1)σ_L^2/(m^2n^2) ,
which is always positive when n>1.
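This derivative claim can be checked symbolically for concrete values of n (a SymPy sketch; we substitute integer n because leaving n symbolic produces unevaluated powers of zero):

import sympy as sp

rho = sp.Symbol('rho')
m, muD, sD, muL, sL = sp.symbols('m mu_D sigma_D mu_L sigma_L', positive=True)
for n in range(2, 7):
    BM = ((2*sL**2/(n**2*m**2)) * (m*(1 - rho**n) + n*(1 + rho)/(1 - rho)
          - (1 + rho**2)*(1 - rho**n)/(1 - rho)**2)
          + 2*sL**2*muD**2/(sD**2*m**2)
          + (2*muL**2/n**2 + 2*muL/n)*(1 - rho**n) + 1)
    # expect 4*(n-1)*sigma_L**2/(m**2*n**2) for each n
    print(n, sp.simplify(sp.diff(BM, rho).subs(rho, 0)))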
As ρ→1, the following expression defines the bullwhip measure:
BM_{ρ→1}=1+2σ_L^2(μ_D^2+σ_D^2)/(m^2σ_D^2) ,
which is independent of n and decreasing in m.
Notice BM_iid≥ BM_{ρ→1} if
n≤(σ_L^2+m^2μ_L+√((σ_L^2+m^2μ_L)^2+4σ_L^2(σ_L^2(m-1)+m^2μ_L^2)))/(2σ_L^2)
holds.
Eq. (<ref>) together with (<ref>) provides a sufficient (but not necessary) condition for the presence of at least one stationary point in the region 0<ρ<1 if n≥ 2 (see Fig. <ref>). Notice that if the lead time is a constant then BM_iid>BM_ρ→ 1.
If ρ→-1 then
BM_{ρ→-1}=1+2μ_D^2σ_L^2/(m^2σ_D^2)-(2m-1)((-1)^n-1)σ_L^2/(m^2n^2)-2((-1)^n-1)μ_L(μ_L+n)/n^2 ,
which is decreasing in m, but the odd-even impact of n can be clearly seen. When n is even then
BM_{ρ→-1, even n}=1+2μ_D^2σ_L^2/(m^2σ_D^2) .
Numerical investigations (see Figs. <ref>, <ref>, <ref>, and <ref>) seem to suggest that there are no stationary points in the region -1<ρ<0 when n is even, but we remain unable to prove so. However this is congruent with our previous results that BM_iid>BM_ρ→ -1, even n and dBM/dρ|_ρ = 0>0.
When n is odd then
BM_{ρ→-1, odd n}=1+2μ_D^2σ_L^2/(m^2σ_D^2)+2(2m-1)σ_L^2/(m^2n^2)+4μ_L(μ_L+n)/n^2 .
Finally, BM_iid≤ BM_{ρ→-1, odd n} if
m≥(σ_L√(σ_L^2+4nμ_L(μ_L+n))-σ_L^2)/(2μ_L(μ_L+n)) .
When (<ref>) holds and n>1 there must be at least one stationary point in the interval -1<ρ<0 because of the positive derivative at ρ=0, see (<ref>). Note this is again a sufficient, but not a necessary, condition. Extensive numerical investigations (see Figs. <ref> and <ref>) suggest that only one stationary point exists in this region, though we cannot prove it. Moreover,
for large m the derivative at ρ=0 is almost zero, see (<ref>) and Figs. <ref> to <ref>.
§ NUMERICAL INVESTIGATIONS
Let us further investigate the influence of the demand correlation on the bullwhip effect by analyzing some concrete numerical examples. We plot the bullwhip effect measure as a function of the demand correlation parameter ρ. Fixing
μ_D=20, σ_D=4, μ_L=10, σ_L=5, we depict the bullwhip measure in four different scenarios: when n and m are both small; when one of them
is small and the other is large; and when both are large. If n is small, we need to distinguish
two further cases, namely whether n is even or odd.
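These scenario curves are easy to reproduce from the closed form (a sketch using the BM_theorem function given earlier; the grid scan also locates the interior extrema discussed below):

import numpy as np

rhos = np.linspace(-0.99, 0.99, 199)
for n, m in [(5, 2), (6, 2), (21, 2), (22, 2), (5, 20), (6, 20)]:
    bm = np.array([BM_theorem(n, m, r, 20.0, 4.0, 10.0, 5.0) for r in rhos])
    print(f"n={n:2d}, m={m:2d}: "
          f"min BM={bm.min():9.2f} at rho={rhos[bm.argmin()]:+.2f}, "
          f"max BM={bm.max():9.2f} at rho={rhos[bm.argmax()]:+.2f}")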
Thus if n=5 and m=2 (small and odd n) the bullwhip measure has a minimum at ρ≈ -0.5 and ρ= 1 and a maximum at ρ= -1 and
ρ≈ 0.7 (see Fig. <ref>). The bullwhip measure behavior for ρ= -1 and ρ= 1 can be predicted by taking the limit as the AR(1) model is well-defined when -1<ρ<1. In Duc et al., <cit.> the minimal value of BM is attained for ρ near -0.6 or -0.7 and the maximal value of BM is for ρ around
0.6 or 1. Their results are very close to ours if n and m are small and n is odd (see Fig. <ref>) because their model does not predict the lead time, corresponding to the small values of n and m in our model.
For n=6 and m=2 (small and even n) we observe a different behavior. Specifically, the smallest value of the bullwhip effect is attained for ρ= -1 and the largest for ρ≈ 0.75 (see Fig. <ref>). This concurs with Kahn <cit.>, who showed that positively correlated demands result in the bullwhip effect. We also notice that the bullwhip effect is very large if n and m are small.
The situation changes if n is large and m is small (see Figs. <ref> and <ref>). Then the bullwhip measure is almost an increasing function of the demand correlation
except for odd n and ρ close to -1 where we observe a minimum. Moreover bullwhip
increases quite slowly in the region of -0.8<ρ<0.5. The odd-even
effects in n are now much less noticeable. These observations are consistent with our theoretical analysis in the previous section.
As m, the number of periods used in the moving average forecast of the lead time increases, the bullwhip effect becomes independent of the demand correlation ρ regardless
of the number of periods used in the moving average forecast of demand, n. That is, bullwhip remains almost constant except near ρ={-1,1}. Figs. <ref> to <ref> confirm this independence for the cases when n=5, n=6, n=21, n=22 and m=20. This is caused by the first summand of (<ref>) which vanishes as m→∞ or n→∞ and the third summand is rather insensitive to ρ.
For ρ close to -1 or 1 the bullwhip effect can dramatically increase or decrease. Moreover much less bullwhip is generated with large values of n and m. Reduced bullwhip with large m is congruent with the results of Disney et al., <cit.> and Wang and Disney <cit.>.
§ CONCLUSIONS AND FURTHER RESEARCH OPPORTUNITIES
We quantified the bullwhip effect when demands and lead times must be forecasted. Demand and lead time forecasting are necessary when placing an order if demands and lead times are stochastic. We have confirmed, extended and sharpened the conclusion of Michna and Nielsen <cit.> that lead time forecasting is a major cause of the bullwhip effect. We assumed that demands constitute a first order autoregressive process and we obtained quantitative results which link bullwhip and the demand correlation when demands and lead times are to be predicted separately. We conclude that how one goes about forecasting demand and lead time is important as it can cause significant amounts of bullwhip. Moreover the dependence of the bullwhip measure on the demand correlation parameter is different according to the forecasting parameters used to make lead time and demand predictions. In particular, an even number of data points in the moving average demand forecast can significantly reduce bullwhip when demand has a strong negative correlation.
Future bullwhip research could be focused on the impact of lead time forecasting under the assumption that lead times are correlated, either temporally, or with other system states such as customer demand. For example if large orders lead to long lead times, there is a correlation between the lead time and the order size and this dependence should be captured somehow. This seems to be difficult to quantify analytically.
Other opportunities lie in studying the impact of different forecasting methods for lead times and demands (see Zhang <cit.> for the case of demand forecasting). Another big challenge is to quantify bullwhip in the presence of unrealized previous orders when placing an order. More precisely, forecasting the most recent lead times when some orders are not yet received will distort the lead time distribution and have an impact on bullwhip.
The quantification of this issue seems to be a difficult task, but it will become important when the lead times are temporally correlated.
An important challenge is the investigation of the variance amplification of orders and the variance amplification of inventory levels simultaneously, because an improper focus on bullwhip reduction can amplify the variability of inventory levels (see Chen and Disney <cit.>, Devika et al., <cit.>, Disney et al. <cit.> and Wang and Disney <cit.>), which can be as harmful as the bullwhip. Moreover, the POUT replenishment policy, Disney and Towill <cit.>, should be investigated under the assumption of lead time forecasting.
§ APPENDIX
Proof of Th. <ref>.
We apply the law of total variance to find the variance of q_t. Namely, let us put
L=(L_{t-1-L^+}, L_{t-2-L^+},…, L_{t-1-m-L^+})
and then
Var(q_t)=Var(E(q_t|L))+E(Var(q_t|L)) .
Using (<ref>) it can be seen that
q_t = D̂_t^L-D̂_{t-1}^L+D_{t-1}
= L̂_tD̂_t-L̂_{t-1}D̂_{t-1}+D_{t-1} ,
revealing that E(q_t)=E(D_t)=μ_D. Moreover, from the second expression for q_t it follows that
E(q_t|L)=(L_{t-1-L^+}-L_{t-1-m-L^+})μ_D/m+μ_D ,
which gives
Var(E(q_t|L))=2σ^2_Lμ^2_D/m^2 .
To calculate the conditional variance of q_t we express it as a function of D_{t-1-n} and the error terms ϵ_{t-n}, ϵ_{t-n+1}, …, ϵ_{t-1}, which are mutually independent. Thus by (<ref>) and (<ref>) we get
q_t
= (L̂_t/n+1)D_{t-1}+((L_{t-1-L^+}-L_{t-1-m-L^+})/(nm))∑_{k=2}^n D_{t-k}
-(L̂_{t-1}/n)D_{t-1-n}
= (L̂_t/n+1)μ_D(1-ρ^n)+((L_{t-1-L^+}-L_{t-1-m-L^+})μ_D/(nm))(n-1-ρ(1-ρ^{n-1})/(1-ρ))
+[(L̂_t/n+1)ρ^n+(L_{t-1-L^+}-L_{t-1-m-L^+})ρ(1-ρ^{n-1})/(nm(1-ρ))-L̂_{t-1}/n]D_{t-1-n}
+∑_{k=1}^n[(L̂_t/n+1)ρ^{k-1}+(L_{t-1-L^+}-L_{t-1-m-L^+})(1-ρ^{k-1})/(nm(1-ρ))]ϵ_{t-k} ,
which gives
Var(q_t|L)=σ^2_D C^2_1+σ^2∑_{k=1}^n C^2_{2,k} ,
where
C_1=(L̂_t/n+1)ρ^n+(L_{t-1-L^+}-L_{t-1-m-L^+})ρ(1-ρ^{n-1})/(nm(1-ρ))-L̂_{t-1}/n
and
C_{2,k}=(L̂_t/n+1)ρ^{k-1}+(L_{t-1-L^+}-L_{t-1-m-L^+})(1-ρ^{k-1})/(nm(1-ρ)) .
Thus we get
E(C_1)=(μ_L/n+1)ρ^n-μ_L/n
and
E(C_{2,k})=(μ_L/n+1)ρ^{k-1} .
To calculate E(Var(q_t|L)) it is necessary to find E(C^2_1) and E(C^2_{2,k}) by (<ref>). We compute them by finding the variance and adding the square of the first moment. Thus, to obtain the variances of C_1 and C_{2,k}, we express them as sums of independent random variables, that is,
C_1=((ρ^n-1)/(nm))∑_{i=2}^m L_{t-i-L^+}+(ρ(1-ρ^n)/((1-ρ)nm))L_{t-1-L^+}-((1-ρ^n)/((1-ρ)nm))L_{t-1-m-L^+}+ρ^n
and
C_{2,k}=(ρ^{k-1}/(nm))∑_{i=2}^m L_{t-i-L^+}+((1-ρ^k)/((1-ρ)nm))L_{t-1-L^+}-((1-ρ^{k-1})/((1-ρ)nm))L_{t-1-m-L^+}+ρ^{k-1} .
Hence we obtain
Var(C_1)=((1-ρ^n)^2σ^2_L/(n^2m^2))(m+2ρ/(1-ρ)^2)
and
Var(C_{2,k})=(σ^2_L/(n^2m^2))[ρ^{2(k-1)}(m-1)+((1-ρ^k)/(1-ρ))^2+((1-ρ^{k-1})/(1-ρ))^2] .
So we get
E(C^2_1)=((1-ρ^n)^2σ^2_L/(n^2m^2))(m+2ρ/(1-ρ)^2)
+[(μ_L/n+1)ρ^n-μ_L/n]^2
and
E(C^2_{2,k}) = (σ^2_L(m-1)/(n^2m^2)+(μ_L/n+1)^2+
σ^2_L(ρ^2+1)/(n^2m^2(1-ρ)^2))ρ^{2(k-1)}
-(2σ^2_L(ρ+1)/(n^2m^2(1-ρ)^2))ρ^{k-1}+2σ^2_L/(n^2m^2(1-ρ)^2) .
Summing the last expression we obtain
∑_{k=1}^n E(C^2_{2,k}) = (σ^2_L(m-1)/(n^2m^2)+(μ_L/n+1)^2+
σ^2_L(ρ^2+1)/(n^2m^2(1-ρ)^2))(1-ρ^{2n})/(1-ρ^2)
-(2σ^2_L(ρ+1)/(n^2m^2(1-ρ)^2))(1-ρ^n)/(1-ρ)+2σ^2_L/(nm^2(1-ρ)^2) .
Plugging (<ref>), (<ref>), (<ref>) and (<ref>) into (<ref>) yields the formula of the assertion after some simple algebra.
§.§ Acknowledgments
The first author acknowledges support by the National Science Centre grant 2012/07/B//HS4/00702.
99
ba:bo:sy:al:15
Babai, M.Z., Boylan, J.E., Syntetos, A.A., Ali, M.M., 2016. Reduction of the value
of information sharing as demand becomes strongly auto-correlated. International
Journal of Production Economics, 181, 130–135.
ba:ha:ch:86
Bagchi, U., Hayya, J., Chu, C., 1986. The effect of leadtime
variability: The case of independent demand. Journal of
Operations Management 6, 159–177.
bi:ro:si:bl:14
Bischak, D.P., Robb, D.J., Silver, E.A., Blackburn, J.D., 2014. Analysis and management of periodic review order-up-to level inventory systems with order crossover. Production and Operations Management 23(5), 762–772.
bo:di:la:ho:08
Boute R.N., Disney, S.M., Lambrecht, M.R., Van Houdt, B., 2008. A win-win solution for the bullwhip problem. Production Planning and Control 19(7), 702–711.
bo:di:la:ho:09
Boute, R.N., Disney, S.M., Lambrecht, M.R.,Van Houdt, B., 2009. Designing replenishment rules in a two-echelon supply chain with a flexible or an inflexible capacity strategy. International Journal of Production Economics 119, 187–198.
bo:di:la:ho:14
Boute, R.N., Disney, S.M., Lambrecht, M.R., Van Houdt, B., 2014. Coordinating lead-times and safety stocks under auto-correlated demand. European Journal of Operational Research 232(1), 52–63.
br:ro:05
Bradley, J.R., Robinson, L.W., 2005.
Improved base-stock approximations for independent stochastic lead times with order crossover.
Manufacturing and Service Operations Management 7(4), 319–329.
bu:pa:pa:po:08
Buchmeister, B., Pavlinjek, J., Palcic, I., Polajnar, A., 2008. Bullwhip effect problem in supply chains. Advances in Production Engineering and Management 3(1), 45–55.
ch:he:10
Chaharsooghi, S.K., Heydari, J., 2010.
LT variance or LT mean reduction in supply chain management: Which one
has a higher impact on SC performance? International Journal of Production Economics 124, 475–481.
ch:ha:07
Chatfield, D.C., Hayya, J.C., 2007. All-zero forecasts for lumpy demand: a factorial study.
International Journal of Production Research 45(4), 935–950.
ch:ki:ha:ha:04
Chatfield, D.C., Kim, J.G., Harrison, T.P., Hayya, J.C., 2004. The
bullwhip effect - Impact of stochastic lead time, information
quality, and information sharing: a simulation study. Production and Operations
Management 13(4), 340–353.
ch:di:07
Chen, Y.F., Disney, S.M., 2007. The myopic order-up-to policy with a feedback controller. International Journal of Production Research 45(2), pp351–368.
ch:dr:ry:si:00a
Chen, F., Drezner, Z., Ryan, J.K., Simchi-Levi, D., 2000. Quantifying
the bullwhip effect in a simple supply chain. Management Science 46(3), 436–443.
ch:dr:ry:si:00b
Chen, F., Drezner, Z., Ryan, J.K., Simchi-Levi, D., 2000. The impact
of exponential smoothing forecasts on the bullwhip effect. Naval Research Logistics
47(4), 269–286.
de:di:la:to:03
Dejonckheere, J, Disney, S.M., Lambrecht M.R., Towill, D.R., 2003.
Measuring and avoiding the bullwhip effect: a control theoretic approach.
European Journal of Operational Research 147(3), 567–590.
de:di:la:to:04
Dejonckheere, J, Disney, S.M., Lambrecht M.R., Towill, D.R., 2004.
The impact of information enrichment on the Bullwhip effect in supply chains: A control engineering perspective. European Journal of Operational Research 153(3), 727–750.
de:ja:ha:kh:14
Devika, K., Jafarian, A., Hassanzadeh, A., Khodaverdi, R., 2015.
Optimizing of bullwhip effect and net stock amplification in three-echelon supply chains using evolutionary multi-objective metaheuristics. Annals of Operations Research 242(2), 457–-487.
di:ma:wa:wa:16
Disney, S.M., Maltz, A., Wang, X., Warburton, R.D.H., 2016. Inventory management for stochastic lead times with order crossovers. European Journal of Operational Research 248, 473–486.
di:to:03
Disney, S.M., Towill, D.R., 2003. On the bullwhip and inventory variance produced by an ordering policy. Omega: The International Journal of Management Science 31, 157–167.
du:lu:ki:08
Duc, T.T.H., Luong, H.T., Kim, Y.D., 2008. A measure of the bullwhip effect in supply chains with stochastic lead time. The International Journal of Advanced Manufacturing Technology 38(11-12), 1201–1212.
fo:58
Forrester, J.W., 1958. Industrial dynamics - a major break-through
for decision-making. Harvard Business Review 36(4), 37–66.
ge:di:to:06
Geary, S., Disney, S.M., Towill, D.R., 2006. On bullwhip in supply chains–historical review, present practice and expected future impact. International Journal of Production Economics 101, 2–18.
ha:ba:ki:su:08
Hayya, J.C., Bagchi, U., Kim, J.G., Sun, D., 2008.
On static stochastic order crossover. International Journal of Production Economics 114(1), 404–413.
ha:ha:he:11
Hayya, J.C., Harrison, T.P., He, X.J., 2011. The impact of stochastic lead time reduction on
inventory cost under order crossover. European Journal of Operational Research 211(2), 274–281.
ha:ki:di:ha:ch:06
Hayya, J.C., Kim, J.G., Disney, S.M., Harrison, T.P., Chatfield, D., 2006.
Estimation in supply chain inventory management. International Journal of Production Research 44(7),
1313–1330.
ha:xu:ra:he:95
Hayya, J.C., Xu, S.H., Ramasesh, R.V., He, X.X., 1995.
Order crossover in inventory systems. Stochastic Models 11(2), 279–309.
he:xu:or:ha:98
He, X.X., Xu, S.H., Ord, J.K., Hayya, J.C., 1998.
An inventory model with order crossover.
Operations Research 46(3), 112-119.
hu:pa:14
Hum, S.H., Parlar, M., 2014.
Measurement and optimisation of supply chain responsiveness.
IIE Transactions 46(1), 1-22.
ka:87
Kahn, J.A., 1987. Inventories and volatility of production. American Economic Review 77, 667–679.
ka:70
Kaplan, R.S., 1970. A dynamic inventory model with stochastic lead times. Management Science 16(7), 491–507.
ki:ch:ha:ha:06
Kim, J.G., Chatfield, D., Harrison, T.P., Hayya, J.C., 2006. Quantifying the bullwhip effect in a supply chain with stochastic lead time. European Journal of Operational Research 173(2), 617–636.
ki:ry:03
Kim, H.K., Ryan, J.K., 2003. The cost impact of using simple forecasting techniques in a supply chain. Naval Research Logistics 50, 388–411.
le:pa:wh:97a
Lee, H.L., Padmanabhan, V., Whang, S., 1997. The bullwhip effect
in supply chains. Sloan Management Review 38(3), 93–102.
le:pa:wh:97b
Lee, H.L., Padmanabhan, V., Whang, S., 1997. Information distortion in a supply chain: the bullwhip effect. Management Science 43(3), 546–558.
le:so:ta:00
Lee, H.L., So, K.C., Tang, C.S., 2000. The value of information sharing in a two-level supply chain. Management Science 46(5), 626–643.
lu:07
Luong, H.T., 2007. Measure of bullwhip effect in supply chains with
autoregressive demand process. 2007. European Journal of Operational Research 180,
1086–1097.
lu:07a
Luong, H.T., Pien, N.H., 2007. Measure of bullwhip effect in supply chains: the
case of high order autoregressive demand process. European Journal of Operational
Research 183, 197–209.
mi:23
Mitchell, T., 1923. Competitive illusion as a cause of business cycles, Quarterly Journal of Economics 38, 631–652.
mi:ni:13
Michna, Z., Nielsen, P., 2013. The impact of lead time forecasting on the bullwhip effect. Preprint available online at http://arxiv.org/abs/1309.7374.
mi:ni:ni:13
Michna, Z., Nielsen, I.E., Nielsen, P., 2013.
The bullwhip effect in supply chains with stochastic lead times. Mathematical Economics 9, 71–88.
mi:ni:ni:14
Michna, Z., Nielsen, P., Nielsen, I.E., 2014.
The impact of stochastic lead times on the bullwhip effect. Preprint available online at http://arxiv.org/abs/1411.4289.
ri:06
Riezebos, J., 2006. Inventory order crossovers. International Journal of Production
Economics 104, 666-675.
ro:br:th:01
Robinson, L.W., Bradley, J.R., Thomas, L.J., 2001. Consequences of order crossover under order-up-to inventory policies. Manufacturing and Service Operations Management 3(3), 175–188.
si:pi:pe:98
Silver, E.A., Pyke, D.F., Peterson, R., 1998. Inventory management and production planning and scheduling. Third Edition. John Wiley & Sons, New York.
so:zh:03
So, K.C., Zheng, X., 2003. Impact of supplier's lead time and forecast demand updating on retailer's order quantity variability in a two-level supply chain. International Journal of Production Economics 86, 169–179.
so:94a
Song, J., 1994. The effect of leadtime uncertainty in simple
stochastic inventory model. Management Science 40, 603–613.
so:94b
Song, J., 1994. Understanding the leadtime effects in
stochastic inventory systems with discounted costs. Operations
Research Letters 15, 85–93.
to:zh:di:07
Towill, D.R., Zhou, L., Disney, S.M., 2007.
Reducing the bullwhip effect: Looking through the
appropriate lens. International Journal of Production Economics 108, 444–453.
ty:on:97
Tyworth, J.E., O'Neill, L., 1997. Robustness of the normal approximation of lead-time demand in a distribution setting. Naval Research Logistics 44(2), 165–186.
wa:di:16
Wang, X., Disney, S.M., 2016.
The bullwhip effect: progress, trends and directions. European Journal of Operational Research 250, 691–-701.
wa:di:15
Wang, X., Disney, S.M., 2017.
Mitigating variance amplifications under stochastic lead-time: The proportional control approach. European Journal of Operational Research 256(1), 151–162.
zh:04
Zhang, X., 2004. The impact of forecasting methods on the bullwhip effect. International Journal of Production Economics 88, 15–27.
zi:86
Zipkin, P., 1986. Stochastic leadtimes in continuous-time
inventory models. Naval Research Logistics Quarterly 33, 763–774.
|
http://arxiv.org/abs/1701.07488v1 | 20170125211937 | Joint Power Allocation and Beamforming for Energy-Efficient Two-Way Multi-Relay Communications | [
"Zhichao Sheng",
"Hoang D. Tuan",
"Trung Q. Duong",
"H. Vincent Poor"
] | cs.IT | [
"cs.IT",
"math.IT"
] |
Joint Power Allocation and Beamforming for Energy-Efficient Two-Way Multi-Relay Communications
This work was supported in part by the Australian Research Council’s Discovery Projects under Project DP130104617, in part by the U.K. Royal Academy of Engineering Research Fellowship under Grant RF14151422, and in part by
the U.S. National Science Foundation under Grants CCF-1420575 and CNS-1456793.
Zhichao Sheng, Hoang D. Tuan, Trung Q. Duong and H. Vincent Poor
Zhichao Sheng and Hoang D. Tuan are with the Faculty of Engineering and Information Technology, University of Technology Sydney, Broadway, NSW 2007, Australia (email: kebon22@163.com, Tuan.Hoang@uts.edu.au)
Trung Q. Duong is with Queen's University Belfast, Belfast BT7 1NN, UK (email: trung.q.duong@qub.ac.uk)
H. Vincent Poor is with the Department of Electrical Engineering, Princeton University, Princeton, NJ 08544, USA (e-mail: poor@princeton.edu)
December 30, 2023
============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
This paper considers the joint design of user power allocation and relay beamforming in
relaying communications, in which multiple pairs of single-antenna users exchange information
with each other via multiple-antenna relays in two time slots. All users transmit their signals to the relays
in the first time slot while the relays broadcast the beamformed signals to all users in the second time slot.
The aim is to maximize the system's energy efficiency (EE) subject to
quality-of-service (QoS) constraints in terms of exchange throughput requirements. The QoS constraints
are nonconvex with many nonlinear cross-terms, so finding a feasible point is already computationally challenging. The sum throughput appears in the numerator while the total power consumption
appears in the denominator of the EE objective function. The former is a nonconcave function and the latter
is a nonconvex function, making fractional programming useless for EE optimization.
Nevertheless, efficient low-complexity iterations to obtain optimized solutions are
developed. The performance of multiple-user and multiple-relay networks under various scenarios is evaluated to show
the merit of the proposed development.
Two-way relaying, information exchange (IE), energy efficiency (EE), quality-of-service (QoS),
relay beamforming, power allocation, joint optimization, path-following algorithms.
§ INTRODUCTION
Two-way relaying (TWR) <cit.> has been the focus of considerable research interest
in recent years due to its potential in offering higher information exchange throughput for cognitive communications such as device-to-device (D2D) and machine-to-machine (M2M) communications <cit.>. Unlike
the conventional one-way relaying, which needs four time slots for information exchange between a pair of users
(UEs), TWR needs just two time slots for this exchange <cit.>. In the first time slot, known as the multiple access (MAC) phase, all UEs simultaneously transmit their signals to the relays. In the second time slot, also known as the broadcast (BC) phase, the relays broadcast the beamformed signals to all UEs.
While offering twice as fast communication, TWR obviously suffers from doubled
multi-channel interference as compared to one-way relaying <cit.>. Both optimal control of UEs' transmit power
and TWR beamforming are thus very important in exploring the spectral efficiency of TWR.
There are various scenarios of TWR considered in the literature. The most popular scenario is
single-antenna relays serving a pair of single-antenna UEs <cit.>.
The typical problems are to design the TWR weights to either maximize the throughput
or minimize the relay power subject to signal-to-interference-plus-noise ratio (SINR) constraints at UEs.
A branch-and-bound (BB) algorithm of global optimization <cit.> was used in <cit.>
for sum throughput maximization. Its computational complexity is already very high even for very low-dimensional problems.
Semi-definite relaxation of high computational complexity,
followed by bisection as used in <cit.>, works strictly under a single total relay power constraint.
Furthermore, the scenario of single-antenna relays serving multiple pairs of UEs was addressed in
<cit.> by a polyblock algorithm of global optimization
<cit.> and in <cit.> by local linearization based iteration. The mentioned semi-definite relaxation
was also used in <cit.> in designing TWR beamformer for the scenario of
a multi-antenna relay serving multiple pairs of UEs. It should be realized that most of the relay beamformer optimization
problems considered in these works are computationally no more difficult than their one-way relaying counterparts,
which have been efficiently solved in <cit.>.
Fixed power allocation to UEs not only misses the opportunity of power distribution within a network
but can also potentially increase interference to UEs of other networks <cit.>. The joint design of UE power allocation and TWR weights for single antenna relays serving a pair of single antenna UEs
to maximize the minimum SINR was considered in <cit.>. Under the strict assumption that
the complex channel gains from UEs to the relays are the same as those from the relays to the UEs,
its design is divided into two steps. The first step is to optimize the beamformer weights with UEs'
fixed power by sequential second-order convex cone programming (SOCP). The second step
performs an exhaustive grid search for UE power allocation. Joint optimization of UE
precoding and relay beamforming for a multi-antenna relay serving a pair of multi-antenna UEs
could be successfully addressed only recently in <cit.>. An efficient computation for joint UE power
allocation and TWR beamforming to maximize the worst UE throughput for multiple antenna relays
serving multiple pairs of single antenna UEs was proposed
in <cit.>. The reader is also referred to <cit.> for joint UE and relay power allocation
in MIMO OFDM system of one relay and one pair of UEs.
Meanwhile, the aforementioned classical spectral efficiency (SE) in terms of high throughput is now only
one among multiple driving forces for the development of the next generation communication networks (5G) <cit.>.
The energy consumption of communication systems has become sizable, raising environmental and economic
concerns <cit.>. Particularly, the network energy efficiency (EE) in terms of
the ratio of the sum throughput and the total power consumption, which counts not only the transmission power but also the
drain efficiency of power amplifiers, circuit power and other power factors in supporting the network's activities,
is comprehensively pushed forward in 5G to address these concerns <cit.>. EE in single-antenna TWR
has been considered in <cit.> for single-antenna OFDMA in assisting multiple pairs of
single-antenna UEs and in <cit.> and <cit.> for multi-antenna relays in assisting a pair of multi-antenna UEs. Again, the
main tool for computational solution in these works is semi-definite relaxation, which not only
significantly increases the problem dimension but also performs unpredictably <cit.>. Also, the resultant
Dinkelbach's iteration of fractional programming invokes a logarithmic function optimization, which is convex but still
computationally difficult with no available algorithm of polynomial complexity.
The above analysis of the state-of-the-art TWR motivates us to consider the joint design of single-antenna UE power
allocation and TWR beamformers in a TWR network to maximize its EE subject to UE QoS constraints.
We emphasize that both sum throughput maximization (for spectral efficiency) and EE maximization are meaningful
only in the context of UE QoS satisfaction, without which they would cause UE service discrimination.
Unfortunately, to the best of our knowledge, such UE QoS constraints have not been addressed whenever they are nonconvex <cit.>. The nonconvexity of these QoS constraints implies that even finding their
feasible points is already computationally difficult. QoS constraints in terms of UEs' exchange throughput requirements
are much more difficult than those in terms of individual UE throughput requirements because, unlike the latter, the former cannot be
expressed in terms of individual SINR constraints. To address the EE maximization problem, we first
develop a new computational method for UE exchange throughput requirement feasibility, which invokes only
simple convex quadratic optimization. A new path-following computational procedure for computational solutions
of the EE maximization problem is then proposed.
The rest of the paper is organized as follows. Section
<ref> formulates two optimization problems of EE maximization and UE QoS optimization.
Two path-following algorithms are developed in Sections <ref> and <ref> for their computation.
In contrast to fractional programming, these algorithms
invoke only a simple convex quadratic optimization of low computational complexity at each iteration.
Section <ref> provides simulation results to verify the
performance of these algorithms. Finally, concluding remarks are
given in Section <ref>.
Notation. Vectors and matrices are represented by boldfaced lowercase and uppercase, respectively.
x(n) is the nth entry of vector x, while X(n,.) and X(n,m) are the nth row and
(n,m)th entry of a matrix X. ⟨x,
y⟩=x^Hy is the inner product between
vectors x and y. ||.|| denotes either the Euclidean vector norm or
the Frobenius matrix norm, and ℝ^N_+={x∈ℝ^N : x(n)≥ 0,n=1,2,…,N}. X is the trace of matrix X
and [X_m]_m=1,…,M is a block diagonal matrix with diagonal blocks X_m.
Lastly, x∼𝒞𝒩(x̅,R_x) means x is a vector of Gaussian random variables with mean x̅ and covariance R_x.
§ TWO-WAY RELAY NETWORKS WITH MULTIPLE MIMO RELAYS AND MULTIPLE SINGLE-ANTENNA USERS
Fig. <ref> illustrates a TWR network in which K pairs of single-antenna UEs exchange information with each other. Namely the kth UE (UE k) and the (K+k)th UE (UE K+k), with k=1,…,K, exchange information with
each other via
M relays designated as relay m, m=1,…,M, equipped with N_R antennas.
Denote by s=(s_1,…,s_2K)^T ∈ℂ^2K the vector of information symbols transmitted by the UEs, whose
entries are independent and have unit energy, i.e., [ss^H]=I_2K, where 𝐈_2K is
the identity matrix of size (2K)× (2K).
Let h_ℓ,m∈ℂ^N_R be the vector of channels from UE ℓ to relay m. The received signal at relay m is
r_m=∑_ℓ=1^2K√(p_ℓ)h_ℓ,m s_ℓ+n_R,m,
where n_R,m∼𝒞𝒩(0,σ_R^2I_N_R) is the background noise, and p=(p_1,…,p_2K)^T ∈ℝ_+^2K represents the powers allocated to the UEs.
Relay m performs linear processing on the received signal by applying the beamforming matrix
W_m ∈^N_R × N_R. The beamformed signals are
r_m,b=W_mr_m=∑_ℓ=1^2K√(p_ℓ)W_mh_ℓ,m s_ℓ+W_mn_R,m, m=1,…, M.
The transmit power at relay m is calculated as
[||r_m,b||^2] = ∑_ℓ=1^2K p_ℓ ||W_mh_ℓ,m||^2+
σ_R^2||W_m||^2.
Relay m transmits the beamformed signal r_m,b to the UEs.
Let g_m,k∈ℂ^N_R be the vector of channels from the relay m to UE k.
The received signal at UE k is given by
y_k = ∑_m=1^Mg_m,k^Tr_m,b+n_k
= ∑_m=1^Mg_m,k^T
(∑_ℓ=1^2K√(p_ℓ)W_mh_ℓ,m s_ℓ+W_mn_R,m)+n_k,
where n_k∼𝒞𝒩(0,σ_k^2) is the background noise, which can be written as
[ y_k = √(p_χ(k))∑_m=1^Mg_m,k^T
W_mh_χ(k),m s_χ(k)_ desired signal +
√(p_k)∑_m=1^Mg_m,k^T
W_mh_k,m s_k_ self-interference; + ∑_m=1^Mg_m,k^T(∑_ℓ=1, ℓ≠ k, χ(k)^2K√(p_ℓ)W_mh_ℓ,m s_ℓ_ inter-pair interference+
W_mn_R,m)+n_k. ]
In the above equation, (k, χ(k)) is a pair of UEs that exchange information with each other, so
χ(k)=[ K+k k=1,…, K
k-K k=K+1,…, 2K ]
Assuming that the channel state information (CSI) of the forward and backward channels and the beamforming matrices
is available, UE k effectively subtracts the self-interference term in (<ref>) to have the SINR:
γ_k(p, W) = p_χ(k)|∑_m=1^Mf_m,k^H
W_mh_χ(k),m|^2/∑_ℓ=1, ℓ≠ k,χ(k)^2K p_ℓ|∑_m=1^Mf_m,k^H W_mh_ℓ,m|^2
+σ_R^2∑_m=1^M||f_m,k^H W_m||^2+σ^2_k,
where f_m,k=g_m,k^*, so f_m,k^H=g_m,k^T.
Under the definitions
[ _k,ℓ(W)≜∑_m=1^Mf_m,k^H W_mh_ℓ,m,; _k(W)≜[ f_1,k^H W_1 f_2,k^H W_2 ... f_M,k^H W_M ], ]
it follows that
γ_k(p, W) = p_χ(k)|_k,χ(k)(W)|^2/
[∑_ℓ=1, ℓ≠ k,χ(k)^2K p_ℓ|_k,ℓ(W)|^2+σ_R^2||_k(W)||^2 +σ^2_k].
In TWR networks, the UEs exchange information in a bidirectional fashion.
Thus, the throughput at the kth UE pair
is defined by the following function of beamforming matrix W and power allocation vector p:
R_k(p, W) = ln ( 1+γ_k(p, W))+ln (1+γ_K+k(p, W)), k=1,…,K.
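To make the signal model concrete, the following Python sketch evaluates the SINRs γ_k(p, W) and the pair exchange throughputs R_k(p, W) for randomly drawn channels and beamformers; the dimensions, noise levels and all random draws are illustrative assumptions of this sketch, not values used later in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K, M, N_R = 2, 2, 4             # pairs of UEs, relays, antennas per relay (illustrative)
sigma2_R = sigma2_k = 1.0       # normalized noise powers

# h[l, m] : uplink channel of UE l at relay m;  f[k, m] = conj(g[m, k]) as in the text
h = rng.standard_normal((2*K, M, N_R)) + 1j * rng.standard_normal((2*K, M, N_R))
f = rng.standard_normal((2*K, M, N_R)) + 1j * rng.standard_normal((2*K, M, N_R))
W = rng.standard_normal((M, N_R, N_R)) + 1j * rng.standard_normal((M, N_R, N_R))
p = np.ones(2*K)                # UE transmit powers

chi = lambda k: (k + K) % (2*K) # partner index: chi(k) = K+k, chi(K+k) = k

def L(k, l):
    """Effective end-to-end gain  L_{k,l}(W) = sum_m f_{m,k}^H W_m h_{l,m}."""
    return sum(f[k, m].conj() @ W[m] @ h[l, m] for m in range(M))

def sinr(k):
    sig = p[chi(k)] * abs(L(k, chi(k)))**2
    interf = sum(p[l] * abs(L(k, l))**2 for l in range(2*K) if l not in (k, chi(k)))
    relay_noise = sigma2_R * sum(np.linalg.norm(f[k, m].conj() @ W[m])**2
                                 for m in range(M))
    return sig / (interf + relay_noise + sigma2_k)

R = [np.log(1 + sinr(k)) + np.log(1 + sinr(K + k)) for k in range(K)]
print("pair exchange throughputs (nats/use):", np.round(R, 3))
```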
Accordingly, the problem of maximizing the network EE subject to UE QoS constraints is
formulated as:
max_W∈ℂ^N× N, p∈ℝ_+^2K F_EE(𝐰,𝐩)≜∑_k=1^K[ln (1+ γ_k(p, W))+ln (1+γ_K+k(p, W))]/ζ(P^U_ sum(𝐩)+P^R_ sum(𝐩,
𝐖))+MP^ R+2KP^ U
0≤ p_k ≤ P_k^U,max, k=1,…,2K,
∑_k=1^2K p_k ≤ P_sum^U,max,
∑_ℓ=1^2K p_ℓ||W_mh_ℓ,m||^2+
σ_R^2||W_m||^2 ≤ P_m^A,max, m=1,...,M,
∑_m=1^M(∑_ℓ=1^2K p_ℓ||W_mh_ℓ,m||^2+
σ_R^2||W_m||^2)≤ P_sum^R,max,
R_k(p, W)≥ r_k, k=1,⋯, K,
where ζ is the reciprocal of the drain efficiency of the power amplifiers, while P^ R and P^ U are
the circuit powers of a relay and a UE, respectively. Here P^ R=N_R P_r, where P_r is the circuit power per antenna at a relay. Constraint (<ref>) is the exchange throughput QoS requirement for each pair of UEs.
Constraints (<ref>) and (<ref>) cap the transmit power of each UE k and each
relay m at predefined values P_k^U,max and P_m^A,max, respectively. On the other hand, constraints
(<ref>) and (<ref>) ensure that the total transmit power of UEs and the total transmit
power of the relays do not exceed the allowed power budgets P_sum^U,max and P_sum^R,max,
respectively.
Note that (<ref>) is a very difficult nonconvex optimization problem because
the power constraints (<ref>) and (<ref>), the exchange throughput QoS
constraints (<ref>), and the objective function (<ref>) are nonconvex.
Moreover, the exchange throughput QoS constraints (<ref>) are much harder than the
typical individual throughput QoS constraints
ln(1+γ_k(p,W))≥r̃_min, k=1,…, 2K,
which are equivalent to the computationally easier SINR constraints
γ_k(p,W)≥ e^r̃_min-1, k=1,…, 2K.
Note that even finding a feasible point of the EE problem (<ref>) is already difficult, as
it must be based on the following UE QoS optimization problem
max_W∈ℂ^N× N, p∈ℝ_+^2K φ(p,W)≜min_k=1,… ,K [ln (1+ γ_k(p, W))^1/r_k+ln (1+γ_K+k(p, W))^1/r_k]
(<ref>), (<ref>), (<ref>), (<ref>),
which is still highly nonconvex because
the objective function in (<ref>) is nonsmooth and nonconcave while the joint power constraints
(<ref>) and (<ref>)
in (<ref>) are nonconvex. Only a particular case of r_k≡ r_min was addressed in
<cit.>, under which (<ref>) is then equivalent to the SINR multiplicative maximization
max_W∈ℂ^N× N, p∈ℝ_+^2K min_k=1,… ,K [(1+γ_k(p, W))(1+γ_K+k(p, W)]
(<ref>), (<ref>), (<ref>), (<ref>),
which can be solved by d.c. (difference of two convex functions) iterations <cit.>.
To the best of the authors' knowledge, there is no available computation for (<ref>) in general. The next
section is devoted to its solution.
§ MAXIMIN EXCHANGE THROUGHPUT OPTIMIZATION
To address (<ref>), our first major step is to transform the nonconvex constraints (<ref>) and (<ref>) to convex ones through variable change as follows.
Following <cit.>, make the variable change
β_k=1/p_k^2≥ 0, k=1,…,2K.
For α>0 and β>0, define the functions
[ Ψ_k,ℓ(W, α, β)≜|_k,ℓ(W)|^2/√(αβ), k,ℓ=1,…,2K,; Υ_k(W, α)≜||_k(W)||^2/√(α), k=1,…,2K,; Φ_ℓ,m(W_m, α,β)≜||h_ℓ,m^HW_m||^2 /√(αβ), ℓ=1,…,2K; m=1, …, M. ]
which are convex <cit.>.
The optimization problem (<ref>) can be equivalently rewritten as
max_W∈ℂ^N× N, α∈ℝ_+^2K, β∈ℝ_+^2K f(W,α,
β)≜min_k=1,… ,K 1/r_k[ln (1+|_k,K+k(W)|^2/√(α_kβ_K+k)).
.+ln (1+|_K+k,k(W)|^2/√(α_K+kβ_k)) ]
∑_ℓ=1, ℓ≠ k,χ(k)^2KΨ_k,ℓ(W, α_k, β_ℓ)
+σ_R^2 Υ_k(W,α_k)+σ_k^2/√(α_k)≤ 1, k=1,…,2K
β_k ≥1/(P_k^U,max)^2, k=1,…,2K,
P_sum^U(β):=∑_k=1^2K1/√(β_k)≤ P_sum^U,max,
∑_ℓ=1^2KΦ_ℓ,m(W_m,1,β_ℓ) +σ_R^2||W_m||^2 ≤ P_m^A,max, m=1,…,M,
∑_m=1^M[∑_ℓ=1^2KΦ_ℓ,m(W_m,1,β_ℓ) +σ_R^2||W_m||^2 ]
≤ P_sum^R,max.
One can see that γ_k(p, W) in (<ref>) is expressed in terms of functions in (<ref>) as
γ_k(p, W) = |_k,χ(k)(W)|^2/1/p_χ(k)
[∑_ℓ=1, ℓ≠ k,χ(k)^2KΨ_k,ℓ(W,1,1/p_ℓ^2)
+σ_R^2Υ_k(W,1)+σ^2_k]
which is
|_k,χ(k)(W)|^2/√(α_kβ_χ(k))
for
β_k=1/p_k^2, k=1,…, 2K
and
α_k=∑_ℓ=1, ℓ≠ k,χ(k)^2KΨ_k,ℓ(W,1,β_ℓ)
+σ_R^2Υ_k(W,1)+σ^2_k.
Therefore,
φ(𝐩,𝐖)=f(𝐖,α,β)
for α and β defined by (<ref>)-(<ref>), which is also feasible for (<ref>)
whenever (𝐩,𝐖) is feasible for (<ref>). We thus have proved that
max (<ref>) ≤max (<ref>).
Note that (<ref>) is the same as
∑_ℓ=1, ℓ≠ k,χ(k)^2KΨ_k,ℓ(W,1,β_ℓ)
+σ_R^2Υ_k(W,1)+σ^2_k≤√(α_k).
The point (𝐩,𝐖) with p_k=1/√(β)_k, k=1,…, 2K is feasible for (<ref>)
whenever (𝐖,α, β) is feasible for (<ref>). Using (<ref>) and (<ref>),
one can see
f(W,α,β)≤φ(p,W),
implying
max (<ref>) ≤max (<ref>).
The last inequality together with (<ref>) yield
max (<ref>) ≤max (<ref>),
completing the proof of Theorem <ref>.
The benefit of expressing (<ref>) by (<ref>) is that all constraints in the latter
are convex so the computational difficulty is concentrated in its objective function, which is lower bounded by
a concave function based on the following result.
At (W^(κ),α^(κ),β^(κ)) it
is true that
ln (1+|_k,χ(k)(W)|^2/√(α_kβ_χ(k))) ≥ f^(κ)_k,χ(k)(W,α_k,β_χ(k))
≜ a_k,χ(k)^(κ)-b_k,χ(k)^(κ)√(α_k^(κ)β_χ(k)^(κ))
[2{_k,χ(k)(W)(_k,χ(k)(W^(κ)))^*}
-1/2|_k,χ(k)(W^(κ))|^2(α_k/α^(κ)_k
+β_χ(k)/β^(κ)_χ(k))
]^-1
over the trust region
2{_k,χ(k)(W)(_k,χ(k)(W^(κ)))^*}
-1/2|_k,χ(k)(W^(κ))|^2(α_k/α^(κ)_k
+β_χ(k)/β^(κ)_χ(k))>0,
for x_k,χ(k)^(κ)=|_k,χ(k)(W^(κ))|^2/√(α^(κ)_kβ^(κ)_χ(k)),
[ a_k,χ(k)^(κ)=ln(1+x_k,χ(k)^(κ))+x_k,χ(k)^(κ)/x_k,χ(k)^(κ)+1>0,; b_k,χ(k)^(κ)=(x_k,χ(k)^(κ))^2/x_k,χ(k)^(κ)+1>0. ]
We use the following inequalities with their proof given in the Appendix:
ln(1+x) ≥ln(1+x̅) +x̅/x̅+1
-x̅^2/x̅+11/x ∀ x>0, x̅>0
and
|x|^2/√(αβ)≥ 2{xx̅^*}/√(α̅β̅)
-1/2|x̅|^2/√(α̅β̅)
(α/α̅+β/β̅) ∀ x∈ℂ, x̅∈ℂ,α>0,
α̅>0, β>0,
β̅>0,
which is the same as
√(αβ)/|x|^2≤√(α̅β̅)/2{xx̅^*}
-1/2|x̅|^2(α/α̅+β/β̅) ∀ x∈ℂ, x̅∈ℂ,α>0,
α̅>0, β>0,
β̅>0.
Applying (<ref>) and (<ref>)
for x=|_k,χ(k)(W)|^2/√(α_kβ_χ(k)) and x̅=|_k,χ(k)(W^(κ))|^2/√(α_k^(κ)β_χ(k)^(κ))
yields
[ ln (1+|_k,χ(k)(W)|^2/√(α_kβ_χ(k))) ≥ a_k,χ(k)^(κ)-b_k,χ(k)^(κ)√(α_kβ_χ(k))/|_k,χ(k)(W)|^2; ≥ a_k,χ(k)^(κ)-b_k,χ(k)^(κ)√(α_k^(κ)β_χ(k)^(κ))
[2{_k,χ(k)(W)(_k,χ(k)(W^(κ)))^*}; -1/2|_k,χ(k)(W^(κ))|^2(α_k/α^(κ)_k
+β_χ(k)/β^(κ)_χ(k))
]^-1, ]
showing (<ref>).
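As a quick numerical sanity check of the two bounds used above (the 1/x minorant of ln(1+x) and the tangent-plane minorant of |x|^2/√(αβ)), the following sketch evaluates both sides on random positive data; the sample size, ranges and tolerance are arbitrary choices of this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_bound(x, xb):
    # RHS of  ln(1+x) >= ln(1+xb) + xb/(xb+1) - xb^2/(xb+1) * (1/x)
    return np.log(1 + xb) + xb/(xb + 1) - xb**2/((xb + 1) * x)

def bilinear_bound(z, zb, a, ab, b, bb):
    # RHS of  |z|^2/sqrt(a b) >= [2 Re(z zb*) - |zb|^2 (a/ab + b/bb)/2] / sqrt(ab bb)
    return (2*(z*np.conj(zb)).real - 0.5*abs(zb)**2*(a/ab + b/bb)) / np.sqrt(ab*bb)

for _ in range(10**4):
    x, xb = rng.uniform(1e-3, 10, size=2)
    assert np.log(1 + x) >= log_bound(x, xb) - 1e-12
    z, zb = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    a, ab, b, bb = rng.uniform(1e-3, 10, size=4)
    assert abs(z)**2/np.sqrt(a*b) >= bilinear_bound(z, zb, a, ab, b, bb) - 1e-12
print("both minorants hold on all sampled points (equality is attained at the bar point)")
```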
Accordingly, for a feasible (W^(κ), α^(κ), β^(κ)) of (<ref>)
found at the (κ-1)th iteration, the following convex optimization problem is solved
at the κth iteration to generate the next feasible (W^(κ+1), α^(κ+1), β^(κ+1)):
[ max_W∈ℂ^N× N, α∈ℝ_+^2K, β∈ℝ_+^2K
f^(κ)(W,α,
β)≜min_k=1,… ,K 1/r_k
[f^(κ)_k,K+k(W,α_k,β_K+k)+f^(κ)_K+k,k(W,α_K+k,β_k)]; (<ref>)-(<ref> ), (<ref>). ]
Note that
f(W^(κ+1),α^(κ+1),β^(κ+1))≥ f^(κ)(W^(κ+1),α^(κ+1),
β^(κ+1))
by (<ref>), and
f(W^(κ),α^(κ),β^(κ))= f^(κ)(W^(κ),α^(κ),
β^(κ))
because
ln (1+|_k,χ(k)(W^(κ))|^2/√(α_k^(κ)β_χ(k)^(κ)))=f^(κ)_k,χ(k)(W^(κ),α_k^(κ),
β_χ(k)^(κ)), k=1,…, 2K.
On the other hand, as (W^(κ),α^(κ),β^(κ)) and
(W^(κ+1),α^(κ+1),β^(κ+1)) are a feasible point and the optimal
solution of the convex optimization problem (<ref>), it is true that
f^(κ)(W^(κ+1),α^(κ+1),
β^(κ+1))>f^(κ)(W^(κ),α^(κ),
β^(κ))
as far as (W^(κ),α^(κ),β^(κ))≠
(W^(κ+1),α^(κ+1),β^(κ+1)). The point (W^(κ+1),α^(κ+1),β^(κ+1)) is then better than (W^(κ),α^(κ),β^(κ)) because
f(W^(κ+1),α^(κ+1),β^(κ+1))≥ f^(κ)(W^(κ+1),α^(κ+1),β^(κ+1))>f^(κ)(W^(κ),α^(κ),
β^(κ))=f(W^(κ),α^(κ),β^(κ)).
Analogously to <cit.>, it can be shown that the sequence { (W^(κ),α^(κ),β^(κ)) } converges at least to a locally
optimal solution of the exchange throughput optimization problem (<ref>). The proposed Algorithm <ref>
for (<ref>) thus terminates after finitely many iterations, yielding an optimal solution
(𝐖^opt,α^opt,β^opt)
within tolerance ϵ>0. Then (𝐖^opt,p^opt) with p^opt=
(1/√(β^opt_1),…,1/√(β^opt_2K))^T
is accepted as the computational solution of the maximin
exchange throughput optimization problem (<ref>).
Before closing this section, it is pointed out that the one-way
relay optimization in which UE k sends information to
UE K+k can be formulated as in (<ref>) by
setting γ_k=0 and p_K+k=0 and thus can be directly solved by Algorithm <ref>.
§ ENERGY EFFICIENCY MAXIMIZATION
We now return to the EE maximization problem (<ref>). It is worth noticing that the computational solution for the
QoS constrained sum throughput maximization problem
max_W∈ℂ^N× N, p∈ℝ_+^2K∑_k=1^K[ln (1+ γ_k(p, W))+ln (1+γ_K+k(p, W))]
(<ref>), (<ref>), (<ref>), (<ref>), (<ref>),
which is a particular case of (<ref>), is still largely open. As a by-product, our approach to computation for (<ref>)
is directly applicable to that for (<ref>).
Similarly to Theorem <ref>, it can be shown that (<ref>) is equivalently expressed by the following optimization
problem under the variable change (<ref>):
max_W∈ℂ^N× N, α∈ℝ_+^2K, β∈ℝ_+^2K F(W,α,
β)≜∑_k=1^K
[ln (1+|_k,K+k(W)|^2/√(α_kβ_K+k))+ln (1+|_K+k,k(W)|^2/√(α_K+kβ_k))]/π(β,W)
(<ref>)-(<ref>),
R̃_k(𝐖,α,β)≜ln (1+|_k,K+k(W)|^2/√(α_kβ_K+k))+ln (1+|_K+k,k(W)|^2/√(α_K+kβ_k))≥ r_k, k=1,…, K,
where the consumption power function π(β,W) is defined by
π(β,W)≜∑_k=1^2Kζ/√(β_k)+ζ∑_m=1^M[∑_ℓ=1^2KΦ_ℓ,m(W_m,1,β_ℓ) +σ_R^2||W_m||^2 ]+MP^ R+2KP^ U.
In Dinkelbach's iteration based approach (see e.g. <cit.>), one aims to find through bisection
a value τ such that the optimal value of the following optimization problem is zero
max_W∈ℂ^N× N, α∈ℝ_+^2K, β∈ℝ_+^2K[∑_k=1^K(ln (1+|_k,K+k(W)|^2/√(α_kβ_K+k))+ln (1+|_K+k,k(W)|^2/√(α_K+kβ_k)))
- τπ(β,W)] (<ref>)-(<ref>), (<ref>).
Such τ obviously is the optimal value of (<ref>). However, for each τ, (<ref>) is still
nonconvex and as computationally difficult as the original optimization problem (<ref>).
There is thus no benefit in using (<ref>).
To address computation for (<ref>) involving the nonconcave objective function F(W,α,
β) and the nonconvex constraint (<ref>),
we will explore the following inequality for positive quantities, whose proof is given
in the Appendix:
ln(1+x)/t ≥ 2ln(1+x̅)/t̅+x̅/t̅(x̅+1)-x̅^2/(x̅+1)t̅1/x-
ln(1+x̅)/t̅^2t
∀ x>0, x̅>0, t>0, t̅>0.
The right-hand-side (RHS) of (<ref>) is a concave function on the interior of ℝ^2_+
and agrees with the left-hand-side (LHS) at (x̅,t̅).
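The mechanics of this inequality can be seen on a scalar toy problem before it is applied to the full design: maximizing the nonconcave ratio ln(1+x)/(c_0+c_1 x), where the denominator plays the role of the consumed power, by repeatedly maximizing the concave right-hand side around the current point. The toy objective, the constants and the solver are illustrative assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import minimize_scalar

c0, c1, x_max = 1.0, 0.2, 50.0        # t(x) = c0 + c1*x models the consumed power (assumption)

def ee(x):                            # nonconcave ratio ln(1+x)/t(x)
    return np.log(1 + x) / (c0 + c1*x)

def minorant(x, xb):
    # concave RHS of the inequality, with t = c0 + c1*x and the bar point (xb, tb)
    tb, t = c0 + c1*xb, c0 + c1*x
    return (2*np.log(1 + xb)/tb + xb/(tb*(xb + 1))
            - xb**2/((xb + 1)*tb)/x - np.log(1 + xb)/tb**2 * t)

x = 45.0                              # feasible starting point
for it in range(30):
    res = minimize_scalar(lambda y: -minorant(y, x), bounds=(1e-6, x_max), method="bounded")
    print(f"iter {it}: x = {res.x:8.4f}, EE = {ee(res.x):.8f}")
    if abs(ee(res.x) - ee(x)) < 1e-10:
        break
    x = res.x   # EE(x_new) >= minorant(x_new, x) >= minorant(x, x) = EE(x): monotone ascent
```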
Suppose that (W^(κ),α^(κ),β^(κ)) is a feasible point of (<ref>)
found from the (κ-1)th iteration. Applying (<ref>) for
x=|_k,χ(k)(W)|^2/√(α_kβ_χ(k)), t=π(β,W)
and
x̅=|_k,χ(k)(W^(κ))|^2/√(α_k^(κ)β_χ(k)^(κ)),
t̅=π(β^(κ),W^(κ))
yields
ln (1+|_k,χ(k)(W)|^2/√(α_kβ_χ(k)))/π(β,W) ≥ F_k,χ(k)^(κ)(W,α_k,
β)
≜ p_k,χ(k)^(κ)- q_k,χ(k)^(κ)√(α^(κ)_kβ^(κ)_χ(k))[2{_k,χ(k)(W)(_k,χ(k)(W^(κ)))^*}
-1/2|_k,χ(k)(W^(κ))|^2(α_k/α^(κ)_k
+β_χ(k)/β^(κ)_χ(k))
]^-1
-r_k,χ(k)^(κ)π(β,W),
over the trust region (<ref>), for x_k,χ(k)^(κ)=|_k,χ(k)(W^(κ))|^2/√(α^(κ)_kβ^(κ)_χ(k)),
[ t^(κ) = π(β^(κ),W^(κ)),; p_k,χ(k)^(κ) = 2ln(1+x_k,χ(k)^(κ))/t^(κ)+
x_k,χ(k)^(κ)/t^(κ)(x_k,χ(k)^(κ)+1)>0; q_k,χ(k)^(κ) = (x_k,χ(k)^(κ))^2/(x_k,χ(k)^(κ)+1)t^(κ)>0,; r_k,χ(k)^(κ) = ln(1+x_k,χ(k)^(κ))/(t^(κ))^2>0. ]
Accordingly, the following convex optimization problem is solved at the κth iteration to generate the next
iterative point (W^(κ+1),α^(κ+1),β^(κ+1)):
max_W∈ℂ^N× N, α∈ℝ_+^2K, β∈ℝ_+^2K F^(κ)(W,α,
β)≜∑_k=1^K
[F^(κ)_k,K+k(W,α_k,β)+F^(κ)_K+k,k(W,α_K+k,β)]
(<ref>)-(<ref>), (<ref>), (<ref>),
f^(κ)_k,K+k(W,α_k,β_K+k) +f^(κ)_K+k,k(W,α_K+k,β_k)
≥ r_k, k=1,…, K,
where f^(κ)_k,χ(k) are defined in (<ref>). Since
f^(κ)_k,K+k(W,α_k,β_K+k) +f^(κ)_K+k,k(W,α_K+k,β_k)≤R̃_k(𝐖,α,β)
it follows that the feasibility of the nonconvex constraint (<ref>) in (<ref>) is guaranteed by that
of (<ref>). Also,
f^(κ)_k,K+k(W^(κ),α_k^(κ),β_K+k^(κ)) +f^(κ)_K+k,k(W^(κ),α_K+k^(κ),β_k^(κ))=
R̃_k(W^(κ),α^(κ),β^(κ))≥ r_k
because (W^(κ),α^(κ),β^(κ)) is feasible for (<ref>) and thus
feasible for (<ref>). Therefore, the convex optimization problem (<ref>) is always feasible. Analogously
to the previous section, the sequence {(W^(κ),α^(κ),β^(κ))} converges
at least to a locally optimal solution of problem (<ref>), and thus the proposed
Algorithm <ref> terminates after finitely many iterations, yielding an optimal solution
(W^(opt),α^opt,β^opt) within tolerance ϵ. Then
(W^opt,p^opt) with p^opt=(1/√(β_1^opt),...,1/√(β_2K^opt))^T is accepted as
the computational solution of the EE maximization problem (<ref>).
To find an initial feasible point (W^(0), α^(0), β^(0)) for Algorithm <ref>,
we use Algorithm <ref> for
computing (<ref>), which terminates upon
[min_k=1,… ,K R_k(W^(κ), α^(κ), β^(κ))/r_k]≥ 1
to satisfy (<ref>).
If the QoS constraints (<ref>) are imposed instead of (<ref>), then by the variable change (<ref>)
they are equivalent to the following constraints
|_k,χ(k)(W)|^2-(e^r̃_min-1)√(α_kβ_χ(k))≥ 0, k=1,…, 2K.
The LHS of (<ref>) is a convex function, so (<ref>) is called reverse convex <cit.>, which can
be easily inner-approximated by the linear approximation of the LHS at (W^(κ),α_k^(κ),
β_χ(k)^(κ)).
Remark. To compare the energy efficiency with one-way communication we need to revisit the one-way model: the users
{1,⋯,K} send symbols (s_1,⋯,s_K)^T∈ℂ^K via the relays in the first stage and the
users {K+1,...,2K} send symbols (s_K+1,⋯,s_2K)^T∈ℂ^K via the relays in the second stage.
Denote by W^1_m and W^2_m the beamforming matrices for the received signals from the users
{1,⋯,K} and {K+1,⋯,2K}, respectively. The transmit power at relay m in forwarding signals to users {K+1,⋯, 2K} in the first stage is
P_m^A,1(p^1,W^1_m) = ∑_ℓ=1^K p_ℓ ||W^1_mh_ℓ,m||^2+
σ_R^2||W^1_m||^2,
and the transmit power at relay m in forwarding signals to users {1,⋯, K} in the second stage is
[ P_m^A,2(p^2,W^2_m) = ∑_ℓ=1^K p_ℓ+K ||W^2_mh_ℓ,m||^2+
σ_R^2||W^2_m||^2. ]
Therefore, the power constraint at relay m is
∑_ℓ=1^K
( p_ℓ||W^1_mh_ℓ,m||^2+ p_ℓ+K||W^2_mh_ℓ+K,m||^2)+
σ_R^2(||W^1_m||^2+||W^2_m||^2)≤ P_m^A,max,
m=1,...,M.
The total power constraint is
∑_m=1^M[∑_ℓ=1^K
( p_ℓ||W^1_mh_ℓ,m||^2+ p_ℓ+K||W^2_mh_ℓ+K,m||^2)+
σ_R^2(||W^1_m||^2+||W^2_m||^2)
]≤ P_sum^R,max.
Accordingly, for k=1,⋯, K, the SINR at UEs can be calculated as:
γ̃_K+k(p^1, W^1) = p_k|∑_m=1^Mf_m,K+k^H
W^1_mh_k,m|^2/∑_ℓ=1, ℓ≠ k^K p_ℓ|∑_m=1^Mf_m,K+k^H W^1_mh_ℓ,m|^2
+σ_R^2∑_m=1^M||f_m,K+k^H W^1_m||^2+σ^2_K+k
= p_k|_K+k,k(W^1)|^2/
[∑_ℓ=1, ℓ≠ k^K p_ℓ|_K+k,ℓ(W^1)|^2
+σ_R^2||_K+k(W^1)||^2 +σ^2_K+k]
and
γ̃_k(p^2, W^2) = p_K+k|∑_m=1^Mf_m,k^H
W^2_mh_K+k,m|^2/∑_ℓ=1, ℓ≠ k^K p_K+ℓ|∑_m=1^Mf_m,k^H W^2_mh_K+ℓ,m|^2
+σ_R^2∑_m=1^M||f_m,k^H W^2_m||^2+σ^2_k
= p_K+k|_k,K+k(W^2)|^2/
[∑_ℓ=1, ℓ≠ k^K p_K+ℓ|_k,K+ℓ(W^2)|^2
+σ_R^2||_k(W^2)||^2 +σ^2_k].
The EE maximization problem is then formulated as
max_W^1∈ℂ^N× N, W^2∈ℂ^N× N, p∈ℝ_+^2K 1/2∑_k=1^K[ln (1+ γ̃_k(p^2, W^2))+ln (1+γ̃_K+k(p^1, W^1))]/ζ(P^U_ sum(𝐩)+P̃^R_ sum(𝐩,
𝐖))+2MP^ R+2KP^ U
(<ref>), (<ref>), (<ref>), (<ref>),
ln (1+ γ̃_k(p^2, W^2))+ln (1+γ̃_K+k(p^1, W^1))
≥ r_min,
k=1,⋯, K.
The pre-log factor 1/2 in the numerator of (<ref>) accounts for the two stages needed in
communicating s_1,⋯, s_2K, while the non-transmission power consumption at the relays is 2MP^ R to
reflect the fact that the relays have to transmit twice.
§ NUMERICAL RESULTS
This section evaluates the proposed algorithms through simulation.
The channels in the received signal equations (<ref>) and (<ref>) are assumed to undergo Rayleigh fading,
modelled by independent circularly-symmetric complex Gaussian random variables
with zero mean and unit variance, while the background noises n_R,m
and n_k are also normalized, i.e., σ_R^2=σ_k^2=1. The computation tolerance
for terminating the algorithms is ϵ=10^-4.
The numerical results are averaged over 1,000 random channel realizations.
Without loss of generality, we simply set P_k^U, max≡ P^U, max, P_m^A, max≡ P^A, max, P_sum^U,max=KP^U, max, and P_sum^R,max=MP^A, max/2.
P^U, max is fixed at 10 dBW but the relay power budget P_sum^R,max
varies from 0 to 30 dBW. The drain efficiency of power amplifier 1/ζ is set
to be 40 %. As in <cit.>, the circuit powers of each relay antenna and of each UE are 0.97 dBW and -13 dBW, respectively. We consider the scenarios of K ∈{1,2,3} pairs
and (M,N_R) ∈{(1, 8), (2, 4), (4, 2)}, i.e. the total number of antennas is fixed at 8 but the number of relays
is M∈{1, 2, 4}.
§.§ Maximin exchange throughput optimization
This subsection analyses the exchange throughput achieved by TWR.
The jointly optimal power allocation and relay beamforming design is referred to as OPOW, while
the optimal beamforming weights with equal UE power allocation are referred to as OW. The initial point for Algorithm
<ref> is chosen from the OW solutions.
To compare with the numerical result of <cit.>, we set r_k≡ 1 in (<ref>).
Figs. <ref>, <ref> and <ref> plot the
achievable minimum pair exchange throughput versus the relay power budget P_sum^R,max for K ∈{1,2,3}.
The improvement by OPOW over OW is significant for K=2 and K=3.
The throughput gain is more pronounced at higher P_sum^R,max. It is also observed that
using fewer relays achieves better throughput.
The result in Fig. <ref> for K=2 is consistent with that in <cit.>.
Table <ref> provides the average number of iterations of Algorithm <ref>. As can be observed,
Algorithm <ref> converges in less than 24 iterations in all considered scenarios.
§.§ EE maximization
This subsection examines the performance of energy efficiency achieved by Algorithm <ref>.
r_k in (<ref>) is set as half of the optimal value obtained by Algorithm <ref>.
Firstly, the simulation results presented in Fig. <ref>, <ref> and <ref> are for
K=2 and (M,N_R) ∈{(1, 8), (2, 4), (4, 2)}.
Fig. <ref> compares the EE performance achieved by TWR, one-way relaying
and TWR with UE fixed equal power allocation, which is labelled as "other method".
It is clear from Fig. <ref> that TWR significantly outperforms both one-way relaying and
TWR with equal UE power allocation.
In the small transmit power regime, the power consumption in the denominator is dominated by the circuit power, and
the EE is maximized by maximizing the sum throughput in the numerator. As such, the EE, the sum
throughput and the transmit power all increase with the relay power budget P_sum^R,max in Figs. <ref>, <ref> and
<ref>.
However, in the larger transmit power regime, where the denominator of the EE is dominated by the actual transmit power,
the EE comes to be maximized by minimizing the transmit power in the denominator,
which saturates beyond a threshold. When the transmit power saturates in Fig. <ref>, both the
EE and the sum throughput accordingly saturate in Figs. <ref> and <ref>.
It is also observed that for a given relay power budget and a given total number of antennas across all relays, the configuration
with fewer relays is superior to the ones with more relays. This is quite expected since the configuration
with fewer relays achieves higher throughput than the ones with more relays.
Table <ref> shows that Algorithm <ref> converges
in less than 26 iterations.
Similar comparisons are provided in Figs. <ref> and <ref> for K=1 and K=3, respectively,
where the superior EE performance of TWR over one-way relaying and TWR with equal UE power allocation is again observed.
§ CONCLUSIONS
Joint UE power allocation and relay beamforming to either satisfy a UE's QoS requirement or
maximize the energy efficiency of TWR serving multiple UEs is a very difficult nonconvex optimization problem.
This paper has developed two path-following computational procedures for their solutions, which invoke a simple
convex quadratic program of low computational complexity at each iteration.
Simulation results have confirmed their rapid convergence. We have shown that TWR achieves much higher energy-efficiency
than its one-way relaying counterpart in all considered scenarios.
§ APPENDIX: PROOF FOR INEQUALITIES (<REF>), (<REF>) AND (<REF>)
The function ψ_1(z)=ln (1+z^-1) is convex on the domain z>0 <cit.>. Therefore <cit.>
ψ_1(z)≥ψ_1(z̅) +∇ψ_1(z̅)(z-z̅) ∀ z>0, z̅>0,
which is seen as
ln (1+z^-1)≥ln(1+z̅^-1)+1/z̅+1-z/(z̅+1)z̅ ∀ z>0, z̅>0.
The inequality (<ref>) is obtained by substituting x=z^-1 and x̅=z̅^-1 into (<ref>).
Next, the function ψ_2(x,α,β)=|x|^2/√(αβ) is convex on the domain
x∈ℂ, α>0 and β>0 <cit.>, so again
ψ_2(x,α,β)≥ψ_2(x̅,α̅,β̅)+⟨∇ψ_2(x̅,α̅,β̅), (x,α,β)-(x̅,α̅,β̅)⟩,
which is seen as (<ref>).
Finally, by checking its Hessian, the function ψ_3(z,t)=(ln (1+z^-1))/t is seen to be
convex on the interior of ℝ^2_+.
Therefore,
ψ_3(z,t)≥ψ_3(z̅,t̅)+⟨∇ψ_3(z̅,t̅), (z,t)-(z̅,t̅)⟩ ∀ z>0, z̅>0, t>0, t̅>0,
which is seen as
ln(1+z^-1)/t≥ 2ln(1+z̅^-1)/t̅+1/(z̅+1)t̅-
z/(z̅+1)z̅t̅-ln(1+z̅)/t̅^2t.
The inequality (<ref>) follows by substituting x=z^-1 and x̅=z̅^-1 into the last inequality.
10
Amarasuriya12
G. Amarasuriya, C. Tellambura, and M. Ardakani, “Two-way amplify-and-forward
multiple-input multiple-output relay networks with antenna selection,” IEEE J. Sel. Areas Commun., vol. 30, pp. 1513–1529, Sep. 2012.
Chung12
H. Chung, N. Lee, B. Shim, and T. Oh, “On the beamforming design for MIMO
multipair two-way relay channels,” IEEE Trans. Vehicular Technology,
vol. 16, pp. 3301–3306, Sep. 2012.
Getal13
D. Gunduz, A. Yener, A. Goldsmith, and H. V. Poor, “The multiway relay
channel,” IEEE Trans. Inf. Theory, vol. 59, pp. 51–63, Jan. 2013.
AWM14
A. Asadi, Q. Wang, and V. Mancuso, “A survey device-to-device communication in
cellular networks,” IEEE Commun. Surveys & Tut., vol. 16,
pp. 1801–1819, 4th Qart. 2014.
Liuetal15
J. Liu, N. Kato, J. Ma, and N. Kadowaki, “Device-to-device communication in
LTE-advanced networks: A survey,” IEEE Commun. Surveys Tuts.,
vol. 17, pp. 1923–1940, 4th Quart. 2015.
ZhangL09
R. Zhang, Y.-C. Liang, C. C. Chai, and S. Cui, “Optimal beamforming for
two-way multi-antenna relay channel with analogue network coding,” IEEE
J. Sel. Areas Commun., vol. 27, pp. 699–712, Jul. 2009.
Zeng11
M. Zeng, R. Zhang, and S. Cui, “On design of collaborative beamforming for
two-way relay networks,” IEEE Trans. Signal Process., vol. 59,
pp. 2284–2295, May 2011.
Chalise10
B. K. Chalise and L. Vandendorpe, “Optimization of MIMO relays for
multipoint-to-multipoint communications: Nonrobust and robust designs,” IEEE Trans. Signal Process., vol. 58, pp. 6355–6368, Dec. 2010.
Phan13
A. H. Phan, H. D. Tuan, H. H. Kha, and H. H. Nguyen, “Iterative D.C.
optimization of precoding in wireless MIMO relaying,” IEEE Trans.
Wireless Commun., vol. 12, pp. 1617–1627, Apr. 2013.
Cheng11
W. Cheng, M. Ghogho, Q. Huang, D. Ma, and J. Wei, “Maximizing the sum-rate of
amplify-and-forward two-way relaying networks,” IEEE Signal Process.
Lett., vol. 18, pp. 635–638, Nov. 2011.
Amirani12
M. Zaeri-Amirani, S. Shahbazpanahi, T. Mirfakhraie, and K. Ozdemir,
“Performance tradeoffs in amplify-and-forward bidirectional network
beamforming,” IEEE Trans. Signal Process., vol. 60, pp. 4196–4209,
Aug. 2012.
Tuybook
H. Tuy, Convex Analysis and Global Optimization (second edition).
Springer, 2016.
Zhang12
J. Zhang, F. Roemer, M. Haardt, A. Khabbazibasmenj, and S. A. Vorobyov, “Sum
rate maximization for multi-pair two-way relaying single-antenna amplify and
forward relays,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process.
(ICASSP), Mar. 2012.
Tao12
M. Tao and R. Wang, “Linear precoding for multi-pair two-way MIMO relay
systems with max-min fairness,” IEEE Trans. Signal Process., vol. 60,
pp. 5361–5370, Oct. 2012.
Wang12
R. Wang, M. Tao, and Y. Huang, “Linear precoding designs for
amplify-and-forward multiuser two-way relay systems,” IEEE Trans.
Wireless Commun., vol. 11, pp. 4457–4469, Dec. 2012.
KRVH12
A. Khabbazibasmenj, F. Roemer, S. A. Vorobyov, and M. Haardt, “Sum-rate
maximization in two-way AF MIMO relaying: Polynomial time solutions to a
class of DC programming problems,” IEEE Trans. Signal Process.,
vol. 60, pp. 5478–5493, Dec. 2012.
Khandaker11
M. R. A. Khandaker and Y. Rong, “Joint power control and beamforming for
peer-to-peer MIMO relay systems,” in Proc. IEEE Int. Conf. Wireless
Commun. Signal Process. (WCSP), Nov. 2011.
ChengPesavento12
Y. Cheng and M. Pesavento, “Joint optimization of source power allocation and
distributed relay beamforming in multiuser peer-to-peer relay networks,”
IEEE Trans. Signal Process., vol. 60, pp. 2962–2973, Jun. 2012.
Jing12
Y. Jing and S. Shahbazpanahi, “Max-min optimal joint power control and
distributed beamforming for two-way relay networks under per-node power
constraints,” IEEE Trans. Signal Process., vol. 60, pp. 6576–6589,
Dec. 2012.
Raetal14
U. Rashid, H. D. Tuan, H. H. Kha, and H. H. Nguyen, “Joint optimization of
source precoding and relay beamforming in wireless MIMO relay networks,”
IEEE Trans. Commun., vol. 62, no. 2, pp. 488–499, 2014.
Khetal13
H. H. Kha, H. D. Tuan, H. H. Nguyen, and H. H. M. Tam, “Joint design of user
power allocation and relay beamforming in two-way mimo relay networks,” in
Proc. 7th Inter. Conf. Signal Process. Commun. Syst. (ICSPCS),
pp. 1–6, 2013.
TNT17
H. D. Tuan, D. T. Ngo, and H. H. M. Tam, “Joint power allocation for
MIMO-OFDM full-duplex relaying communications,” EURASIP J. Wireless
Commun. Networking, 2017, DOI 10.1186/s13638-016-0800-4.
BJDO14
E. Bjornson, E. A. Jorswieck, M. Debbah, and B. Ottersten, “Multiobjective
signal processing optimization: The way to balance conflicting metrics in
5G systems,” IEEE Signal Process. Mag., vol. 31, pp. 14–23, Nov.
2014.
Feetal11
A. Fehske, G. Fettweis, J. Malmodin, and G. Biczok, “The global footprint of
mobile communications: The ecological and economic perspective,” IEEE
Commun. Mag., vol. 49, pp. 55–72, Aug. 2011.
Buetal16
S. Buzzi, C.-L. I, T. E. Klein, H. V. Poor, C. Yang, and A. Zappone, “A survey
of energy-efficient techniques for 5G networks and challenges ahead,” IEEE J. Select. Areas Commun., vol. 34, pp. 697–709, Apr. 2016.
XLL15
C. Xiong, L. Lu, and G. Y. Li, “Energy-efficient OFDMA-based two-way
relay,” IEEE Trans. Commun., vol. 63, pp. 3157–3169, Sep. 2015.
ZH15
J. Zhang and M. Haardt, “Energy efficient two-way non-regenerative relaying
for relays with multiple antennas,” IEEE Signal Process. Lett.,
vol. 22, pp. 1079–1083, Aug. 2015.
WLTP16
Z. Wang, L. Li, H. Tian, and A. Paulraj, “User-centric precoding designs for
the non-regenerative MIMO two-way relay systems,” IEEE Commun.
Lett., vol. 20, pp. 1935–1938, Oct. 2016.
Phan12
A. H. Phan, H. D. Tuan, H. H. Kha, and D. T. Ngo, “Nonsmooth optimization for
efficient beamforming in cognitive radio multicast transmission,” IEEE
Trans. Signal Process., vol. 60, pp. 2941–2951, Jun. 2012.
KTN12
H. H. Kha, H. D. Tuan, and H. H. Nguyen, “Fast global optimal power allocation
in wireless networks by local D.C. programming,” IEEE Trans.
Wireless Commun., vol. 11, pp. 510–515, Feb. 2012.
DM08
B. Dacorogna and P. Marechal, “The role of perspective functions in convexity,
polyconvexity, rank-one convexity and separate convexity,” J. of Convex
Analysis, vol. 15, no. 2, pp. 271–284, 2008.
TTN16
H. H. M. Tam, H. D. Tuan, and D. T. Ngo, “Successive convex quadratic
programming for quality-of-service management in full-duplex MU-MIMO
multicell networks,” IEEE Trans. Comm., vol. 64, pp. 2340–2353, Jun.
2016.
Xiong2012
C. Xiong, G. Y. Li, S. Zhang, Y. Chen, and S. Xu, “Energy-efficient resource
allocation in OFDMA networks,” IEEE Trans. Commun., vol. 60,
pp. 3767–3778, Dec. 2012.
Taetal16
H. M. T. Ho, H. D. Tuan, D. Ngo, T. Q. Duong, and V. Poor, “Joint load
balancing and interference management for small-cell heterogeneous networks
with limited backhaul capacity,” IEEE Trans. Wireless Commun.,
vol. PP, no. 99, pp. 1–1, 2016.
|
http://arxiv.org/abs/1701.07690v1 | 20170126132955 | Harnack inequality for subordinate random walks | [
"Ante Mimica",
"Stjepan Šebek"
] | math.PR | [
"math.PR",
"60J45"
] |
Harnack inequality for subordinate random walks
Ante Mimica and Stjepan Šebek
December 30, 2023
=======================================================================================================================
In this paper, we consider a large class of subordinate random walks X on the integer lattice ℤ^d via subordinators with Laplace exponents which are complete Bernstein functions satisfying a certain lower scaling condition at zero. We establish estimates for one-step transition probabilities, the Green function and the Green function of a ball, and prove the Harnack inequality for non-negative harmonic functions.
2010 Mathematics Subject Classification: Primary: 60J45; Secondary: 60G50, 60J10, 05C81
Keywords and phrases: random walk, subordination, Harnack inequality, harmonic function, Green function, Poisson kernel
§ INTRODUCTION
Let (Y_k)_k ≥ 1 be a sequence of independent, identically distributed random variables defined on a probability space (Ω, , ), taking values in the integer lattice ^d, with distribution (Y_k = e_i) = (Y_k = -e_i) = 1/2d, i = 1, 2, …, d, where e_i is the i-th vector of the standard basis for ^d. A simple symmetric random walk in ^d (d ≥ 1) starting at x ∈^d is a stochastic process Z = (Z_n)_n ≥ 0, with Z_0 = x and Z_n = x + Y_1 + ⋯ + Y_n.
Let Z = (Z_n)_n ≥ 0 be a simple symmetric random walk in ^d starting at the origin. Further, let
ϕ(λ) := ∫_⟨ 0, ∞⟩1 - e^-λ tμ ( t)
be a Bernstein function satisfying ϕ(1)=1. Here μ is a measure on ⟨ 0,∞⟩ satisfying ∫_⟨ 0,∞⟩(1 ∧ t) μ( t) < ∞, called the Lévy measure. For m ∈ℕ denote
c_m^ϕ := ∫_⟨ 0, ∞⟩t^m/m!e^-tμ (dt)
and notice that
∑_m = 1^∞c_m^ϕ = ∫_⟨ 0, ∞⟩(e^t - 1) e^-tμ( t) = ∫_⟨ 0, ∞⟩(1 - e^-t) μ( t) = ϕ(1) = 1.
Hence, we can define a random variable R with (R = m) = c_m^ϕ, m ∈ℕ. Now we define the random walk T = (T_n)_n ≥ 0 on ℤ_+ by T_n := ∑_k = 1^nR_k, where (R_k)_k ≥ 1 is a sequence of independent, identically distributed random variables with the same distribution as R and independent of the process Z. A subordinate random walk is a stochastic process X = (X_n)_n ≥ 0 defined by X_n := Z_T_n, n ≥ 0. It is straightforward to see that the subordinate random walk is indeed a random walk. Hence, there exists a sequence of independent, identically distributed random variables (ξ_k)_k ≥ 1 with the same distribution as X_1 such that
X_n d=∑_k = 1^n ξ_k, n ≥ 0.
We can easily find the explicit expression for the distribution of the random variable X_1:
(X_1 = x)
= (Z_T_1 = x) = (Z_R_1 = x) = ∑_m = 1^∞(Z_R_1 = x | R_1 = m) c_m^ϕ
= ∑_m = 1^∞∫_⟨ 0, ∞⟩t^m/m! e^-tμ(dt)(Z_m = x), x ∈^d.
We denote the transition matrix of the subordinate random walk X with P, i.e. P = (p(x, y) : x, y ∈^d), where p(x, y) = (x + X_1 = y).
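For a concrete example one can take ϕ(λ) = λ^α with 0 < α < 1, which is a complete Bernstein function with ϕ(1) = 1; then the weights satisfy c_1^ϕ = α and c_{m+1}^ϕ = c_m^ϕ (m-α)/(m+1), and the construction above can be simulated directly. The following Python sketch does this; the dimension, the exponent α and the truncation of the law of R are illustrative assumptions of the sketch (the truncated tail mass is negligible).

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, d = 0.7, 2                 # phi(lambda) = lambda^alpha; dimension (illustrative)
M_trunc = 10**5                   # truncation level for the law of R (assumption)

# c_m = P(R = m): c_1 = alpha, c_{m+1} = c_m (m - alpha)/(m + 1)
c = np.empty(M_trunc)
c[0] = alpha                      # c[m-1] stores c_m
for m in range(1, M_trunc):
    c[m] = c[m - 1] * (m - alpha) / (m + 1)
c /= c.sum()                      # renormalize the truncated law

def sample_path(n_jumps):
    """One trajectory of the subordinate walk X_n = Z_{T_n}."""
    pos = np.zeros(d, dtype=int)
    path = [pos.copy()]
    R = rng.choice(M_trunc, size=n_jumps, p=c) + 1     # i.i.d. copies of R
    for r in R:                   # advance the simple random walk r steps
        axes = rng.integers(0, d, size=r)
        signs = rng.choice((-1, 1), size=r)
        np.add.at(pos, axes, signs)
        path.append(pos.copy())
    return np.array(path)

path = sample_path(1000)
print("X_1000 =", path[-1], "  max jump length:",
      np.abs(np.diff(path, axis=0)).sum(axis=1).max())
```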
We will impose some additional constraints on the Laplace exponent ϕ. First, ϕ will be a complete Bernstein function <cit.> and it will satisfy the following lower scaling condition: there exist 0 < γ_1 < 1 and a_1 > 0 such that
ϕ(R)/ϕ(r)≥ a_1 (R/r)^γ_1, ∀ 0 < r ≤ R≤ 1.
In dimension d ≤ 2, we additionally assume that there exist γ_1 ≤γ_2 < 1 and a_2 > 0 such that
ϕ(R)/ϕ(r)≤ a_2 (R/r)^γ_2, ∀ 0 < r ≤ R≤ 1.
It is well known that, if ϕ is a Bernstein function, then ϕ(λ t) ≤λϕ(t) for all λ≥ 1, t > 0, implying ϕ(v)/v ≤ϕ(u)/u, 0 < u ≤ v. Using these two facts, proof of the next lemma is straightforward.
If ϕ is a Bernstein function, then for all λ, t > 0, 1 ∧λ≤ϕ(λ t) / ϕ(t) ≤ 1 ∨λ.
Using Lemma <ref> we get ϕ(R) / ϕ(r) ≤ R/r for all 0 < r ≤ R ≤ 1 and this suffices for d ≥ 3.
The main result of this paper is a scale invariant Harnack inequality for subordinate random walks. The proof will be given in the last section. Before we state the result, we define the notion of harmonic function with respect to subordinate random walk X.
We say that a function f:ℤ^d→[0, ∞⟩ is harmonic in B ⊂^d, with respect to X, if
f(x) = Pf(x) = ∑_y ∈^dp(x, y) f(y), ∀ x ∈ B.
This definition is equivalent to the mean-value property in terms of the exit from a finite subset of ^d: If B ⊂^d is finite then f:ℤ^d→[0, ∞⟩ is harmonic in B, with respect to X, if and only if f(x) = _x[f(X_τ_B)] for every x ∈ B, where τ_B := inf{n ≥ 1 : X_n ∉ B}.
For x ∈^d and r > 0 we define B(x, r) := {y ∈^d : y - x < r}. We use shorthand notation B_r for B(0, r).
[Harnack inequality]
Let X = (X_n)_n ≥ 0 be a subordinate random walk in ^d, d ≥ 1, with ϕ a complete Bernstein function satisfying (<ref>) and in the case d ≤ 2 also (<ref>). For each a < 1, there exists a constant c_a < ∞ such that if f:ℤ^d→[0, ∞⟩ is harmonic on B(x, n), with respect to X, for x ∈^d and n ∈ℕ, then
f(x_1) ≤ c_a f(x_2), ∀ x_1, x_2 ∈ B(x, an).
Notice that the constant c_a is uniform for all n ∈ℕ. That is why we call this result a scale invariant Harnack inequality.
Some authors have already dealt with this problem: the Harnack inequality was proved for the symmetric simple random walk in ^d <cit.> and for random walks with steps of infinite range, but with some assumptions on the Green function and some restrictions such as a finite second moment of the step <cit.>.
The notion of discrete subordination was developed in <cit.> and discussed in detail in <cit.>, but under different assumptions on ϕ than ours. Using discrete subordination we can obtain random walks with steps of infinite second moment, see Remark <ref>. The Harnack inequality has not been proved so far for such random walks.
In Section <ref> we state an important result about the gamma function that we use later, discuss under which conditions the subordinate random walk is transient, and introduce the functions g and j and examine their properties. The estimates of one-step transition probabilities of the subordinate random walk are given in Section <ref>. In Section <ref> we derive estimates for the Green function. This is a very valuable result which answers the question posed in <cit.>. Using the estimates developed in Sections <ref> and <ref> and following <cit.>, in Section <ref> we find estimates for the Green function of a ball. In Section <ref> we introduce the Poisson kernel and prove the Harnack inequality.
Throughout this paper, d≥ 1 and the constants a_1, a_2, γ_1, γ_2 and C_i, i = 1, 2, …, 9 will be fixed. We use c_1, c_2, … to denote generic constants, whose exact values are not important and can change from one appearance to another. The labeling of the constants c_1, c_2, … starts anew in the statement of each result. The dependence of the constant c on the dimension d will not be mentioned explicitly. We will use “:=” to denote a definition, which is read as “is defined to be”. We will use dx to denote the Lebesgue measure in ^d. We denote the Euclidean distance between x and y in ^d by x - y. For a, b ∈ℝ, a ∧ b := min{a, b} and a ∨ b := max{a, b}. For any two positive functions f and g, we use the notation f ≍ g, which is read as “f is comparable to g”, to denote that there exist some constants c_1, c_2 > 0 such that c_1 f ≤ g ≤ c_2 f on their common domain of definition. We also use notation f ∼ g to denote that lim_x →∞ f(x) / g(x) = 1.
§ PRELIMINARIES
In this section we first state an important result about the ratio of gamma functions that is needed later. Secondly, we discuss under which conditions subordinate random walk is transient. At the end of this section we define functions g and j that we use later and we prove some of their properties.
§.§ Ratio of gamma functions
Let Γ(x, a) = ∫_a^∞ t^x - 1 e^-t dt. Then
lim_x →∞Γ(x + 1, x)/Γ(x + 1) = 1/2.
Using a well-known Stirling's formula
Γ(x + 1) ∼√(2 π x) x^x e^-x, x →∞
and <cit.> that states
Γ(x + 1, x) ∼√(2^-1π x) x^x e^-x, x →∞
we get
lim_x →∞Γ(x + 1, x)/Γ(x + 1) = lim_x →∞√(2^-1π x) x^x e^-x/√(2 π x) x^x e^-x = 1/2.
§.§ Transience of subordinate random walks
Our considerations only make sense if the random walk that we defined is transient. In the case of a recurrent random walk, the Green function takes the value +∞ for every argument x. We will use the Chung-Fuchs theorem to show under which condition the subordinate random walk is transient. Denote by φ_X_1 the characteristic function of one step of the subordinate random walk. We want to prove that there exists δ > 0 such that
∫_⟨ -δ, δ⟩^d1/(1 - φ_X_1(θ)) dθ < +∞.
The law of variable X_1 is given with (<ref>). We denote one step of the simple symmetric random walk (Z_n)_n ≥ 0 with Y_1 and the characteristic function of that random variable with φ. Assuming θ < 1 we have
φ_X_1 (θ)
= e^i θ· X_1 = ∑_x ∈^de^i θ· x∑_m = 1^∞∫_⟨ 0, +∞⟩t^m/m! e^-tμ( t)(Z_m = x)
= ∑_m = 1^∞∫_⟨ 0, +∞⟩t^m/m! e^-tμ( t)∑_x ∈^de^i θ· x(Z_m = x) = ∑_m = 1^∞∫_⟨ 0, +∞⟩t^m/m! e^-tμ( t) (φ(θ))^m
= ∫_⟨ 0, +∞⟩(e^t φ(θ) - 1) e^-tμ( t) = ϕ(1) - ϕ(1 - φ(θ)) = 1 - ϕ(1 - φ(θ)).
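This identity is the generating-function relation ∑_m ≥ 1 c_m^ϕ s^m = 1 - ϕ(1 - s) evaluated at s = φ(θ). For the stable exponent ϕ(λ) = λ^α it can be checked numerically with the same weight recursion as in the earlier sketch; the exponent and the truncation level are assumptions of this sketch.

```python
import numpy as np

alpha, M = 0.7, 20000              # stable exponent and series truncation (assumptions)
c = np.zeros(M + 1)
c[1] = alpha
for m in range(1, M):
    c[m + 1] = c[m] * (m - alpha) / (m + 1)

for s in (0.1, 0.5, 0.9):          # s plays the role of phi(theta)
    lhs = np.sum(c[1:] * s**np.arange(1, M + 1))
    print(f"s = {s}: sum c_m s^m = {lhs:.10f},  1 - (1-s)^alpha = {1 - (1 - s)**alpha:.10f}")
```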
From <cit.> we have
φ(θ) = 1/d∑_m = 1^dcos(θ_m), θ = (θ_1, θ_2, …, θ_d).
This is a real-valued function, so
∫_⟨ -δ, δ⟩^d1/(1 - φ_X_1(θ)) dθ = ∫_⟨ -δ, δ⟩^d1/ϕ(1 - φ(θ)) dθ.
From Taylor's theorem it follows that there exists a ≤ 1 such that
φ(θ) ≤ 1 - (1/4d)θ^2, θ∈ B(0, a).
Now we take δ such that ⟨ -δ, δ⟩^d ⊂ B(0, a). From (<ref>), using the fact that ϕ is increasing, we get
1/ϕ(1 - φ(θ))≤1/ϕ(θ^2 / 4d), θ∈ B(0, a).
Hence,
∫_⟨ -δ, δ⟩^d1/ϕ(1 - φ(θ)) dθ ≤∫_⟨ -δ, δ⟩^d1/ϕ(θ^2 / 4d) dθ≤∫_B(0, a)ϕ(θ^2)/ϕ(θ^2 / 4d)1/ϕ(θ^2) dθ
≤ a_2 (4d)^γ_2∫_B(0, a)1/ϕ(θ^2) dθ = c_1 (4d)^γ_2∫_0^ar^d - 1/ϕ(r^2) dr
= c_1 (4d)^γ_2/ϕ(a)∫_0^ar^d - 1ϕ(a)/ϕ(r^2) dr≤c_1 a_2 (4ad)^γ_2/ϕ(a)∫_0^ar^d - 2 γ_2 - 1 dr
and the last integral converges for d - 2 γ_2 - 1 > -1. So, the subordinate random walk is transient for γ_2 < d/2. Notice that in the case when d ≥ 3 we have γ_2 < d/2 even if γ_2 = 1. That is the reason why we do not need (<ref>) in proving results in dimensions greater than or equal to 3. If some particular result depends on the dimension, we will write its proof using (<ref>) just to show that the result is true even in dimensions 1 and 2 when γ_2 < d/2. In the case when d ≥ 3, we can replace a_2 and γ_2 from (<ref>) with 1 and only use Lemma <ref>.
§.§ Properties of functions g and j
Let g:⟨ 0, +∞⟩→⟨ 0, +∞⟩ be defined by
g(r) = 1/r^d ϕ(r^-2)
and let j:⟨ 0, +∞⟩→⟨ 0, +∞⟩ be defined by
j(r) = r^-dϕ(r^-2).
We use these functions in numerous places in our paper. In this section we present some of their properties that we need later.
Assume (<ref>) (if d ≤ 2) and let 1 ≤ r ≤ q. Then g(r) ≥ a_2^-1 g(q).
Using (<ref>) and the fact that d > 2γ_2 we have
g(r) = (q/r)^d ϕ(q^-2)/ϕ(r^-2) g(q) ≥1/a_2 (q/r)^{d - 2γ_2} g(q) ≥1/a_2 g(q).
We prove similar assertion for the function j.
Assume (<ref>) and let 1 ≤ r ≤ q. Then j(r) ≥ a_1 j(q).
Using (<ref>) we have
j(r) = (q/r)^d ϕ(r^-2)/ϕ(q^-2) j(q) ≥ a_1 (q/r)^{d + 2γ_1} j(q) ≥ a_1 j(q).
Using (<ref>), (<ref>) and Lemma <ref> we can easily prove a lot of different results about functions g and j. We will state only those results that we need in the remaining part of our paper. For the first lemma we do not need any additional assumptions on the function ϕ. For the second one we need (<ref>) and for the third one we need (<ref>).
Let r ≥ 1. If 0 < a ≤ 1 then
j(ar) ≤ a^{-d - 2} j(r),
g(ar) ≥ a^{-d + 2} g(r).
If a ≥ 1 then
j(ar) ≥ a^{-d - 2} j(r).
Assume (<ref>) and let 0 < a ≤ 1 and r ≥ 1 such that ar ≥ 1. Then
g(ar) ≤g(r)/(a_1 a^{d - 2γ_1}).
Assume (<ref>) and let r ≥ 1. If 0 < a ≤ 1 such that ar ≥ 1 then
g(ar) ≥g(r)/(a_2 a^{d - 2γ_2}).
If a ≥ 1 then
g(ar) ≤(a_2/a^{d - 2γ_2}) g(r).
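The lemmas of this subsection are easy to test numerically. A minimal sketch, for the illustrative choice ϕ(λ) = (λ + λ^α)/2 (a complete Bernstein function with ϕ(1) = 1 for which the lower and upper scaling conditions hold with a_1 = 1/2, a_2 = 2 and γ_1 = γ_2 = α), checks the comparison lemmas for g and j on random pairs 1 ≤ r ≤ q; all numerical parameters below are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, d = 0.4, 1                  # need d > 2*gamma_2 = 2*alpha (here 1 > 0.8)
a_1, a_2 = 0.5, 2.0                # scaling constants for this particular phi

phi = lambda lam: 0.5 * (lam + lam**alpha)     # complete Bernstein, phi(1) = 1
g = lambda r: 1.0 / (r**d * phi(r**-2.0))
j = lambda r: r**(-d) * phi(r**-2.0)

for _ in range(10**4):
    r = rng.uniform(1, 100)
    q = rng.uniform(r, 200)
    assert g(r) >= g(q) / a_2 - 1e-12     # g-comparison lemma
    assert j(r) >= a_1 * j(q) - 1e-12     # j-comparison lemma
print("g and j comparison lemmas verified on random 1 <= r <= q")
```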
§ TRANSITION PROBABILITY ESTIMATES
In this section, we will investigate the behavior of the expression (X_1 = z). We will prove that (X_1 = z) ≍ j(z), z ≠ 0. First we have to examine the behavior of the expression c_m^ϕ.
Assume (<ref>) and let c_m^ϕ be as in (<ref>). Then
c_m^ϕ≍ϕ(m^-1)/m, m∈ℕ.
Since ϕ is a complete Bernstein function, there exists completely monotone density μ(t) such that
c_m^ϕ = ∫_0^∞t^m/m! e^-tμ(t) dt, m∈ℕ.
From <cit.> we have
μ(t) ≤ (1 - 2e^-1)^-1 t^-1ϕ(t^-1) = c_1 t^-1ϕ(t^-1), t > 0
and
μ(t) ≥ c_2 t^-1ϕ(t^-1), t ≥ 1.
Inequality (<ref>) holds only if (<ref>) is satisfied and for inequality (<ref>) we do not need any scaling conditions. Using monotonicity of μ, (<ref>) and (<ref>) we have
c_m^ϕ≥μ(m)/m!∫_0^m t^m e^-t dt = μ(m) (1 - Γ(m + 1, m)/Γ(m + 1))≥1/4μ(m) ≥c_2/4ϕ(m^-1)/m
for m large enough. On the other side, using inequality (<ref>), monotonicity of μ and Lemma <ref>, we get
c_m^ϕ ≤1/m!∫_0^m t^m e^-t c_1 ϕ(t^-1)/t dt + μ(m)/m!∫_m^∞t^m e^-t dt
≤c_1/m!ϕ(m^-1) ∫_0^m t^m - 1 e^-tϕ(t^-1)/ϕ(m^-1) dt + μ(m)/m!∫_0^∞t^m e^-t dt
≤ c_1 ϕ(m^-1) 1/Γ(m)∫_0^∞t^m - 2 e^-t dt + μ(m) = c_1 ϕ(m^-1) Γ(m - 1)/Γ(m) + μ(m)
≤ c_1 ϕ(m^-1)/m + c_1 ϕ(m^-1)/m = 2c_1 ϕ(m^-1)/m.
Hence, we have
c_2/4ϕ(m^-1)/m≤ c_m^ϕ≤ 2c_1 ϕ(m^-1)/m
for m large enough, but we can change constants and get (<ref>).
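For the stable exponent ϕ(λ) = λ^α the statement of this lemma can be made completely explicit: c_m^ϕ = αΓ(m-α)/(Γ(1-α)Γ(m+1)) ∼ (α/Γ(1-α)) m^-1-α, while ϕ(m^-1)/m = m^-1-α. A short numerical sketch (the exponent and the index are illustrative assumptions):

```python
from math import gamma

alpha, M = 0.7, 10**6
c = alpha                        # c_1 = alpha; recursion c_{m+1} = c_m (m - alpha)/(m + 1)
for m in range(1, M):
    c *= (m - alpha) / (m + 1)   # after the loop, c equals c_M
print(f"c_M / (phi(1/M)/M) = c_M * M^(1+alpha) = {c * M**(1 + alpha):.6f}")
print(f"theoretical limit alpha/Gamma(1-alpha) = {alpha / gamma(1 - alpha):.6f}")
```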
We are now ready to examine the expression (X_1 = z).
Assume (<ref>). Then
(X_1 = z) ≍z^-dϕ(z^-2), z≠ 0.
Using (<ref>) and the fact that (Z_m = z) = 0 for z > m, we have
(X_1 = z) = ∑_m ≥zc_m^ϕ(Z_m = z).
To get the upper bound for the expression (X_1 = z) we will use <cit.> which states that there are constants C' > 0 and C > 0 such that
(Z_m = z) ≤ C' m^-d/2 e^-z^2/C m, ∀ z ∈^d, ∀ m ∈.
Together with (<ref>) this result yields
(X_1 = z)
≤∑_m ≥zc_1 ϕ(m^-1)/m C' m^-d/2 e^-z^2/C m≤ c_2 ∫_z^∞ϕ(t^-1) t^-d/2 - 1 e^-z^2/C t dt
= c_2 ∫_0^z/Cϕ(C sz^-2) z^2/C s^-d/2 - 1 e^-s z^2/C s^2 ds
= c_3 z^-d∫_0^1/Cϕ(C sz^-2) s^d/2 - 1 e^-s ds + ∫_1/C^z/Cϕ(C sz^-2) s^d/2 - 1 e^-s ds
=: c_3 z^-d (I_1(z) + I_2(z)).
Let us first examine I_1(z). Using (<ref>), we get
I_1(z) = ϕ(z^-2) ∫_0^1/Cϕ (C sz^-2)/ϕ(z^-2) s^d/2 - 1 e^-s ds≤ c_4 ϕ(z^-2).
Using Lemma <ref> instead of (<ref>) and replacing the upper limit in the integral by ∞, we get I_2(z) ≤ c_5 ϕ(z^-2). Hence, (X_1 = z) ≤ c_6 z^-dϕ(z^-2).
In finding the matching lower bound for (X_1 = z), periodicity of a simple random walk plays very important role. We write n ↔ x if n and x have the same parity, i.e., if n + x_1 + x_2 + ⋯ + x_d is even. Directly from <cit.>, we get
(Z_m = z) ≥ c_7 m^-d/2 e^-d z^2/2m
for 0 ↔ z ↔ m and z≤ m^α, α < 2/3. In the case when 1 ↔ z ↔ m we have
(Z_m = z) = 1/2d∑_i = 1^d [(Z_m - 1 = z + e_i) + (Z_m - 1 = z - e_i)].
By combining (<ref>) and (<ref>), we can easily get
(Z_m = z) ≥ c_8 m^-d/2 e^-z^2/c m, z≤ m^1/2, 1 ↔ z ↔ m.
We will find lower bound for (X_1 = z) when z ↔ 0 by using (<ref>), the proof when z ↔ 1 being analogous using (<ref>). If z ↔ 0 then (Z_m = z) = 0 for m = 2l - 1, l ∈. Hence,
(X_1 = z)
≥∑_m ≥z^2, m = 2l c_9ϕ(m^-1)/m m^-d/2 e^-dz^2/2m = c_9∑_l ≥z^2 / 2ϕ((2l)^-1)/2l (2l)^-d/2 e^-dz^2/4l
≥ c_10∫_z^2 / 2^∞ϕ((2t)^-1)/2t (2t)^-d/2 e^-dz^2/4t t = c_10/2∫_z^2^∞ϕ(t^-1) t^-d/2 - 1 e^-dz^2/2t t
= c_10/2∫_0^d/2ϕ2s/d z^2d z^2/2s^-d/2 - 1 e^-s d z^2/2s^2 s
= c_11z^-dϕ(z^-2) ∫_0^d/2ϕ2s/dz^-2/ϕ (z^-2) s^d/2 - 1 e^-s s
≥ c_11z^-dϕ(z^-2) ∫_0^d/22/d s^d/2 e^-s ds = c_12z^-dϕ(z^-2),
where in the last line we used Lemma <ref>.
It follows immediately from Proposition <ref> that the second moment of the step X_1 is infinite.
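The proposition above can also be illustrated numerically. In dimension d = 1 with ϕ(λ) = λ^α the one-step law is computable essentially exactly, since (Z_m = z) is a binomial probability; the following sketch truncates the series and checks that (X_1 = z) z^1+2α stays roughly constant. The exponent, the range of z and the truncation level are assumptions of the sketch.

```python
import numpy as np
from scipy.stats import binom

alpha, M = 0.7, 200000
c = np.zeros(M + 1)
c[1] = alpha
for m in range(1, M):
    c[m + 1] = c[m] * (m - alpha) / (m + 1)

def pmf_X1(z):
    """P(X_1 = z) = sum_m c_m P(Z_m = z) in d = 1, truncated at M."""
    m = np.arange(abs(z), M + 1)
    m = m[(m + z) % 2 == 0]                        # parity: P(Z_m = z) = 0 otherwise
    return np.sum(c[m] * binom.pmf((m + z) // 2, m, 0.5))

for z in (5, 10, 20, 40):
    print(f"z = {z}: P(X_1 = z) * z^(1+2*alpha) = {pmf_X1(z) * z**(1 + 2*alpha):.5f}")
```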
§ GREEN FUNCTION ESTIMATES
The Green function of X is defined by G(x, y) = G(y - x), where
G(y) = [∑_n = 0^∞_{X_n = y}].
Note that for n ≥ 1
(X_n = y)
= (Z_T_n = y) = ∑_m = n^∞(Z_m = y) (T_n = m)
= ∑_m = n^∞∑_m_1 + ⋯ + m_n = mc_m_1^ϕ⋯ c_m_n^ϕ(Z_m = y)
Hence, for y ≠ 0 we have
G(y) = ∑_m = 1^∞∑_n = 1^m∑_m_1 + ⋯ + m_n = mc_m_1^ϕ⋯ c_m_n^ϕ(Z_m = y) = ∑_m = 1^∞c(m) (Z_m = y),
where
c(m) = ∑_n = 1^m∑_m_1 + ⋯ + m_n = mc_m_1^ϕ⋯ c_m_n^ϕ = ∑_n = 0^∞(T_n = m),
and T_n is as before. Now we will investigate the behavior of the sequence c(m). Since ϕ is a complete Bernstein function (hence special), we have
1/ϕ(λ) = ∫_⟨ 0, ∞⟩e^-λ t u(t) t
for some non-increasing function u⟨ 0, ∞⟩⟨ 0, ∞⟩ satisfying ∫_0^1u(t) t < ∞, see <cit.>.
Let c(m) be as in (<ref>). Then
c(m) = 1/m!∫_⟨ 0, ∞⟩t^m e^-t u(t) dt, m ∈ℕ_0.
We follow the proof of <cit.>. Define M(x) = ∑_m ≤ xc(m), x ∈. The Laplace transformation (M) of the measure generated by M is equal to
(M)(λ)
= ∫_[0, ∞⟩e^-λ x M( dx) = ∑_m = 0^∞c(m) e^-λ m = ∑_m = 0^∞e^-λ m∑_n = 0^∞(T_n = m)
= ∑_n = 0^∞∑_m = 0^∞e^-λ m(T_n = m) = ∑_n = 0^∞[e^-λ T_n] = ∑_n = 0^∞[e^-λ R_1]^n = 1/1 - [e^-λ R_1].
Now we calculate [e^-λ R_1]:
[e^-λ R_1]
= ∑_m = 1^∞e^-λ m∫_⟨ 0, ∞⟩t^m/m! e^-tμ( t) = ∫_⟨ 0, ∞⟩e^te^-λ - 1 e^-tμ( t) = 1 - ϕ(1 - e^-λ),
where we used ϕ(1) = 1 in the last equality. Hence, (M)(λ) = 1 / ϕ(1 - e^-λ). Now we define Φ(λ) := 1 / ϕ(λ) and we want to show that
Φ(1 - e^-λ) = ∑_m = 0^∞(-1)^m Φ^(m)(1)/m! e^-λ m.
It is easy to see that
Φ^(m)(λ) = (-1)^m ∫_⟨ 0, ∞⟩t^m e^-λ t u(t) dt. Hence,
∑_m = 0^∞(-1)^m Φ^(m)(1)/m! e^-λ m = ∑_m = 0^∞(-1)^m/m! (-1)^m ∫_⟨ 0, ∞⟩t^m e^-t u(t) t e^-λ m = Φ(1 - e^-λ).
Since (M)(λ) = 1/ϕ(1 - e^-λ) = Φ(1 - e^-λ) from calculations (<ref>) and (<ref>) we have
∑_m = 0^∞c(m) e^-λ m = ∑_m = 0^∞1/m!∫_⟨ 0, ∞⟩t^m e^-t u(t) t e^-λ m.
The statement of this lemma follows by the uniqueness of the Laplace transformation.
Assume (<ref>). Then
c(m) ≍1/m ϕ(m^-1), m ∈.
Since ϕ is a complete Bernstein function, using (<ref>) we can obtain
u(t) ≍1/t ϕ(t^-1), t ≥ 1,
where the upper bound is valid even without (<ref>) (see <cit.>) and u is as in Lemma <ref>. Using monotonicity of u, (<ref>) and (<ref>), we get that
c(m) ≥u(m)/m!∫_0^m t^m e^-t dt = u(m) (1 - Γ(m + 1, m)/Γ(m + 1))≥1/4 u(m) ≥c_1/m ϕ(m^-1),
for m large enough. Now we will find the upper bound for c(m). Here we use that t ↦ t^m e^-t is unimodal with maximum at m. By splitting the integral and using (<ref>), we have
c(m)
= 1/m!∫_0^m/2t^m e^-t u(t) t + 1/m!∫_m/2^mt^m e^-t u(t) t + 1/m!∫_m^∞t^m e^-t u(t) t
≤c_2 2^-m m^m e^-m/2/√(2 π m) m^m e^-m∫_0^m/2u(t) t + u(m/2) 1/m!∫_m/2^∞t^m e^-t t + u(m) 1/m!∫_m^∞t^m e^-t t
≤ c_2 (2^-1 e^1/2)^m/√(2 π m)∫_0^1u(t) t + c_2 (2^-1 e^1/2)^m/√(2 π m)∫_1^m/2u(t) t + u(m/2) + u(m),
for m large enough. Since 2^-1 e^1/2 < 1, ∫_0^1u(t) t < ∞ and ϕ is increasing, we have
c_2 (2^-1 e^1/2)^m/√(2 π m)∫_0^1u(t) t≤1/m≤1/m ϕ(m^-1)≤1/4c_1 u(m)
for m large enough, where we used (<ref>) in the last inequality. We will estimate the integral ∫_1^m/2u(t) t by mu(m/2). Using (<ref>) and (<ref>) we get
∫_1^m/2u(t) dt
≤ c_3 ∫_1^m/21/tϕ(t^-1) dt = c_3/ϕ(2m^-1)∫_1^m/2ϕ(2m^-1)/ϕ(t^-1) t^-1 dt
≤c_3/ϕ(2m^-1)1/a_1(m/2)^γ_1∫_1^m/2t^γ_1 - 1 dt
≤c_3/a_1ϕ(2m^-1)(m/2)^γ_1(m/2)^γ_1/γ_1 = c_3/a_1γ_11/ϕ(2m^-1).
Since u(m/2) ≥ c_4 / ((m/2)ϕ(2m^-1)) for m ≥ 2, we have 1 / ϕ(2m^-1) ≤ (1 / 2c_4) mu(m/2). Hence,
∫_1^m/2u(t) dt ≤c_3/a_1γ_11/2c_4 mu(m/2) = c_5mu(m/2).
Using this estimate and the fact that 2^-1 e^1/2 is less then 1, we have
c_2 (2^-1 e^1/2)^m/√(2π m)∫_1^m/2u(t) dt ≤ c_6 (2^-1 e^1/2)^m/√(2π m) m u(m/2) = c_6(2^-1 e^1/2)^m m^1/2 u(m/2) ≤ u(m/2)
for m large enough. Now we have to show that u(m/2) can be estimated by u(m). Again, we will use (<ref>) and (<ref>):
u(t/2) ≤c_3/(t/2)ϕ(2t^-1) = 2c_3/tϕ(t^-1)/ϕ(2t^-1)1/ϕ(t^-1)≤2c_3/a_1 2^γ_11/t ϕ(t^-1)≤2c_3/c_4 a_1 2^γ_1 u(t) = c_7 u(t),
where we assumed that t ≥ 2 because we need 2t^-1≤ 1 so that we can use (<ref>). Now, for m large enough, we have
c(m) ≤1/4c_1 u(m) + u(m/2) + u(m/2) + u(m) ≤1/4c_1 u(m) + 2c_7 u(m) + u(m) ≤c_8/m ϕ(m^-1).
Hence,
c_1/m ϕ(m^-1)≤ c(m) ≤c_8/m ϕ(m^-1)
for m large enough. We can now change the constants in such a way that the statement of this lemma is true for every m ∈ℕ.
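For ϕ(λ) = λ^α we have u(t) = t^α - 1/Γ(α), so the integral representation above gives the closed form c(m) = Γ(m+α)/(Γ(α) Γ(m+1)), and the comparability in this lemma can be seen explicitly: c(m) m ϕ(m^-1) = c(m) m^1-α → 1/Γ(α). A minimal check (the exponent and the indices are illustrative assumptions):

```python
from math import lgamma, gamma, exp

alpha = 0.7
for m in (10, 100, 10**4, 10**6):
    c_m = exp(lgamma(m + alpha) - lgamma(m + 1) - lgamma(alpha))   # Gamma(m+a)/(Gamma(a) m!)
    print(f"m = {m}: c(m) * m * phi(1/m) = {c_m * m**(1 - alpha):.6f}")
print(f"limit 1/Gamma(alpha) = {1/gamma(alpha):.6f}")
```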
Assume (<ref>) and, if d ≤ 2, assume additionally (<ref>). Then
G(x) ≍1/x^d ϕ(x^-2), x≥ 1.
We assume x≥ 1 throughout the whole proof. In (<ref>) we showed that G(x) = ∑_m = 1^∞c(m) p(m, x), where p(m, x) = (Z_m = x). Let q(m, x) = 2 (d / (2 π m))^d/2 e^-d x^2/2m and E(m, x) = p(m, x) - q(m, x). By <cit.>
E(m, x)≤ c_1 m^-d/2 / x^2.
Since p(m, x) = 0 for m <x, we have
G(x) = ∑_m > x^2c(m) p(m, x) + ∑_x≤ m ≤x^2c(m) p(m, x) =: J_1(x) + J_2(x).
First we estimate
J_1(x) = ∑_m > x^2c(m) q(m, x) + ∑_m > x^2c(m) E(m, x) =: J_11(x) + J_12(x).
By Lemma <ref>, (<ref>) and (<ref>)
J_12(x) ≤ c_2 ∑_m > x^21/m ϕ(m^-1)m^-d/2/x^2 = c_2/x^2 ϕ(x^-2)∑_m > x^2ϕ(x^-2)/ϕ(m^-1) m^-d/2 - 1
≤c_3 x^-2 γ_2/x^2 ϕ(x^-2)∫_x^2^∞t^γ_2 - d/2 - 1 t = c_4/x^21/x^d ϕ(x^-2).
Now we have
lim_x→∞x^d ϕ(x^-2) J_12(x) = 0.
By Lemma <ref>, (<ref>) and (<ref>)
J_11(x)
≍∫_x^2^∞1/t ϕ(t^-1) t^-d/2 e^-d x^2/2 t t = 1/ϕ(x^-2)∫_x^2^∞ϕ(x^-2)/ϕ(t^-1) t^-d/2 - 1 e^-d x^2/2 t t
≍x^-2 γ_i/ϕ(x^-2)∫_x^2^∞t^γ_i - d/2 - 1 e^-d x^2/2 t t = 1/x^d ϕ(x^-2)∫_0^d/2s^d/2 -γ_i - 1 e^-s s≍1/x^d ϕ(x^-2),
where the last integral converges because of the condition γ_2 < d/2. We estimate J_2(x) using (<ref>) and (<ref>):
J_2(x)
≤ c_5 ∫_x^x^2t^-d/2 - 1/ϕ(t^-1) e^-x^2/(Ct) dt = c_5/ϕ(x^-2)∫_x^x^2ϕ(x^-2)/ϕ(t^-1) t^-d/2 - 1 e^-x^2/(Ct) dt
≤c_5 x^-2 γ_1/a_1 ϕ(x^-2)∫_x^x^2t^γ_1 - d/2 - 1 e^-x^2/(Ct) dt = c_5 x^-2 γ_1/a_1 ϕ(x^-2)∫_1/C^x/C(x^2/(Cs))^γ_1 - d/2 - 1 e^-s x^2/(C s^2) ds
≤c_6/x^d ϕ(x^-2)∫_0^∞s^d/2 - γ_1 - 1 e^-s ds = c_7/x^d ϕ(x^-2).
Using J_11(x) ≥ (2 c_8) / (x^dϕ(x^-2)) and J_12(x) x^d ϕ(x^-2) ≥ - c_8 for x large enough and for some constant c_8 > 0, we get
G(x) x^d ϕ(x^-2) ≥ J_11(x) x^d ϕ(x^-2) + J_12(x) x^d ϕ(x^-2) ≥ 2c_8 - c_8 = c_8.
On the other hand
G(x) x^d ϕ(x^-2) ≤ c_9 + J_12(x) x^d ϕ(x^-2) + c_7 ≤ 2 c_9 + c_7 = c_10.
Here we used J_11(x) ≤ c_9 / (x^d ϕ(x^-2)), J_2(x) ≤ c_7 / (x^d ϕ(x^-2)) and J_12(x) x^d ϕ(x^-2) ≤ c_9 for x large enough and for some constant c_9 > 0. So, we have c_8 ≤ G(x) x^d ϕ(x^-2) ≤ c_10 for x large enough. Now we can change the constants c_8 and c_10 to get
G(x) ≍1/x^d ϕ(x^-2), for all x≥ 1.
§ ESTIMATES OF THE GREEN FUNCTION OF A BALL
Let B ⊂^d and define
G_B (x, y) = _x ∑_n = 0^τ_B - 1_{X_n = y}
where τ_B is as before. A well-known result about the Green function of a set is formulated in the following lemma.
Let B be a finite subset of ^d. Then
G_B (x, y) = G(x, y) - _x G(X_τ_B, y), x, y ∈ B,
G_B (x, x) = 1/_x (τ_B < σ_x), x∈ B,
where σ_x = inf{n ≥ 1 : X_n = x}.
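For illustration, G_B can be estimated by direct simulation; the sketch below uses the simple random walk on ℤ^3 (an example walk of our choosing, not the general subordinate walk of the paper) and estimates G_B(x, y) as the mean number of visits to y strictly before the exit time τ_B.

```python
import random

def green_ball_mc(x, y, radius=5, trials=20000, rng=random.Random(0)):
    # Estimate G_B(x, y) for the simple random walk on Z^3 and
    # B = {z : |z| < radius} by averaging the number of visits to y
    # over trajectories run until they first leave B.
    steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    total = 0
    for _ in range(trials):
        pos, visits = x, 0
        while sum(c * c for c in pos) < radius * radius:   # still inside B
            if pos == y:
                visits += 1
            dx, dy, dz = rng.choice(steps)
            pos = (pos[0] + dx, pos[1] + dy, pos[2] + dz)
        total += visits
    return total / trials

print(green_ball_mc((0, 0, 0), (0, 0, 0)))   # G_B(x, x) >= 1 by definition
print(green_ball_mc((0, 0, 0), (1, 0, 0)))
```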
Our approach in obtaining estimates for the Green function of a ball uses the maximum principle for the operator A that we define by
(Af)(x) := ((P - I)f)(x) = (Pf)(x) - (If)(x) = ∑_y ∈^dp(x, y) f(y) - f(x).
Since ∑_y ∈^dp(x, y) = 1 and p(x, y) = (X_1 = y - x) we have
(Af)(x) = ∑_y ∈^d(X_1 = y - x) (f(y) - f(x)).
Before proving the maximum principle, we will show that for the function η(x) := _x [τ_B_n] we have (Aη)(x) = -1, for all x ∈ B_n. Let x ∈ B_n. Then
η(x) = ∑_y ∈^d_x[τ_B_n| X_1 = y] _x(X_1 = y) = ∑_y ∈^d(1 + _y[τ_B_n]) (X_1 = y - x) = 1 + (Pη)(x)
and this is obviously equivalent to (Aη)(x) = -1, for all x ∈ B_n. It follows from Definition <ref> that f is harmonic in B ⊂^d if and only if (Af)(x) = 0, for all x ∈ B.
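The identity (Aη)(x) = -1 can also be verified in a toy case. The sketch below (simple random walk on ℤ with B an interval; our own example) obtains η by solving the linear system (P - I)η = -1 on B with η = 0 off B, and compares the result with the classical closed form for the expected exit time.

```python
import numpy as np

n = 10
sites = list(range(-n, n + 1))                  # B = {-n, ..., n}
idx = {s: i for i, s in enumerate(sites)}
A = np.zeros((len(sites), len(sites)))          # the operator P - I on B
for s in sites:
    A[idx[s], idx[s]] = -1.0
    for t in (s - 1, s + 1):
        if t in idx:                            # off B, eta vanishes
            A[idx[s], idx[t]] += 0.5            # p(s, t) = 1/2
eta = np.linalg.solve(A, -np.ones(len(sites)))  # solve (P - I) eta = -1
# Classical formula for the simple random walk: E_x[tau] = (n+1)^2 - x^2.
print(np.allclose(eta, [(n + 1) ** 2 - s ** 2 for s in sites]))
```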
Assume that there exists x ∈^d such that (Af)(x) < 0. Then
f(x) > inf_y ∈^d f(y).
If (<ref>) is not true, then f(x) ≤ f(y), for all y ∈^d. In this case, we have
(Pf)(x) = ∑_y ∈^d(X_1 = y - x) f(y)≥ f(x) ∑_y ∈^d(X_1 = y - x) = f(x).
This implies (Af)(x) = (Pf)(x) - f(x) ≥ 0 which is in contradiction with the assumption that (Af)(x) < 0.
We will now prove a series of lemmas and propositions in order to get the estimates for the Green function of a ball. In all those results we assume (<ref>) and, if d ≤ 2, we additionally assume (<ref>). Throughout the rest of this section, we follow <cit.>.
There exist a ∈⟨ 0, 1/3 ⟩ and C_1 > 0 such that for every n ∈
G_B_n(x, y) ≥ C_1 G(x, y), ∀ x, y ∈ B_an.
From Lemma <ref> we have
G_B_n (x, y) = G(x, y) - _x[G(X_τ_B_n, y)].
We will first prove this lemma in the case when x ≠ y. If we show that _x[G(X_τ_B_n, y)] ≤ c_1 G(x, y) for some c_1 ∈⟨ 0, 1 ⟩ we will have (<ref>) with the constant c_2 = 1 - c_1. Let a ∈⟨ 0, 1/3 ⟩ and x, y ∈ B_an. In that case, we have x - y≤ 2an. Since X_τ_B_n∉ B_n, x ≠ y and (1 - a) / (2a) > 1 if and only if a < 1/3, we have
y - X_τ_B_n≥ (1 - a)n = 1 - a/2a 2an ≥1 - a/2ax - y≥ 1.
Using Theorem <ref>, (<ref>), Lemma <ref> and (<ref>), we get
G(X_τ_B_n, y)
≍ g(y - X_τ_B_n) ≤ a_2 g((1 - a)/(2a)x - y)
≤ a_2^2 (2a/(1 - a))^(d - 2γ_2) g(x - y) ≍ a_2^2 (2a/(1 - a))^(d - 2γ_2) G(x, y).
Since 2a / (1 - a) ⟶ 0 when a → 0 and d > 2 γ_2, if we take a small enough and then fix it, we have _x[G(X_τ_B_n, y)] ≤ c_1 G(x, y) for c_1 ∈⟨ 0, 1 ⟩ and that is what we wanted to prove. Now we deal with the case when x = y. From Lemma <ref> we have G_B_n(x, x) = ((τ_B_n < σ_x))^-1 and from the definition of the function G and the transience of the random walk we get G(x, x) = G(0) ∈ [1, ∞⟩. Now, we can conclude that
G_B_n(x, x) ≥ 1 = (G(0))^-1 G(0) = (G(0))^-1 G(x, x).
If we define C_1 := min{c_2, (G(0))^-1} we have (<ref>).
Using Lemma <ref> we can prove the following result:
There exists a constant C_2 > 0 such that for all n ∈ℕ
_x[τ_B_n] ≥C_2/ϕ(n^-2), ∀ x ∈ B_an/2,
where a ∈⟨ 0, 1/3 ⟩ is as in Lemma <ref>.
Let x ∈ B_an/2. In that case, we have B(x, an/2) ⊆ B_an. We set b = a/2 for easier notation. Notice that _x[τ_B_n] = ∑_y ∈ B_n G_B_n(x, y). Using this equality, Lemma <ref>, Theorem <ref> and Lemma <ref>, we have
_x[τ_B_n]
≥∑_y ∈ B(x, bn)G_B_n (x, y)≥∑_y ∈ B(x, bn) ∖{x}C_1 G(x, y)≍∑_y ∈ B(x, bn) ∖{x}g(x - y)
≍∫_1^bng(r) r^d - 1 dr = ∫_1^bn1/r ϕ(r^-2) dr = 1/ϕ(n^-2)∫_1^bn1/rϕ(n^-2)/ϕ(r^-2) dr
≥1/a_2 ϕ(n^-2) n^2 γ_2∫_1^bnr^2 γ_2 - 1 dr = 1/2a_2 γ_2 ϕ(n^-2)b^2 γ_2 - 1/n^2 γ_2≥b^2 γ_2/4 a_2 γ_2 ϕ(n^-2),
for n large enough. Hence, we can conclude that _x[τ_B_n] ≥ C_2 / ϕ(n^-2), for all x ∈ B_an/2, for n large enough and for some C_2 > 0. As usual, we can adjust the constant to get the statement of this proposition for every n ∈ℕ. Notice that this is true regardless of the dimension because here we can always plug in γ_2 = 1.
Now we want to find the upper bound for _x [τ_B_n].
There exists a constant C_3 > 0 such that for all n ∈ℕ
_x[τ_B_n] ≤C_3/ϕ(n^-2), ∀ x ∈ B_n.
We define the process M^f = (M_n^f)_n ≥ 0 with
M_n^f := f(X_n) - f(X_0) - ∑_k = 0^n - 1(Af)(X_k)
where f is a function defined on ^d with values in , A is defined as in (<ref>) and X = (X_n)_n ≥ 0 is a subordinate random walk. By <cit.>, the process M^f is a martingale for every bounded function f. Let f := _B_2n and x ∈ B_n. By the optional stopping theorem, we have
_x [M_τ_B_n^f] = _x f(X_τ_B_n) - f(X_0) - ∑_k = 0^τ_B_n - 1(Af)(X_k) = _x [M_0^f] = 0.
Hence
_x f(X_τ_B_n) - f(X_0) = _x ∑_k = 0^τ_B_n - 1(Af)(X_k).
We now investigate both sides of the relation (<ref>). For every k < τ_B_n, X_k ∈ B_n, and for every y ∈ B_n, using Proposition <ref>, (<ref>) and (<ref>), we have
(Af)(y)
= ∑_u ∈^d(X_1 = u - y) (f(u) - f(y))≍ -∑_u ∈ B_2n^cu - y^-dϕ(u - y^-2)
≍ -∫_n^∞r^-dϕ(r^-2) r^d - 1 dr = -ϕ(n^-2) ∫_n^∞r^-1ϕ(r^-2)/ϕ(n^-2) dr
≍ -ϕ(n^-2) ∫_n^∞r^-1n^2γ_i/r^2γ_i dr = -ϕ(n^-2) n^2γ_in^-2γ_i/2γ_i≍ -ϕ(n^-2).
Using the above estimate, we get
_x ∑_k = 0^τ_B_n - 1(Af)(X_k)≍_x -∑_k = 0^τ_B_n - 1ϕ(n^-2) = -ϕ(n^-2) _x[τ_B_n].
Using (<ref>), (<ref>) and _x [f(X_τ_B_n) - f(X_0)] = _x (X_τ_B_n∈ B_2n) - 1 = -_x(X_τ_B_n∈ B_2n^c), we get
_x (X_τ_B_n∈ B_2n^c) ≍ϕ(n^-2) _x[τ_B_n]
and this implies
_x [τ_B_n] ≤C_3 _x(X_τ_B_n∈ B_2n^c)/ϕ(n^-2)≤C_3/ϕ(n^-2).
In the next two results we develop estimates for the Green function of a ball. We define A(r, s) := {x ∈^d : r ≤x < s} for r, s ∈, 0 < r < s.
There exists a constant C_4 > 0 such that for all n ∈ℕ
G_B_n(x, y) ≤ C_4n^-dη(y), ∀ x ∈ B_bn/2, y ∈ A(bn, n),
where η(y) = _y[τ_B_n], b = a/2, and a ∈⟨ 0, 1/3 ⟩ is as in Lemma <ref>.
Let x ∈ B_bn/2 and y ∈ A(bn, n). We define function h(z) := G_B_n (x, z). Notice that for z ∈ B_n ∖{x} we have
h(z) = G_B_n(x, z) = G_B_n(z, x) = ∑_y ∈^d(X_1 = y - z) G_B_n (y, x) = ∑_y ∈^d(X_1 = y - z) h(y).
Hence, h is a harmonic function in B_n ∖{x}. If we take z ∈ B(x, bn/8)^c then z - x≥ bn/8 ≥ 1 for n large enough. Using Lemma <ref> and Theorem <ref> we get
g(bn/8) ≥ a_2^-1 g(z - x) ≍ G(x, z) ≥ G_B_n(x, z) = h(z).
Hence, h(z) ≤ kg(bn/8) for z ∈ B(x, bn/8)^c and for some constant k > 0. Notice that A(bn, n) ⊆ B(x, bn/8)^c, hence y ∈ B(x, bn/8)^c. Using these facts together with Proposition <ref>, we have
A
(h ∧ kg(bn/8))(y) = A(h ∧ kg(bn/8) - h)(y)
= ∑_v ∈^d(X_1 = v - y) (h(v) ∧ kg(bn/8) - h(v) - h(y) ∧ kg(bn/8) + h(y))
≍∑_v ∈ B(x, bn/8)j(v - y) (h(v) ∧ kg(bn/8) - h(v))≥ -∑_v ∈ B(x, bn/8)j(v - y) h(v)
≥ -∑_v ∈ B(x, bn/8)a_1^-1 j(bn/8) h(v) = -a_1^-1 j(bn/8) ∑_v ∈ B(x, bn/8)G_B_n(x, v)≥ -a_1^-1 j(bn/8) η(x),
where in the last line we used Lemma <ref> together with v - y≥ bn/8 ≥ 1 for v ∈ B(x, bn/8) and for n large enough. Using (<ref>) we get j(bn/8) ≤ (b/8)^-d-2 j(n). Hence, using (<ref>), we have
A(h ∧ kg(bn/8))(y) ≥ -c_1 n^-dϕn^-2η(x) ≥ -c_1 n^-dϕn^-2 C_3ϕn^-2^-1 = -c_2 n^-d
for some c_2 > 0. On the other hand, using (<ref>) and Proposition <ref> we get
g(bn/8)
≤ a_1^-1 (bn/8)^-d + 2γ_1 g(n) ≤ (a_1 C_2)^-1 (bn/8)^-d + 2γ_1 n^-dη(z) = c_3n^-dη(z), ∀ z ∈ B_bn.
Now we define C_4 := (c_2 ∨ kc_3) + 1 and using
h(z) ∧ kg(bn/8) ≤ kg(bn/8) ≤ kc_3n^-dη(z)
we get
C_4n^-dη(z) - h(z) ∧ kg(bn/8) ≥ (C_4 - kc_3)n^-dη(z) ≥ 0, ∀ z ∈ B_bn
So, if we define u(·) := C_4n^-dη(·) - h(·) ∧ kg(bn/8), we showed that u is a non-negative function on B_bn. It is obvious that it vanishes on B_n^c and for y ∈ A(bn, n) we have
(Au)(y) = C_4n^-d (Aη)(y) - A(h ∧ kg(bn/8))(y) ≤ -C_4n^-d + c_2n^-d < 0.
Since u ≥ 0 on B_bn and u vanishes on B_n^c, if inf_y ∈^du(y) < 0 then there would exist y_0 ∈ A(bn, n) such that u(y_0) = inf_y ∈^du(y). But then, by Proposition <ref>, (Au)(y_0) ≥ 0 which is in contradiction with (Au)(y) < 0 for y ∈ A(bn, n). Hence,
u(y) = C_4n^-dη(y) - h(y) ∧ kg(bn/8) ≥ 0, ∀ y ∈^d
and then, because h(y) ≤ kg(bn/8) for y ∈ A(bn, n) we get
G_B_n(x, y) = h(y) ≤ C_4n^-dη(y), ∀ x ∈ B_bn/2, y ∈ A(bn, n).
Now we will prove a proposition that will give us the lower bound for the Green function of a ball. We use the fact that B_n ∩^d≥ c n^d for some constant c > 0, where · denotes the cardinality of a set.
There exist constants C_5 > 0 and b ≤ a/4 such that for all n ∈ℕ
G_B_n(x, y) ≥ C_5n^-dη(y), ∀ x ∈ B_bn, y ∈ A(an/2, n),
where a is as in Lemma <ref> and η(y) = _y[τ_B_n].
Let a ∈⟨ 0, 1/3 ⟩ as in Lemma <ref>. Then there exists C_1 > 0
G_B_n (x, v) ≥ C_1G(x, v), x, v ∈ B_an.
From Proposition <ref> it follows that there exists constant C_4 > 0 such that
G_B_n(x, v) ≤ C_4n^-dη(v), x ∈ B_an/4, v ∈ A(an/2, n).
From Lemma <ref> we have
η(v) ≤C_3/ϕn^-2, v∈ B_n,
for some constant C_3 > 0. By Theorem <ref> and (<ref>) there exists c_1 > 0 such that G(x) ≥ c_1 g(x), x ≠ 0. Now we take
b ≤mina/4, C_1 c_1/2 a_2^2 C_3 C_4^1/d - 2γ_2
and fix it. Notice that (C_1 c_1) / (a_2^2 C_3 b^d - 2γ_2) ≥ 2C_4. Let x ∈ B_bn, v ∈ B(x, bn). Since b ≤ a/4 we have x, v ∈ B_an. We want to prove that G_B_n(x, v) ≥ 2C_4 n^-dη(v). We will first prove that assertion for x ≠ v. In that case we have 1 ≤x - v. Since v ∈ B(x, bn), we have x - v≤ bn so we can use (<ref>), Lemma <ref> and (<ref>) to get
G_B_n(x, v) ≥ C_1 G(x, v) ≥C_1 c_1/a_2g(bn) ≥C_1 c_1/a_2^2 b^d - 2γ_2 g(n) ≥2C_3 C_4/n^d ϕ(n^-2).
Using (<ref>) and (<ref>), we get G_B_n(x, v) ≥ 2C_4 n^-dη(v) for x ≠ v. Now we will prove that G_B_n(x, x) ≥ 2C_4 n^-dη(x), for x ∈ B_bn and for n large enough. First note that
lim_n →∞n^d ϕ(n^-2) = lim_n →∞n^d ϕ(n^-2)/ϕ(1)≥lim_n →∞n^d 1/a_2 n^2 γ_2 = lim_n →∞1/a_2 n^d - 2γ_2 = ∞,
since d - 2γ_2 > 0. Therefore
2 C_4 n^-dη(x) ≤2 C_4 C_3/n^d ϕ(n^-2)≤ 1 ≤ G_B_n(x, x)
for n large enough. Hence,
C_4n^-dη(v) ≤1/2G_B_n(x, v), ∀ x ∈ B_bn, v ∈ B(x, bn).
Now we fix x ∈ B_bn and define the function
h(v) := G_B_n(x, v) ∧C_4n^-dη(v).
From (<ref>) we have h(v) ≤1/2G_B_n(x, v) for v ∈ B(x, bn). Recall that G_B_n(x, ·) is harmonic in A(an/2, n). Using (<ref>) we get h(y) = G_B_n(x, y) for y ∈ A(an/2, n). Hence, for y ∈ A(an/2, n)
(Ah)(y)
= A(h(·) - G_B_n(x, ·))(y) ≍∑_v ∈^dj(v - y) h(v) - G_B_n(x, v) - h(y) + G_B_n(x, y)
≤∑_v ∈ B(x, bn)j(v - y)h(v) - G_B_n(x, v)≤ -(a_1/2) j(2n) ∑_v ∈ B(x, bn)G_B_n(x, v),
where we used Proposition <ref> and Lemma <ref> together with 1 ≤v - y≤ 2n. Using (<ref>) and B_n ∩^d≥ c_2 n^d, we get
∑_v ∈ B(x, bn)G_B_n(x, v)≥2C_3 C_4/n^d ϕn^-2B_bn∩^d≥2c_2 C_3 C_4/n^d ϕn^-2 (bn)^d = c_3/ϕn^-2.
Using (<ref>) we get j(2n) ≥ 2^-d - 2j(n). When we put this together with (<ref>) and (<ref>), we get
(Ah)(y) ≤ -c_4n^-d.
Define u := h - κη, where
κ := minc_4/2, c_5/2, C_4/2n^-d,
where c_5 > 0 will be defined later. For y ∈ A(an/2, n)
(Au)(y) = (Ah)(y) - κ(Aη)(y) ≤ -c_4n^-d + κ≤ -c_4n^-d + c_4/2 n^-d = -c_4/2n^-d < 0.
For x ∈ B_bn⊆ B_an/2, v ∈ B_an/2 we have x - v≤ an ≤ n. We will first assume that x ≠ v so that we can use Theorem <ref>, Lemma <ref> and (<ref>). In this case, we have
G_B_n(x, v)
≥ C_1G(x, v) ≍ g(x - v) ≥1/a_2g(an) ≥1/a_2^2 a^d - 2γ_2 g(n) ≥1/a_2^2 C_3 a^d - 2γ_2n^-dη(v).
So, G_B_n(x, v) ≥ c_5 n^-dη(v) for some constant c_5 > 0 and for x ≠ v. If x = v we can use the same arguments that we used when we were proving that G_B_n(x, x) ≥ 2C_4 n^-dη(x) for n large enough to prove that G_B_n(x, x) ≥ c_5 n^-dη(x) for n large enough. Hence, G_B_n(x, v) ≥ c_5 n^-dη(v) for all x ∈ B_bn and v ∈ B_an/2 and for n large enough. Now we have
h(v) = G_B_n(x, v) ∧C_4n^-dη(v)≥c_5n^-dη(v)∧C_4n^-dη(v) = (C_4 ∧ c_5) n^-dη(v).
Hence,
u(v) = h(v) - κη(v) ≥ (C_4 ∧ c_5) n^-dη(v) - C_4/2∧c_5/2 n^-dη(v) ≥ 0.
Since u(v) ≥ 0 for v ∈ B_an/2, u(v) = 0 for v ∈ B_n^c and (Au)(v) < 0 for v ∈ A(an/2, n) we can use the same argument as in Proposition <ref> to conclude by Proposition <ref> that u(y) ≥ 0 for all y ∈^d. Since G_B_n(x, y) ≤ C_4 n^-dη(y) for x ∈ B_an/4, y ∈ A(an/2, n) we have h(y) = G_B_n(x, y). Using that, we have
G_B_n(x, y) ≥κη(y) = C_5n^-dη(y), x∈ B_bn, y ∈ A(an/2, n),
for n large enough. As before, we can change the constant and get (<ref>) for all n ∈ℕ.
Using last two propositions, we have the next corollary.
Assume (<ref>) and (<ref>). Then there exist constants C_6, C_7 > 0 and b_1, b_2 ∈⟨ 0, 1/2⟩, 2b_1 ≤ b_2 such that
C_6 n^-d_y[τ_B_n] ≤ G_B_n(x, y) ≤ C_7 n^-d_y[τ_B_n], ∀ x ∈ B_b_1n, y ∈ A(b_2n, n).
This corollary follows directly from Proposition <ref> and Proposition <ref>. We can set b_2 = a/2 where a ∈⟨ 0, 1/3 ⟩ is as in Lemma <ref> and b_1 = b where b ≤ a/4 is as in Proposition <ref>.
§ PROOF OF THE HARNACK INEQUALITY
We start this section with the proof of the proposition that will be crucial for the remaining part of our paper.
Let f^d ×^d[0, ∞⟩ be a function and B ⊂^d a finite set. For every x ∈ B we have
_x f(X_τ_B - 1, X_τ_B) = ∑_y ∈ BG_B (x, y) f(y, y + X_1) _{y + X_1 ∉ B}.
We have
_x f(X_τ_B - 1, X_τ_B) = ∑_y ∈ B, z ∈ B^c_x (X_τ_B - 1 = y, X_τ_B = z) f(y, z).
Using (<ref>), we get
_x (X_τ_B - 1 = y, X_τ_B = z) = ∑_m = 1^∞_x (X_τ_B - 1 = y, X_τ_B = z, τ_B = m)
= ∑_m = 1^∞ (x + X_m - 1 + ξ_m = z, x + X_m - 1 = y, X_1, …, X_m - 2∈ B - x)
= ∑_m = 1^∞ (ξ_m = z - y) (x + X_m - 1 = y, X_1, …, X_m - 2∈ B - x)
= (ξ_1 = z - y) ∑_m = 1^∞_x (X_m - 1 = y, X_1, …, X_m - 2∈ B)
= (X_1 = z - y) ∑_m = 1^∞_x (X_m - 1 = y, τ_B > m - 1) = (X_1 = z - y) G_B (x, y).
Hence,
_x f(X_τ_B - 1, X_τ_B) = ∑_y ∈ B, z ∈ B^cf(y, z) G_B (x, y) (y + X_1 = z)
= ∑_y ∈ BG_B (x, y) f(y, y + X_1) _{y + X_1 ∉ B}.
Formula (<ref>) can be considered as a discrete counterpart of the continuous-time Ikeda-Watanabe formula. We will refer to it as the discrete Ikeda-Watanabe formula.
It can be proved that if f^d[0, ∞⟩ is harmonic in B, with respect to X, then {f(X_n ∧τ_B) : n ≥ 0} is a martingale with respect to the natural filtration of X (proof is the same as <cit.>, except that we have a non-negative instead of a bounded function). Using this fact, we can prove the following lemma.
Let B be a finite subset of ^d. Then f^d[0, ∞⟩ is harmonic in B, with respect to X, if and only if f(x) = _x[f(X_τ_B)] for every x ∈ B.
Let us first assume that f^d[0, ∞⟩ is harmonic in B, with respect to X. We take arbitrary x ∈ B. By the martingale property f(x) = _x[f(X_n ∧τ_B)], for all n ≥ 1. First, by Fatou's lemma we have _x[f(X_τ_B)] ≤ f(x) so f(X_τ_B) is a _x-integrable random variable. Since B is a finite set, we have f ≤ M on B, for some constant M > 0, and _x(τ_B < ∞) = 1. Using these two facts, we get
f(X_n ∧τ_B) = f(X_n) _{n < τ_B} + f(X_τ_B) _{τ_B ≤ n}≤ M + f(X_τ_B).
Since the right hand side is _x-integrable, we can use the dominated convergence theorem and we get
f(x) = lim_n →∞_x[f(X_n ∧τ_B)] = _x[lim_n →∞f(X_n ∧τ_B)] = _x[f(X_τ_B)].
On the other hand, if f(x) = _x[f(X_τ_B)], for every x ∈ B, then for x ∈ B we have
f(x) = ∑_y ∈^d_x [ f(X_τ_B) | X_1 = y ] _x(X_1 = y) = ∑_y ∈^dp(x, y) _y[f(X_τ_B)] = ∑_y ∈^dp(x, y) f(y).
Hence, if we take B ⊂^d finite and f^d[0, ∞⟩ harmonic in B, with respect to X, then by Lemma <ref> and the discrete Ikeda-Watanabe formula, we get
f(x) = _x f(X_τ_B) = ∑_y ∈ BG_B (x, y) f(y + X_1) _{y + X_1 ∉ B}.
Let us define the discrete Poisson kernel of a finite set B ⊂^d by
K_B(x, z):= ∑_y ∈ BG_B (x, y) (X_1 = z - y) , x ∈ B, z ∈ B^c.
If the function f is non-negative and harmonic in B_n, with respect to X, from (<ref>) we have
f(x)
= ∑_y ∈ B_nG_B_n (x, y) ∑_z ∉ B_nf(y + X_1) _{y + X_1 ∉ B_n}| X_1 = z - y(X_1 = z - y)
= ∑_z ∉ B_n∑_y ∈ B_nG_B_n (x, y) f(y + z - y) _{y + z - y ∉ B_n}(X_1 = z - y)
= ∑_z ∉ B_nf(z) ∑_y ∈ B_nG_B_n (x, y) (X_1 = z - y) = ∑_z ∉ B_nf(z) K_B_n(x, z).
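As a sanity check of this representation, one can compute G_B via the Neumann series (I - P_B)^(-1), assemble the Poisson kernel from it, and verify that each of its rows sums to one, since constant functions are harmonic; the finite-range walk below is an assumed example of our own.

```python
import numpy as np

steps = {-2: 0.25, -1: 0.25, 1: 0.25, 2: 0.25}   # one-step law on Z
B = list(range(10))                               # the finite set B
P_B = np.array([[steps.get(y - x, 0.0) for y in B] for x in B])
G_B = np.linalg.inv(np.eye(len(B)) - P_B)         # G_B = sum_k (P_B)^k

exterior = sorted({y + s for y in B for s in steps} - set(B))
K = np.array([[sum(G_B[B.index(x), B.index(y)] * steps.get(z - y, 0.0)
                   for y in B)
               for z in exterior] for x in B])    # K_B(x, z)
print(np.allclose(K.sum(axis=1), 1.0))            # exit law sums to one
```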
Now we are ready to show that the Poisson kernel K_B_n(x, z) is comparable to an expression that is independent of x. Once we prove that, the Harnack inequality will follow immediately.
Assume (<ref>) and let b_1, b_2 ∈⟨ 0, 1/2⟩ be as in Corollary <ref>. Then K_B_n(x, z) ≍ l(z) for all x ∈ B_b_1n, where
l(z) = j(z)/ϕn^-2 + n^-d∑_y ∈ A(b_2n, n)_y[τ_B_n] j(z - y).
Splitting the expression (<ref>) for the Poisson kernel in two parts and using Proposition <ref>, we get
K_B_n(x, z) ≍∑_y ∈ B_b_2nG_B_n(x, y) j(z - y) + ∑_y ∈ A(b_2n, n)G_B_n(x, y) j(z - y).
Since G_B_n(x, y) ≍ n^-d_y[τ_B_n] for x ∈ B_b_1n, y ∈ A(b_2n, n), for the second sum in the upper expression we have
∑_y ∈ A(b_2n, n)G_B_n(x, y) j(z - y)≍ n^-d∑_y ∈ A(b_2n, n)_y[τ_B_n] j(z - y).
Now we look closely at the expression ∑_y ∈ B_b_2nG_B_n(x, y) j(z - y). Using the fact that y ∈ B_b_2n, b_2 ∈⟨ 0, 1/2⟩ and z≥ n because z ∈ B_n^c, we have
z - y≤z + y≤z + b_2n ≤z + b_2z≤ (1 + b_2)z≤ 2z.
On the other hand
z≤z - y + y≤z - y + b_2n ≤z - y + b_2z.
Hence,
1/2z≤ (1 - b_2)z≤z - y.
Combining (<ref>), (<ref>) and using Lemma <ref>, we have
1/a_1 j1/2z≥ j(z - y) ≥ a_1 j(2z).
Using (<ref>), we get j(1/2z) ≤ 2^d + 2j(z) = c_1 j(z). Similarly, from (<ref>), we get j(2z) ≥ 2^-d -2 j(z) = c_2 j(z). Hence, a_1 c_2 j(z) ≤ a_1 j(2z) ≤ j(z - y) ≤ a_2^-1 j1/2z≤ a_2^-1 c_1 j(z) for some c_1, c_2 > 0. Therefore,
j(z - y) ≍ j(z), y ∈ B_b_2n, z ∈ B_n^c.
Using (<ref>) we have
∑_y ∈ B_b_2nG_B_n(x, y) j(z - y)≍∑_y ∈ B_b_2nG_B_n(x, y) j(z) = j(z) ∑_y ∈ B_b_2nG_B_n(x, y).
Now we want to show that ∑_y ∈ B_b_2nG_B_n(x, y)≍ 1 / ϕn^-2. Using the fact that G_B_n is non-negative function and that _x[τ_B_n] ≤ C_3 / ϕn^-2 for x ∈ B_n we have
∑_y ∈ B_b_2nG_B_n(x, y)≤∑_y ∈ B_nG_B_n(x, y) = _x[τ_B_n] ≤C_3/ϕn^-2.
To prove the other inequality we will use Lemma <ref>, Theorem <ref>, Lemma <ref> together with 1 ≤x - y≤ 2b_2n, B_n ∩^d≥ c_3n^d and Lemma <ref>. Thus
∑_y ∈ B_b_2nG_B_n(x, y) ≥ C_1∑_y ∈ B_b_2n∖{x}G(x, y)≍∑_y ∈ B_b_2n∖{x}g(x - y)
≥1/a_2 (B_b_2n∩^d - 1) g(2b_2n) ≥1/a_2c_3/2(b_2n)^d 1/2^d (b_2n)^d1/ϕn^-2ϕ(n^-2)/ϕ((2b_2n)^-2)
≥c_3/2 a_21/2^d ϕ(n^-2) (2b_2)^2 ≥c_3 (2b_2)^2/2^d + 1 a_21/ϕn^-2.
Hence,
∑_y ∈ B_b_2nG_B_n(x, y)≥c_4/ϕn^-2.
From (<ref>) and (<ref>) we have
∑_y ∈ B_b_2nG_B_n(x, y)≍1/ϕn^-2.
Finally, using (<ref>) and (<ref>) we have
∑_y ∈ B_b_2nG_B_n(x, y) j(z - y)≍j(z)/ϕn^-2.
And now, from (<ref>) and (<ref>) we have the statement of the lemma.
Lemma <ref> basically states that there exist constants C_8, C_9 > 0 such that
C_8 l(z) ≤ K_B_n(x, z) ≤ C_9 l(z), x ∈ B_b_1n, z ∈ B_n^c.
Now we are ready to prove our main result.
[Proof of Theorem <ref>]
Notice that, because of the spatial homogeneity, it is enough to prove this result for balls centered at the origin. We will prove the theorem for a = b_1, where b_1 is as in Corollary <ref>. The general case follows using the standard Harnack chain argument. Let x_1, x_2 ∈ B_b_1n. Using (<ref>) we get
K_B_n(x_1, z) ≤ C_9 l(z) = C_9/C_8 C_8 l(z) ≤C_9/C_8 K_B_n(x_2, z).
Now we can multiply both sides with f(z) ≥ 0 and sum over all z ∉ B_n and we get
∑_z ∉ B_nf(z) K_B_n(x_1, z)≤C_9/C_8∑_z ∉ B_nf(z) K_B_n(x_2, z).
If we look at the expression (<ref>) we see that this means
f(x_1) ≤C_9/C_8 f(x_2)
and that is what we wanted to prove.
Acknowledgement: This work has been supported in part by the Croatian Science Foundation under the project 3526.
Ante Mimica, 20-Jan-1981 - 9-Jun-2016, <https://web.math.pmf.unizg.hr/ amimica/>
Stjepan Šebek, Faculty of Electrical Engineering and Computing, University of Zagreb, 10000 Zagreb, Croatia
E-mail address: stjepan.sebek@fer.hr
Identifying Consistent Statements about Numerical Data with Dispersion-Corrected Subgroup Discovery
Mario Boley, Bryan R. Goldsmith, Luca M. Ghiringhelli, Jilles Vreeken
====================================================================================================
Existing algorithms for subgroup discovery with numerical targets do not optimize the error or target variable dispersion of the groups they find. This often leads to unreliable or inconsistent statements about the data, rendering practical applications, especially in scientific domains, futile. Therefore, we here extend the optimistic estimator framework for optimal subgroup discovery to a new class of objective functions: we show how tight estimators can be computed efficiently for all functions that are determined by subgroup size (non-decreasing dependence), the subgroup median value, and a dispersion measure around the median (non-increasing dependence). In the important special case when dispersion is measured using the mean absolute deviation from the median, this novel approach yields a linear time algorithm. Empirical evaluation on a wide range of datasets shows that, when used within branch-and-bound search, this approach is highly efficient and indeed discovers subgroups with much smaller errors.
§ INTRODUCTION
Subgroup discovery is a well-established KDD technique (<cit.>; see <cit.> for a recent survey) with applications, e.g., in Medicine <cit.>, Social Science <cit.>, and Materials Science <cit.>. In contrast to global modeling, which is concerned with the complete characterization of some variable defined for a given population, subgroup discovery aims to detect intuitive descriptions or selectors of subpopulations in which, locally, the target variable takes on a useful distribution.
In scientific domains, like the ones mentioned above,
such local patterns are typically considered useful if they are not too specific (in terms of subpopulation size) and indicate insightful facts about the underlying physical process that governs the target variable.
Such facts could for instance be: `patients of specific demographics experience a low response to some treatment' or `materials with specific atomic composition exhibit a high thermal conductivity'.
For numeric (metric) variables, subgroups need to satisfy two criteria to truthfully represent such statements: the local distribution of the target variable must have a shifted central tendency (effect), and group members must be described well by that shift (consistency). The second requirement is captured by the group's dispersion, which determines the average error of associating group members with the central tendency value <cit.>.
Despite all three parameters—size, central tendency, and dispersion—being important, the only known approach for the efficient discovery of globally optimal subgroups, branch-and-bound search <cit.>, is restricted to objective functions that only take into account size and central tendency.
That is, if we denote by Q some subpopulation of our global population P, then the objective functions f currently available to branch-and-bound can be written as
f(Q) = g(|Q|, c(Q))
where c is some measure of central tendency (usually mean or median) and g is a function that is monotonically increasing in the subpopulation size |Q|.
A problem with all such functions is that they inherently favor larger groups with scattered target values over smaller more focused groups with the same central tendency.
That is, they favor the discovery of inconsistent statements over consistent ones—surprisingly often identifying groups with a local error that is almost as high or even higher than the global error (see Fig. <ref> for an illustration of this problem that abounded from the authors' research in Materials Science).
Although dispersion-corrected objective functions that counter-balance size by dispersion have been proposed (e.g., the `t-score' <cit.> or `mmad' <cit.>),
it remained unclear how to employ such functions outside of heuristic optimization frameworks such as greedy beam search <cit.> or selector sampling <cit.>.
Despite often finding interesting groups, such frameworks do not guarantee the detection of optimal results, which can not only be problematic for missing important discoveries but also because they therefore can never guarantee the absence of high quality groups—which often is an insight equally important as the presence of a strong pattern.
For instance, in our example in Fig. <ref>, it would be remarkable to establish that long-range interactions are to a large degree independent of nanocluster geometry.
Therefore, in this paper (Sec. <ref>), we extend branch-and-bound search to objective functions of the form
f(Q) = g(|Q|, med(Q), d(Q))
where g is monotonically increasing in the subpopulation size, monotonically decreasing in any dispersion measure around the median, and, besides that, depends only (but in arbitrary form) on the subpopulation median.
This involves developing an efficient algorithm for computing the tight optimistic estimator given by the optimal value of the objective function among all possible subsets of target values:
f̂(Q) = max{f(R) : R ⊆ Q} ,
which has been shown to be a crucial ingredient for the practical applicability of branch-and-bound <cit.>.
So far, the most general approach to this problem (first codified in <cit.>; generalized here in Sec. <ref>) is to maintain a sorted list of target values throughout the search process and then to compute Eq. (<ref>) as the maximum of all subsets R_i ⊆ Q that contain all target values of Q down to target value i—an algorithm that does not generalize to objective functions depending on dispersion.
This paper presents an alternative idea (Sec. <ref>)
where we do not fix the size of subset R_i as in the previous approach but instead fix its median to target value i. It turns out that this suffices to efficiently compute the tight optimistic estimator for all objective functions of the form of Eq. (<ref>).
Moreover, we end up with a linear time algorithm (Sec. <ref>) in the important special case where the dependence on size and dispersion is determined
by the dispersion-corrected coverage defined by
dcc(Q) = |Q|/|P|·max{1 - amd(Q)/amd(P), 0}
where amd denotes the mean absolute deviation from the median.
This is the same computational complexity as the objective function itself.
Consequently, this new approach can discover subgroups according to a more refined selection criterion without increasing the worst-case computational cost.
Additionally, as demonstrated by empirical results on a wide range of datasets (Sec. <ref>), it is also highly efficient and
successfully reduces the error of result subgroups in practice.
§ SUBGROUP DISCOVERY
Before developing the novel approach to tight optimistic estimator computation, we recall in this section the necessary basics of optimal subgroup discovery with numeric target attributes. We focus on concepts that are essential from the optimization point of view (see, e.g., <cit.> and references therein for statistical considerations). As notational convention, for a positive integer m we are using the symbol [m] to denote the set of integers {1,…,m}.
Also, for a real-valued expression x we write (x)_+ to denote max{x,0}.
A summary of the most important notations used in this paper can be found in Appendix <ref>.
§.§ Description languages, objective functions, and closed selectors
Let P denote our given global population of entities, for each of which we know the value of a real target variable y:P →ℝ and additional descriptive information that is captured in some abstract description language of subgroup selectors σ: P →{true, false}.
Each of these selectors describes a subpopulation σ⊆ P defined by
σ={p ∈ P σ(p)=}
that is referred to as the extension of σ.
Subgroup discovery is concerned with finding descriptions σ∈ that have a useful (or interesting) distribution of target values in their extension y_σ={y(p) : p ∈σ}.
This notion of usefulness is given by an objective function f. That is, the formal goal is to find elements σ∈ with maximal f(σ).
Since we assume f to be a function of the multiset of y-values, let us define f(σ)=f(σ)=f(y_σ) to be used interchangeably for convenience.
One example of a commonly used objective function is the impact measure (see <cit.>; here a scaled but order-equivalent version is given) defined by
imp(Q) = cov(Q) ( (mean(Q)-mean(P)) / (max(P)-mean(P)) )_+
where cov(Q) = |Q|/|P| denotes the coverage or relative size of Q (here—and wherever else convenient—we identify a subpopulation Q ⊆ P with the multiset of its target values).
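For concreteness, a minimal sketch of this measure (the function names and toy data below are our own):

```python
import numpy as np

def impact(Q, P):
    # Coverage times the max-normalized positive mean shift.
    cov = len(Q) / len(P)
    shift = (np.mean(Q) - np.mean(P)) / (np.max(P) - np.mean(P))
    return cov * max(shift, 0.0)

P = np.array([1.0, 2.0, 2.5, 3.0, 8.0, 9.0, 10.0])
print(impact(P[P > 5], P))   # subgroup selected by a threshold on the target
```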
The standard description language in the subgroup discovery literature[In this article we remain with this basic setting for the sake of simplicity. It is, however, useful to note that several generalizations of this concept have been proposed <cit.>, to which the contributions of this paper remain applicable.] is the language consisting of logical conjunctions of a number of base propositions (or predicates).
That is, σ∈ are of the form
σ(·) ≡π_i_1(·) ∧…∧π_i_l(·)
where the π_i_j are taken from a pool of base propositions Π={π_1,…,π_k}. These propositions usually correspond to equality or inequality constraints with respect to one variable x out of a set of description variables {x_1,…,x_n} that are observed for all population members (e.g., π(p)≡ x(p) ≥ v). However, for the scope of this paper it is sufficient to simply regard them as abstract Boolean functions π: P →{true, false}.
In this paper, we focus in particular on the refined language of closed conjunctions ⊆ <cit.>, which is
defined as ={σ∈: σ=σ} by the fixpoints of the closure operation given by
σ = ⋀{π∈Π: π⊇σ} .
These are selectors to which no further proposition can be added without reducing their extension, and it can be shown that contains at most one selector for each possible extension.
While this can reduce the search space for finding optimal subgroups by several orders of magnitude, closed conjunctions are the longest (and most redundant) description for their extension and thus do not constitute intuitive descriptions by themselves.
Hence, for reporting concrete selectors (as in Fig. <ref>), closed conjunctions have to be simplified to selectors of approximately minimum length that describe the same extension <cit.>.
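The closure operator itself is straightforward to render in code. The following is a minimal sketch, with base propositions modeled as plain Python predicates (a modeling choice of ours): the closure of a conjunction collects every proposition that holds on the conjunction's whole extension.

```python
def extension(sigma, population):
    # All population members on which every proposition of sigma holds.
    return [p for p in population if all(prop(p) for prop in sigma)]

def closure(sigma, population, all_props):
    ext = extension(sigma, population)
    return frozenset(prop for prop in all_props if all(prop(p) for p in ext))

population = [{"x": 1, "y": 4}, {"x": 2, "y": 5}, {"x": 3, "y": 1}]
props = [lambda p: p["x"] >= 2, lambda p: p["y"] >= 5, lambda p: p["x"] <= 2]
# y >= 5 holds only for the member with x = 2, so its closure also
# contains x >= 2 and x <= 2 without changing the extension.
print(len(closure({props[1]}, population, props)))   # -> 3
```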
§.§ Branch-and-bound and optimistic estimators
bstbbBst-BB
qemptyempty
qaddalladdAll
qtoptop
sucsuc
argmaxargmax
The standard algorithmic approach for finding optimal subgroups with respect to a given objective function is branch-and-bound search—a versatile algorithmic puzzle solving framework with several forms and flavors <cit.>. At its core, all of its variants assume the availability and efficient computability of two ingredients:
* A refinement operator 2^ that is monotone, i.e., for σ, φ∈ with φ∈(σ) it holds that φ⊆σ,
and that non-redundantly generates .
That is, there is a root selector ∈ such that for every σ∈ there is a unique sequence of selectors =σ_0, σ_1, …, σ_l=σ with σ_i∈(σ_i-1). In other words, the refinement operator implicitly represents a directed tree (arborescence) on the description language rooted in .
* An optimistic estimator (or bounding function) f that bounds from above the attainable subgroup value
of a selector among all more specific selectors, i.e., it holds that f(σ) ≥ f(φ) for all φ∈ with φ⊆σ.
Based on these ingredients, a branch-and-bound algorithm simply enumerates all elements of starting from using (branch), but—based on f—avoids expanding descriptions that cannot yield an improvement over the best subgroups found so far (bound). Depending on the order in which language elements are expanded, one distinguishes between depth-first, breadth-first, breadth-first iterating deepening, and best-first search.
In the last variant, the optimistic estimator is not only used for pruning the search space, but also to select the next element to be expanded, which is particularly appealing for informed, i.e., tight, optimistic estimators.
An important feature of branch-and-bound is that it effortlessly allows to speed-up the search in a sound way by relaxing the result requirement from being f-optimal to just being an a-approximation. That is, the found solution σ satisfies for all σ' ∈ that f(σ)/f(σ') ≥ a for some approximation factor a ∈ (0,1].
The pseudo-code given in Alg. <ref> summarizes all of the above ideas.
Note that, for the sake of clarity, we omitted here some other common parameters such as a depth-limit and multiple solutions (top-k), which are straightforward to incorporate <cit.>.
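To fix ideas, here is a compact sketch of the best-first variant (our own rendering, not the exact pseudo-code of Alg. <ref>): refine, f, and fhat stand for the refinement operator, the objective, and the optimistic estimator, and the pruning rule reflects the a-approximation guarantee.

```python
import heapq

def best_first_bb(root, refine, f, fhat, a=1.0):
    best, best_val = root, f(root)
    queue, tiebreak = [(-fhat(root), 0, root)], 1
    while queue:
        bound, _, sigma = heapq.heappop(queue)
        if -bound * a <= best_val:          # nothing left can beat best/a
            break
        for child in refine(sigma):
            if f(child) > best_val:
                best, best_val = child, f(child)
            if fhat(child) * a > best_val:  # otherwise prune the branch
                heapq.heappush(queue, (-fhat(child), tiebreak, child))
                tiebreak += 1
    return best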
An efficiently computable refinement operator has to be constructed specifically for the desired description language. For example for the language of conjunctions , one can define _cnj by
_cnj(σ)={σ∧π_i max{jπ_j ∈σ} < i ≤ k}
where we identify a conjunction with the set of base propositions it contains.
For the closed conjunctions ,
let us define the lexicographical prefix of a conjunction σ∈ and a base proposition index i ∈k as σi=σ∩{π_1, …, π_i}. Moreover, let us denote with σ the minimal index such that the i-prefix of σ is extension-preserving, i.e.,
σ=min{i σi=σ}.
With this we can construct a refinement operator <cit.> _ccj as
_ccj(σ)={φφ=σ∧π_j, σ < j ≤ k, π_j ∉σ, φj=σj} .
That is, a selector φ is among the refinements of σ if φ can be generated by an application of the closure operator given in Eq. (<ref>) that is prefix-preserving.
How to obtain an optimistic estimator for an objective function of interest depends on the definition of that objective.
For instance, the coverage function is a valid optimistic estimator for the impact function as defined in Eq. (<ref>), because the second factor of the impact function is upper bounded by 1.
In fact there are many different optimistic estimators for a given objective function.
Clearly, the smaller the value of the bounding function for a candidate subpopulation, the higher is the potential for pruning the corresponding branch from the enumeration tree.
Ideally, one would like to use σ↦max{f(φ) : φ⊆σ}, which is the most strict function that still is a valid optimistic estimator.
In general, however, computing this function is as hard as the original subgroup optimization problem we started with.
Therefore, as a next best option, one can disregard selectability and consider the (selection-unaware) tight optimistic estimator given by
f̂(σ) = max{f(R) : R ⊆σ} .
This leaves us with a new combinatorial optimization problem: given a subpopulation Q ⊆ P, find a sub-selection of Q that maximizes f. In the following section we will discuss strategies for solving this optimization problem efficiently for different classes of objective functions—including dispersion-corrected objectives.
§ EFFICIENTLY COMPUTABLE TIGHT OPTIMISTIC ESTIMATORS
We are going to develop an efficient algorithm for the tight optimistic estimator in three steps: First, we review and reformulate a general algorithm for the classic case of non-dispersion-aware objective functions. Then we transfer the main idea of this algorithm to the case of dispersion-corrected objectives based on the median, and finally we consider a subclass of these functions where the approach can be computed in linear time.
Throughout this section we will identify a given subpopulation Q ⊆ P with the multiset of its target values {y_1,…,y_m} and assume that the target values are indexed in ascending order, i.e., y_i ≤ y_j for i ≤ j.
Also, it is helpful to define the following partial order on finite multisets.
Let Y={y_1,…,y_m} and Z={z_1,…,z_m'} be two multisets such that their elements are indexed in ascending order. We say that Y is element-wise less or equal to Z and write Y ⪯ Z if y_i ≤ z_i for all i ∈ [min{m,m'}].
§.§ The standard case: monotone functions of a central tendency measure
The most general previous approach for computing the tight optimistic estimator for subgroup discovery with a metric target variable is described by
<cit.> where it is referred to as estimation by ordering. Here, we review this approach and give a uniform and generalized version of that paper's results.
For this, we define the general notion of a measure of central tendency as follows.
We call a mapping c that assigns a real number to every finite multiset of reals a (monotone) measure of central tendency if for all multisets Y,Z with Y ⪯ Z it holds that c(Y) ≤ c(Z).
One can check that this definition applies to the standard measures of central tendency, i.e., the arithmetic and geometric mean as well as the median[In this paper, we are using the simple definition of the median as the 0.5-quantile (as opposed to defining it as (y_m/2+y_1+m/2)/2 for even m), which simplifies many of the definitions below and additionally is well-defined in settings where averaging of target values is undesired.] med(Q)=y_⌈ m/2 ⌉, and also to weighted variants of them (note, however, that it does not apply to the mode).
With this we can define the class of objective functions for which the tight optimistic estimator can be computed efficiently by the standard approach as follows.
We call an objective function f a monotone level 1 objective function if it can be written as
f(Q) = g(|Q|, c(Q))
where c is some measure of central tendency and g is a function that is non-decreasing in both of its arguments. One can check that the impact measure falls under this category of functions as do many of its variants.
The central observation for computing the tight optimistic estimator for monotone level 1 functions is that the optimum value must be attained on a sub-multiset that contains a consecutive segment of elements of Q from the top element w.r.t. y down to some cut-off element. Formally, let us define the top sequence of sub-multisets of Q as
T_i={y_m-i+1,…, y_m}
for i ∈m and note the following observation:
Let f be a monotone level 1 objective function.
Then the tight optimistic estimator of f can be computed as the maximum value on the top sequence, i.e.,
f̂(Q) = max{f(T_i) : i ∈ [m]}.
Let R ⊆ Q be of size k with R={y_i_1,…,y_i_k}. Since y_i_j≤ y_m-k+j, we have for the top sequence element T_k that R ⪯ T_k and, hence, c(R) ≤ c(T_k), implying
f(R) = g(k, c(R)) ≤ g(k, c(T_k)) = f(T_k) .
It follows that for each sub-multiset of Q there is a top sequence element of at least equal objective value.
From this insight it is easy to derive an O(m) algorithm for computing the tight optimistic estimator under the additional assumption
that we can compute g and the “incremental central tendency problem” (i, Q, (c(T_1),…,c(T_i-1))) ↦ c(T_i) in constant time.
Note that computing the incremental problem in constant time implies to only access a constant number of target values and of the previously computed central tendency values. This can for instance be done for c = mean via the incremental formula mean(T_i)=((i-1) mean(T_i-1)+y_m-i+1)/i or for c = med through direct index access of either of the two values y_m-⌊(i-1)/2 ⌋ or y_m-⌈ (i-1)/2 ⌉.
Since, according to Prop. <ref>, we have to evaluate f only for the m candidates T_i to find f̂(Q), we can do so in time O(m) by solving the problem incrementally for i=1,…,m.
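As an illustration, for c = mean the whole computation is a single pass over the sorted target values; the trade-off g and toy data below are our own choices.

```python
def tight_estimate_level1(ys, g):
    # Tight optimistic estimator of f(Q) = g(|Q|, mean(Q)) via the
    # top sequence T_1, T_2, ... of largest target values.
    ys = sorted(ys)
    best, total = float("-inf"), 0.0
    for i in range(1, len(ys) + 1):
        total += ys[-i]                  # T_i adds the i-th largest value
        best = max(best, g(i, total / i))
    return best

g = lambda size, mean: size * mean       # a simple monotone level 1 choice
print(tight_estimate_level1([1, 5, 2, 9, 3], g))   # -> 20.0
```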
The same overall approach can be readily generalized for objective functions that are monotonically decreasing in the central tendency or those that can be written as the maximum of one monotonically increasing and one monotonically decreasing level 1 function. However, it breaks down for objective functions that depend on more than just size and central tendency—which inherently is the case when we want to incorporate dispersion-control.
§.§ Dispersion-corrected objective functions based on the median
We will now extend the previous recipe for computing the tight optimistic estimator to objective functions that
depend not only on subpopulation size and central tendency but also on the target value dispersion in the subgroup.
Specifically, we focus on the median as measure of central tendency and consider functions that are both monotonically increasing in the described subpopulation size and monotonically decreasing in some dispersion measure around the median.
To precisely describe this class of functions, we first have to formalize the notion of dispersion measure around the median.
For our purpose the following definition suffices.
Let us denote by D(Y) the multiset of absolute differences to the median of a multiset Y, i.e., D(Y)={|y_1-med(Y)|,…,|y_m-med(Y)|}.
We call a mapping d a dispersion measure around the median if d(Y) is monotone with respect to the multiset of absolute differences to its median D(Y), i.e., if D(Y) ⪯ D(Z) then d(Y) ≤ d(Z).
One can check that this definition contains the measures median absolute deviation around the median d(Y)=med(D(Y)), the root mean of squared deviations around the median d(Y)=mean({x^2 : x ∈ D(Y)})^1/2, as well as the mean absolute deviation around the median d(Y)=mean(D(Y)).[We work here with the given definition of dispersion measure because of its simplicity. Note, however, that all subsequent arguments can be extended in a straightforward way to a wider class of dispersion measures by considering the multisets of positive and negative deviations separately. This wider class also contains the interquartile range and certain asymmetric measures, which are not covered by Def. <ref>.]
Based on Def. <ref> we can specify the class of objective functions that we aim to tackle as follows: we call a function f a dispersion-corrected or level 2 objective function (based on the median) if it can be written as
f(Q) = g(|Q|, med(Q), d(Q))
where d is some dispersion measure around the median and g: ℝ^3 →ℝ is a real function that is non-decreasing in its first argument and non-increasing in its third argument (without any monotonicity requirement for the second argument).
Our recipe for optimizing these functions is then to consider only subpopulations R ⊆ Q that can be formed by selecting all individuals with a target value in some interval. Formally, for a fixed index z ∈{1,…,m} define m_z ≤ m as the maximal cardinality of a sub-multiset of the target values that has median index z, i.e.,
m_z=min{2z,2(m-z)+1} .
Now, for k ∈m_z, let us define Q^k_z as the set with k consecutive elements around index z.
That is
Q^k_z={y_z-⌊k-1/2⌋, …, y_z, …, y_z+ ⌈k-1/2⌉} .
With this we can define the elements of the median sequence Q_z as those subsets of the form of Eq. (<ref>) that maximize f for some fixed index z ∈m.
That is, Q_z=Q^k^*_z_z where k^*_z ∈m_z is minimal with
f(Q^k^*_z_z)=g(k^*_z, y_z,Q^k^*_z_z)=max{f(Q^k_z) k ∈m_z} .
Thus, the number k^*_z is the smallest cardinality that maximizes the trade-off of size and dispersion encoded by g (given the fixed median y_z=Q^k_z for all k).
Fig. <ref> shows an exemplary median sequence based on 21 random target values.
In the following proposition we note that, as desired, searching the median sequence is sufficient for finding optimal subsets of Q.
Let f be a dispersion-corrected objective function based on the median.
Then the tight optimistic estimator of f can be computed as the maximum value on the median sequence, i.e.,
f̂(Q) = max{f(Q_z) : z ∈ [m]}.
For a sub-multiset R ⊆ Q let us define the gap count γ(R) as
γ(R)={y ∈ Q ∖ R min R < y < max R} .
Let O ⊆ Q be an f-maximizer with minimal gap count, i.e., f(R) < f(O) for all R with γ(R)<γ(O).
Assume that γ(O)>0. That means there is a y ∈ Q ∖ O such that min O < y < max O. Define
S =
(O ∖{min O}) ∪{y}, if y ≤ med(O)
(O ∖{max O}) ∪{y}, otherwise .
Per definition we have |S|=|O| and med(S)=med(O). Additionally, we can check that D(S) ⪯ D(O) and, hence, d(S) ≤ d(O). This implies that
f(S) = g(|S|, med(S), d(S)) ≥ g(|O|, med(O), d(O)) = f(O) .
However, per definition of S it also holds that γ(S) < γ(O), which contradicts that O is an f-optimizer with minimal gap count. Hence, any f-maximizer O must have a gap count of zero. In other words, O is of the form O=Q^k_z as in Eq. (<ref>) for some median z ∈m and some cardinality k ∈m_z and per definition we have f(Q_z) ≥ f(O) as required.
Consequently, we can compute the tight optimistic estimator for any dispersion-corrected objective function based on the median in time O(m^2) for subpopulations of size m—again, given a suitable incremental formula for the dispersion measure d.
While this is not generally a practical algorithm in itself, it is a useful departure point for designing one.
In the next section we show how it can be brought down to linear time when we introduce some additional constraints on the objective function.
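Before turning to that special case, the quadratic-time procedure implied by Prop. <ref> can be written down directly; the sketch below enumerates all blocks Q_z^k of consecutive target values around each median position (the dispersion-corrected trade-off g is an illustrative choice of ours).

```python
import numpy as np

def tight_estimate_level2(ys, g):
    # g takes (size, median, sum of absolute deviations from the median).
    ys, m, best = np.sort(ys), len(ys), float("-inf")
    for z in range(m):                                 # 0-based median index
        m_z = min(2 * (z + 1), 2 * (m - z) - 1)
        for k in range(1, m_z + 1):
            lo, hi = z - (k - 1) // 2, z + k // 2      # k values around z
            block = ys[lo:hi + 1]
            best = max(best, g(k, ys[z], np.sum(np.abs(block - ys[z]))))
    return best

g = lambda size, med, dev: (size - dev) * med          # toy level 2 trade-off
print(tight_estimate_level2([1.0, 2.0, 2.1, 2.2, 9.0], g))
```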
§.§ Reaching linear time—objectives based on dispersion-corrected coverage
Equipped with the general concept of the median sequence, we can now address the special case of
dispersion-corrected objective functions where the trade-off between the subpopulation size and target value dispersion is captured by a linear function of size and the sum of absolute differences from the median. Concretely, let us define the dispersion-corrected coverage (w.r.t. absolute median deviation) by
dcc(Q) = |Q|/|P| (1 - amd(Q)/amd(P))_+ = ( |Q|/|P| - sad(Q)/sad(P) )_+
where sad(Q)=∑_y ∈ Q|y-med(Q)| denotes the sum of absolute deviations from the median. We then consider objective functions based on the dispersion-corrected coverage of the form
f(Q) = g(dcc(Q), med(Q))
where g is non-decreasing in its first argument. Let us note, however, that we could replace dcc by any linear function that depends positively on |Q| and negatively on sad(Q). It is easy to verify that functions of this form also obey the more general definition of level-2 objective functions given in Sec. <ref> and, hence, can be optimized via the median sequence.
The key to computing the tight optimistic estimator f̂ in linear time for functions based on dispersion-corrected coverage is then that the members of the median sequence Q_z can be computed incrementally in constant time. Indeed, we can prove the following theorem, which states that the optimal size for a multiset around median index z is within 3 of the optimal size for a multiset around median index z+1—a fact that can also be observed in the example given in Fig. <ref>.
Let f be of the form of Eq. (<ref>). For z ∈m-1 it holds for the size k_z^* of the f-optimal multiset with median z that
k_z^* ∈{max(0,k^*_z+1-3),…,min(m_z,k^*_z+1+3)} .
One idea to prove this theorem is to show that a) the gain in f for increasing the multiset around a median index z is alternating between two discrete concave functions and b) that the gains for growing multisets between two consecutive median indices are bounding each other.
For an intuitive understanding of this argument, Fig. <ref> shows for four different median indices z ∈{10,11,12,13} the dispersion-corrected coverage for the sets Q^k_z as a function in k. On closer inspection, we can observe that when considering only every second segment of each function graph, the corresponding -values have a concave shape.
A detailed proof, which is rather long and partially technical, can be found in Appendix <ref>.
It follows that, after computing the objective value of Q_m trivially as f(Q_m)=g(1/|P|, y_m), we can obtain f(Q_z-1) for z=m,…,2 by checking the at most seven candidate set sizes given by Eq. (<ref>) as
f(Q_z-1)=max{f(Q^k_z^-_z-1),…, f(Q^k_z^+_z-1)}
with k_z^-=max(k_z^*-3,1) and k_z^+=min(k_z^*+3,m_z-1).
It remains to see that we can compute individual evaluations of f in constant time (after some initial m pre-processing step). As a general data structure for quickly computing sums of absolute deviations from a center point, we can define for i ∈m the left error e_l(i) and the right error e_r(i) as
e_l(i)=∑_j=1^i-1 y_i-y_j, e_r(i)=∑_j=i+1^m y_j-y_i .
Note that we can compute these error terms for all i ∈m in time m via the recursions
e_l(i) =e_l(i-1)+(i-1)(y_i-y_i-1)
e_r(i) =e_r(i+1)+(m-i)(y_i+1-y_i)
and e_l(1)=e_r(m)=0. Subsequently, we can compute sums of deviations from center points of arbitrary subpopulations in constant time, as the following statement shows (see Appendix <ref> for a proof).
Let Q={y_1,…,y_a, …, y_z, …, y_b,…, y_m} be a multiset with 1 ≤ a<z<b ≤ m and y_i≤ y_j for i≤ j. Then the sum of absolute deviations to y_i of all elements of the submultiset {y_a, …, y_z, …, y_b} can be expressed as
∑_i=a^b y_z-y_i =e_l(z)-e_l(a)-(a-1)(y_z-y_a)
+e_r(z)-e_r(b)-(m-b)(y_b-y_z) .
With this we can compute k ↦ f(Q_z^k) in constant time (assuming g can be computed in constant time). Together with Prop. <ref> and Thm. <ref> this results in a linear time algorithm for computing Q ↦f̂(Q) (see Alg. <ref> for a pseudo-code that summarizes all ideas).
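Since the index bookkeeping is easy to get wrong, here is a sketch (0-based indexing and toy data of our own) that checks the error recursions and the constant-time deviation sum of the proposition against a brute-force computation.

```python
def errors(ys):
    # Left and right errors, shifted to 0-based indices.
    m = len(ys)
    e_l, e_r = [0.0] * m, [0.0] * m
    for i in range(1, m):
        e_l[i] = e_l[i - 1] + i * (ys[i] - ys[i - 1])
    for i in range(m - 2, -1, -1):
        e_r[i] = e_r[i + 1] + (m - 1 - i) * (ys[i + 1] - ys[i])
    return e_l, e_r

def dev_sum(ys, e_l, e_r, a, z, b):
    # Sum of |ys[z] - ys[i]| for i in {a, ..., b} in constant time.
    m = len(ys)
    return (e_l[z] - e_l[a] - a * (ys[z] - ys[a])
            + e_r[z] - e_r[b] - (m - 1 - b) * (ys[b] - ys[z]))

ys = sorted([3.0, 1.0, 4.0, 1.5, 9.0, 2.6, 5.0])
e_l, e_r = errors(ys)
a, z, b = 1, 3, 5
print(dev_sum(ys, e_l, e_r, a, z, b),
      sum(abs(ys[z] - ys[i]) for i in range(a, b + 1)))   # both 4.9
```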
§ DISPERSION-CORRECTED SUBGROUP DISCOVERY IN PRACTICE
Dataset: # Name Target |P| |Π| med(P) amd(P) | Selection Bias: cov_0 cov_1 med_0 med_1 amd_0 amd_1 | Efficiency: a_eff N_0 N_1 t_0 t_1
1 abalone rings 4,177 69 9 2.359 0.544 0.191 11 11 2.257 1.662 1 848,258 690,177 304 339
2 ailerons goal 13,750 357 -0.0008 0.000303 0.906 0.59 -0.0007 -0.0006 0.000288 0.000198 0.3 1,069,456 54,103 6,542 460
3 autoMPG8 mpg 392 24 22.5 6.524 0.497 0.497 29 29 4.791 4.791 1 96 67 0.11 0.09
4 baseball salary 337 24 740 954.386 0.362 0.003 1550 2500 1245.092 0 1 117 117 0.22 0.21
5 california med. h. value 20,640 72 179,700 88,354 0.385 0.019 262,500 500,001 94261 294,00 0.4 1,368,662 65,707 2,676 368
6 compactiv usr 8,192 202 89 9.661 0.464 0.603 94 93 7.8 3.472 0.5 2,458,105 59,053 5,161 208
7 concrete compr. strength 1,030 70 34.4 13.427 0.284 0.1291 48.97 50.7 12.744 9.512 1 512,195 221,322 43.9 35.8
8 dee consume 365 60 2.787 0.831 0.523 0.381 3.815 4.008 0.721 0.434 1 18,663 2,653 2.05 1.29
9 delta_ail sa 7,129 66 -0.0001 0.000231 0.902 0.392 0.0001 0.0002 0.000226 0.000119 1 45,194 2,632 33.3 6.11
10 delta_elv se 9,517 66 0.001 0.00198 0.384 0.369 0.002 0.002 0.00112 0.00108 1 10145 1,415 8.9 4.01
11 elevators goal 16,599 155 0.02 0.00411 0.113 0.283 0.03 0.021 0.00813 0.00373 0.05 6,356,465 526,114 13,712 2,891
12 forestfires area 517 70 0.52 12.832 0.01 0.002 86.45 278.53 56.027 0 1 340,426 264,207 23 23.7
13 friedman output 1,200 48 14.651 4.234 0.387 0.294 18.934 19.727 3.065 2.73 1 19,209 2,489 3.23 1.56
14 house price 22,784 160 33,200 28,456 0.56 0.723 45,200 34,000 40,576 27,214 0.002 1,221,696 114,566 7,937 1,308
15 laser output 993 42 46 35.561 0.32 0.093 109 135 40.313 15.662 1 2,008 815 0.96 0.83
16 mortgage 30 y. rate 1,049 128 6.71 2.373 0.256 0.097 11.61 14.41 2.081 0.98 1 40,753 1,270 11.6 1.59
17 mv y 40,768 79 -5.02086 8.509 0.497 0.349 0.076 0.193 8.541 2.032 1 6,513 1,017 31.9 13.2
18 pole output 14,998 260 0 28.949 0.40 0.24 100 100 38.995 16.692 0.2 1,041,146 2,966 2,638 15
19 puma32h thetadd6 8,192 318 0.000261 0.023 0.299 0.244 0.026 0.031 0.018 0.017 0.4 3,141,046 5,782 2,648 15.5
20 stock company10 950 80 46.625 5.47 0.471 0.337 52.5 54.375 3.741 2.515 1 85,692 1,822 12.5 1.56
21 treasury 1 m. def. rate 1,049 128 6.61 2.473 0.182 0.339 13.16 8.65 2.591 0.863 1 49,197 9,247 14.8 5.91
22 wankara mean temp. 321 87 47.7 12.753 0.545 0.296 60.6 67.6 8.873 4.752 1 191,053 4,081 11.9 1.24
23 wizmir mean temp. 1,461 82 60 12.622 0.6 0.349 72.9 78.5 8.527 3.889 1 177,768 1,409 38.5 1.48
24 binaries delta E 82 499 0.106 0.277 0.305 0.378 0.43 0.202 0.373 0.118 0.5 4,712,128 204 1,200 0.29
25 gold Evdw-Evdw0 12,200 250 0.131 0.088 0.765 0.34 0.217 0.234 0.081 0.0278 0.4 1,498,185 451 5,650 3.96
Datasets with corresponding population size (|P|), number of base propositions (|Π|), global median (med(P)) and mean absolute median deviation (amd(P)), followed by coverage (cov_0, cov_1), median (med_0, med_1), and mean absolute median deviation (amd_0, amd_1) for the best subgroup w.r.t. the non-dispersion-corrected function f_0 and the dispersion-corrected function f_1, respectively; bold-face indicates higher coverage and median and lower dispersion, underlines indicate higher dispersion than in the global population; the final column segment contains the accuracy parameter used in the efficiency study (a_eff) as well as the number of expanded nodes (N_0, N_1) and computation time in seconds (t_0, t_1) for the optimistic estimator based on the top sequence (f̂_0) and the tight optimistic estimator (f̂_1), respectively—in both cases when optimizing f_1; a depth-limit of 10 is used for all datasets with a_eff < 1, no depth-limit otherwise.
The overall result of Sec. <ref> is an efficient algorithm for dispersion-corrected subgroup discovery which, e.g., allows us to replace the coverage term in standard objective functions by the dispersion-corrected coverage.
To evaluate this efficiency claim as well as the value of dispersion-correction, let us consider as objective the normalized and dispersion-corrected impact function based on the median, i.e., f_1(Q) = dcc(Q) mds(Q) where mds is the positive relative median shift
mds(Q) = ( (med(Q)-med(P)) / (max(P)-med(P)) )_+ .
This function obeys Eq. (<ref>); thus, its tight optimistic estimator can be computed using the linear time algorithm from Sec. <ref>.
The following empirical results were gathered by applying it to a range of publicly available real-world datasets.[Datasets contain all regression datasets from the KEEL repository <cit.> with at least 5 attributes and two materials datasets from the Nomad Repository <nomad-coe.eu/>; see Tab. <ref>. Implementation available in open source Java library realKD <bitbucket.org/realKD/>. Computation times determined on MacBook Pro 3.1 GHz Intel Core i7.]
We will first investigate the effect of dispersion-correction on the output before turning to the effect of the tight optimistic estimator on the computation time.
§.§ Selection Bias of Dispersion-Correction and its Statistical Merit
To investigate the selection bias of f_1, let us also consider the non-dispersion-corrected variant f_0(Q) = cov(Q) mds(Q) where we simply replace the dispersion-corrected coverage by the ordinary coverage. This function is a monotone level 1 function; hence, its tight optimistic estimator f̂_0 can be computed in linear time using the top sequence approach.
Fig. <ref> shows the characteristics of the optimal subgroups that are discovered with respect to both of these objective functions (see also Tab. <ref> for exact values) where for all datasets the language of closed conjunctions has been used as description language.
The first observation is that—as enforced by design—for all datasets the mean absolute deviation from the median is lower for the dispersion-corrected variant (except in one case where both functions yield the same subgroup). On average the dispersion for f_1 is 49 percent of the global dispersion, whereas it is 113 percent for f_0, i.e., when not optimizing the dispersion it is on average higher in the subgroups than in the global population.
When it comes to the other subgroup characteristics, coverage and median target value, the global picture is that f_1 discovers somewhat more specific groups (mean coverage 0.3 versus 0.44 for f_0) with higher median shift (on average 0.73 normalized median deviations higher).
However, in contrast to dispersion, the behavior for median shift and coverage varies across the datasets.
In Fig. <ref>, the datasets are ordered according to the difference in subgroup medians between the optimal subgroups w.r.t. f_0 and those w.r.t. f_1.
This ordering reveals the following categorization of outcomes: When our description language is not able to reduce the error of subgroups with very high median value, f_1 settles for more coherent groups with a less extreme but still outstanding central tendency.
On the other end of the scale, when no coherent groups with moderate size and median shift can be identified, the dispersion-corrected objective selects very small groups with the most extreme target values.
The majority of datasets obey the global trend of dispersion-correction leading to somewhat more specific subgroups with higher median that are, as intended, more coherent.
To determine based on these empirical observations, whether we should generally favor dispersion correction, we have to specify an application context that specifies the relative importance of coverage, central tendency, and dispersion. For that let us consider the common statistical setting in which we do not observe the full global population P but instead subgroup discovery is performed only on an i.i.d. sample P' ⊆ P yielding subpopulations Q'=σ(P'). While σ has been optimized w.r.t. the statistics on that sample Q' we are actually interested in the properties of the full subpopulation Q = σ(P).
For instance, a natural question is what is the minimal y-value that we expect to see in a random individual q ∈ Q with high confidence.
That is, we prefer subgroups with an as high as possible threshold such that a random q ∈ Q satisfies with probability[The probability is w.r.t. to the distribution with which the sample P' ⊆ P is drawn.] 1-δ that y(q) ≥.
This criterion gives rise to a natural trade-off between the three evaluation metrics through the empirical Chebycheff inequality <cit.>, according to which we can compute such a value as Q'-ϵ(Q') where
ϵ(Q') = √((Q'^2-1)(Q')/Q'^2δ - Q')
and Y=∑_y ∈ Y (y-Y)^2 / (Y-1) is the sample variance. Note that this expression is only defined for sample subpopulations with a size of at least 1/δ. For smaller subgroups our best guess for a threshold value would be the one derived from the global sample P'-ϵ(P') (which we assume to be large enough to determine an ϵ-value). This gives rise to the following standardized lower confidence bound score that evaluates how much a subgroup improves over the global value:
scr(Q') = ( (lcb(Q') - lcb(P')) / √(σ̂^2(P')) )_+ , where
lcb(Q') = med(Q') - ϵ(Q') , if ϵ(Q') is defined ,
          med(P') - ϵ(P') , otherwise .
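As a minimal illustration of how these quantities interact, the following sketch evaluates the lower confidence bound and the resulting score for given samples of target values; the function names and the use of NumPy are our own additions, not part of the original implementation.

```python
import numpy as np

def eps(y, delta):
    """Empirical Chebyshev term; defined only for samples with |Q'| > 1/delta."""
    m = len(y)
    denom = m * m * delta - m
    if denom <= 0:                 # sample too small for the bound
        return None
    return np.sqrt((m * m - 1) * np.var(y, ddof=1) / denom)

def lcb_score(y_sub, y_glob, delta=0.05):
    """Standardized lower-confidence-bound score of a subgroup sample."""
    lcb_glob = np.median(y_glob) - eps(y_glob, delta)  # P' assumed large enough
    e_sub = eps(y_sub, delta)
    lcb_sub = np.median(y_sub) - e_sub if e_sub is not None else lcb_glob
    return max(0.0, (lcb_sub - lcb_glob) / np.sqrt(np.var(y_glob, ddof=1)))
```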
The plot on the left side of Fig. <ref> shows the score values of the optimal subgroups w.r.t. f_1 (scr_1) and f_0 (scr_0) using confidence parameter δ=0.05. With three exceptions (datasets 3, 4, and 12), the subgroup resulting from f_1 provides a higher lower bound than those from the non-dispersion-corrected variant f_0. That is, the data shows a strong advantage for dispersion correction when we are interested in selectors that mostly select individuals with a high target value from the underlying population P.
In order to test the significance of these results, we can employ the Bayesian sign-test <cit.>, a modern alternative to classic frequentist null hypothesis tests that avoids many of the well-known disadvantages of those <cit.>. With Bayesian hypothesis tests, we can directly evaluate the posterior probabilities of hypotheses given our experimental data instead of just rejecting a null hypothesis based on some arbitrary significance level. Moreover, we differentiate between sample size and effect size by the introduction of a region of practical equivalence (rope). Here, we are interested in the relative difference z̃ = (scr_1 - scr_0)/max{scr_0, scr_1} on average for random subgroup discovery problems.
Using a conservative choice for the rope, we call the two objective functions practically equivalent if the mean z̃-value is at most r=0.1. Choosing the prior belief that f_0 is superior, i.e., z̃ < -r, with a prior weight of 1, the procedure yields, based on our 25 test datasets, the posterior probability of approximately 1 that z̃ > r on average (see the right part of Fig. <ref> for an illustration of the posterior belief). Hence, we can conclude that dispersion-correction improves the relative lower confidence bound of target values on average by more than 10 percent when compared to the non-dispersion-corrected function.
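A compact way to carry out such a test is to sample from the Dirichlet posterior over the three outcome probabilities (left of the rope, inside it, right of it); the sketch below assumes this Dirichlet-based variant of the sign test, with all names our own.

```python
import numpy as np

def bayesian_sign_test(z, rope=0.1, n_samples=100_000, seed=0):
    """Dirichlet-based Bayesian sign test with a region of practical
    equivalence (sketch); the prior pseudo-count of 1 is placed on
    'f_0 superior' (z < -rope), as described in the text.
    """
    z = np.asarray(z)
    counts = np.array([
        1.0 + np.sum(z < -rope),      # f_0 superior (prior weight here)
        float(np.sum(np.abs(z) <= rope)),   # practically equivalent
        float(np.sum(z > rope)),      # f_1 superior
    ]) + 1e-12                        # guard: Dirichlet needs alpha > 0
    theta = np.random.default_rng(seed).dirichlet(counts, size=n_samples)
    winner = theta.argmax(axis=1)
    return {k: np.mean(winner == i)
            for i, k in enumerate(("f0", "rope", "f1"))}
```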
§.§ Efficiency of the Tight Optimistic Estimator
To study the effect of the tight optimistic estimator, let us compare its performance to that of a baseline estimator that can be computed with the standard top sequence approach. Since f_1 is upper bounded by f_0, f_0 is a valid, albeit non-tight, optimistic estimator for f_1 and can thus be used for this purpose.
The exact speed-up factor is determined by the ratio of enumerated nodes for both variants as well as the ratio of computation times for an individual optimistic estimator computation.
While both factors determine the practically relevant outcome, the number of nodes evaluated is a much more stable quantity, which indicates the full underlying speed-up potential independent of implementation details.
Similarly, “number of nodes evaluated” is also an insightful unit of time for measuring optimization progress.
Therefore, in addition to the computation times in seconds, t_0 and t_1, let us denote by N_0, N_1 ⊆ ℒ the sets of nodes enumerated by branch-and-bound when using f_0 and f_1 as optimistic estimator, respectively (in both cases for optimizing the dispersion-corrected objective f_1).
Moreover, when running branch-and-bound with optimistic estimator f_i, let us denote by σ^*_i(n) and σ^+_i(n) the best selector found and the top element of the priority queue (w.r.t. f_i), respectively, after n nodes have been enumerated.
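For orientation, a minimal best-first branch-and-bound skeleton of the kind assumed here is sketched below; `refine`, `f`, and `f_hat` are abstract placeholders of our own for the refinement operator, the objective, and a valid optimistic estimator (either the baseline f_0 or the tight estimator).

```python
import heapq

def branch_and_bound(root, refine, f, f_hat):
    """Best-first branch-and-bound sketch with an optimistic estimator.

    `f_hat(sel)` must upper-bound f on all refinements of `sel`; nodes are
    expanded in order of decreasing potential (best-first strategy).
    """
    best, best_val = root, f(root)
    queue = [(-f_hat(root), 0, root)]   # max-heap via negated bounds
    tiebreak = 1
    while queue:
        neg_bound, _, sel = heapq.heappop(queue)
        if -neg_bound <= best_val:      # highest remaining potential too low:
            break                       # all other queued nodes pruned as well
        for child in refine(sel):
            val = f(child)
            if val > best_val:
                best, best_val = child, val
            if f_hat(child) > best_val:  # keep only potentially improving nodes
                heapq.heappush(queue, (-f_hat(child), tiebreak, child))
                tiebreak += 1
    return best
```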
The plot on the left side of Fig. <ref> shows the speed-up factor t_0/t_1 on a logarithmic axis for all datasets in increasing order, along with the potential speed-up factors |N_0|/|N_1| (see Tab. <ref> for numerical values). There are seven datasets for which the speed-up turns out to be minor, followed by four datasets with a modest speed-up factor of 2. For the remaining 14 datasets, however, we have solid speed-up factors between 4 and 20, and in four cases immense values between 100 and 4,000.
This demonstrates the decisive potential effect of tight value estimation even when compared to another non-trivial estimator like f_0 (which itself improves over simpler options by orders of magnitude; see <cit.>).
Similar to the results in Sec. <ref>, the Bayesian sign-test for the normalized difference z=(t_1-t_0)/max{t_1,t_0} with the prior set to practical equivalence (z ∈ [-0.1,0.1]) reveals that the posterior probability of f_1 being superior to f_0 is approximately 1.
In almost all cases the potential speed-up given by the ratio of enumerated nodes is considerably higher than the actual speed-up, which shows that, despite the same asymptotic time complexity, an individual computation of the tight optimistic estimator is slower than the simpler top-sequence-based estimator, but it also indicates that there is room for improvement in the implementation.
When zooming in on the optimization progress over time for the binaries dataset, which exhibits the most extreme speed-up (right plot in Fig. <ref>),
we can see that not only does the tight optimistic estimator close the gap between best current selector and current highest potential selector much faster—thus creating the huge speed-up factor—but also that it causes better solutions to be found earlier.
This is an important property when we want to use the algorithm as an anytime algorithm, i.e., when allowing the user to terminate computation preemptively, which is important in interactive data analysis systems.
This is an advantage enabled specifically by using the tight optimistic estimators in conjunction with the best-first node expansion strategy.
§ CONCLUSION
In the preceding sections, we developed and evaluated an effective algorithm for simultaneously optimizing size, central tendency, and dispersion in subgroup discovery with a numerical target.
This algorithm is based on two central results: 1) the tight optimistic estimator for any objective function that is based on some dispersion measure around the median can be computed as the function's maximum on a linear-sized sequence of sets—the median sequence (Prop. <ref>); and 2) for objective functions based on the concept of the dispersion-corrected coverage w.r.t. the absolute deviation from the median, the individual sets of the median sequence can be generated in incremental constant time (Thm. <ref>).
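For the special case of maximizing the dispersion-corrected coverage itself, the median-sequence maximization can be sketched as follows (quadratic time overall, i.e., without the incremental constant-time formula of Thm. <ref>; all names are ours):

```python
def tight_estimate(y, n, d):
    """Quadratic-time sketch of the tight optimistic estimator for the
    case f(Q) = dcc(Q), i.e., maximizing h_z(k) = k/n - smd(Q_z^k)/d.

    `y` holds the sorted target values of the subgroup, `n = |P|`, and
    `d = smd(P)`, following the notation of the appendix below.
    """
    m = len(y)
    best = 0.0
    for z in range(m):               # 0-based median index
        smd = 0.0                    # sum of |q - y[z]| over Q_z^k
        lo = hi = z
        best = max(best, 1.0 / n)    # k = 1: Q_z^1 = {y[z]}, smd = 0
        k = 1
        while True:
            k += 1
            if k % 2 == 0:           # even k extends above the median
                hi += 1
                if hi >= m:
                    break
                smd += y[hi] - y[z]
            else:                    # odd k extends below the median
                lo -= 1
                if lo < 0:
                    break
                smd += y[z] - y[lo]
            best = max(best, k / n - smd / d)
    return best
```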
Among the possible applications of the proposed approach, the perhaps most important one is to replace the standard coverage term in classic objective functions by the dispersion-corrected coverage, i.e., the relative subgroup size minus the relative subgroup dispersion, to reduce the error of result subgroups—where error refers to the descriptive or predictive inaccuracy incurred when assuming the median value of a subgroup for all its members.
As we saw empirically for the impact function (based on the median), this correction also has a statistical advantage resulting in subgroups where we can assume greater target values for unseen group members with high confidence.
In addition to enabling dispersion-correction to known objective functions, the presented algorithm also provides novel degrees of freedom, which might be interesting to exploit in their own right:
The dependence on the median is not required to be monotone, which allows us to incorporate a more sophisticated influence of the central tendency value than simple monotone average shifts. For instance, given a suitable statistical model for the global distribution, the effect of the median could be a function of the probability of the observed median value, e.g., its Shannon information content.
Furthermore, the feasible dispersion measures allow for interesting weighting schemes, which include possibilities of asymmetric effects of the error (e.g., for only punishing one-sided deviation from the median).
Regarding the limitations of the presented approach, let us note that it cannot be directly applied to the previously proposed dispersion-aware functions, i.e., the t-score tsc(Q) = √(|Q|)(mean(Q) - mean(P))/σ̂(Q) and the mmad score for ranked data mmd(Q) = med(Q)/(2 amd(Q) + med(Q)).
While both of these functions can be optimized via the median sequence approach (assuming a t-score variant based on the median), we are lacking an efficient incremental formula for computing the individual function values for all median sequence sets, i.e., a replacement for Thm. <ref>.
Though finding such a replacement in future research is conceivable,
this leaves us for the moment with a quadratic time algorithm (in the subgroup size) for the tight optimistic estimator, which is not generally feasible (although potentially useful for smaller datasets or as part of a hybrid optimistic estimator, which uses the approach for sufficiently small subgroups only).
Since they share basic monotonicities, it is possible to use functions based on dispersion-corrected coverage as an optimization proxy for the above mentioned objectives. For instance, the ranking of the top 20 subgroups w.r.t. the dispersion-corrected binomial quality function, dcb(Q) = √(dcc(Q))(med(Q) - med(P)), turns out to have a mean Spearman rank correlation coefficient with the median-based t-score of approximately 0.783 on five randomly selected test datasets (delta_elv, laser, stock, treasury, gold). However, a more systematic understanding of the differences and commonalities of these functions is necessary to reliably replace them with one another. Moreover, the correlation deteriorates quite sharply when we compare to the original mean/variance based t-score (mean Spearman correlation coefficient 0.567), which points to the perhaps more fundamental limitation of the presented approach for dispersion-correction: it relies on using the median as measure of central tendency.
While the median and the mean absolute deviation from the median are an interpretable, robust, and sound combination of measures (the median of a set of values minimizes the sum of absolute deviations), the mean and the variance are just as sound, are potentially more relevant when sensitivity to outliers is required, and provide a wealth of statistical tools (e.g., the empirical Chebyshev's inequality used above).
Hence, a straightforward but valuable direction for future work is the extension of efficient tight optimistic estimator computation to dispersion-correction based on the mean and variance. A basic observation for this task is that objective functions based on dispersion measures around the mean must also attain their maximum on gap-free intervals of target values. However, for a given collection of target values, there is a quadratic number of intervals such that a further idea is required in order to attain an efficient, i.e., (log-)linear time algorithm.
Another valuable direction for future research is the extension of consistency and error optimization to the case of multidimensional target variables where subgroup parameters can represent complex statistical models <cit.>. While this setting is algorithmically more challenging than the univariate case covered here, the underlying motivation remains: balancing group size and exceptionality, i.e., distance of local to global model parameters, with consistency, i.e., local model fit, should lead to the discovery of more meaningful statements about the data and the underlying domain.
Acknowledgements
The authors thank the anonymous reviewers for their useful and constructive suggestions.
Jilles Vreeken and Mario Boley are supported by the Cluster of Excellence “Multimodal Computing and Interaction” within the Excellence Initiative of the German Federal Government.
Bryan R. Goldsmith acknowledges support from the Alexander von Humboldt-Foundation with a Postdoctoral Fellowship.
Additionally, this work was supported through the European Union's Horizon 2020 research and innovation program under
grant agreement no. 676580 with The Novel Materials Discovery (NOMAD) Laboratory, a European Center of
Excellence.
§ PROOF OF THEOREM <REF>
In order to prove Thm. <ref>, let us start by noting that for functions of the form of Eq. (<ref>), finding the set size k^*_z corresponds to maximizing the dispersion-corrected coverage among all multisets with consecutive elements around median y_z (as defined in Eq. <ref>).
In order to analyze this problem, let us write
h_z(k) = dcc(Q^k_z) = |Q^k_z|/|P| - smd(Q^k_z)/smd(P)

for the dispersion-corrected coverage of the multiset Q^k_z.
Let Δh_z: [m_z] → ℝ denote the difference or gain function of h_z, i.e., Δh_z(k) = h_z(k) - h_z(k-1), where we consider Q^0_z = ∅ and, hence, h_z(0) = 0. With this definition we can show that h_z is alternating between two concave functions, i.e., considering either only the even or only the odd subset of its domain, the gains are monotonically decreasing. More precisely:

For all k ∈ [m_z] ∖ {1,2} we have that Δh_z(k) ≤ Δh_z(k-2).
For k ∈ [m_z], let us denote by q^k_z the additional y-value that Q_z^k contains compared to Q_z^{k-1} (considering Q_z^0 = ∅), i.e., Q_z^k ∖ Q_z^{k-1} = {q_z^k}. We can check that

q_z^k = y_{z - ⌊(k-1)/2⌋} , if k is odd ,
        y_{z + ⌈(k-1)/2⌉} , if k is even .
With this and using the shorthands n = |P| and d = smd(P) we can write
Δh_z(k) - Δh_z(k-2)
= h_z(k) - h_z(k-1) - (h_z(k-2) - h_z(k-3))
= k/n - smd(Q^k_z)/d - (k-1)/n + smd(Q^{k-1}_z)/d - (k-2)/n + smd(Q^{k-2}_z)/d + (k-3)/n - smd(Q^{k-3}_z)/d
= (1/n)(k - (k-1) - (k-2) + (k-3)) + (1/d)(smd(Q^{k-2}_z) - smd(Q^k_z) + smd(Q^{k-1}_z) - smd(Q^{k-3}_z))
= (1/d)( -|q_z^k - y_z| - |q^{k-1}_z - y_z| + |q_z^{k-1} - y_z| + |q_z^{k-2} - y_z| )
= (1/d)( -|q_z^k - y_z| + |q_z^{k-2} - y_z| ) ,

where the first summand vanishes. For the case k odd this becomes
= (1/d)( -(y_z - y_{z-⌊(k-1)/2⌋}) + (y_z - y_{z-⌊(k-3)/2⌋}) ) = (1/d)( y_{z-⌊(k-1)/2⌋} - y_{z-⌊(k-3)/2⌋} ) ≤ 0 ,
and for the case k even
= (1/d)( -(y_{z+⌈(k-1)/2⌉} - y_z) + (y_{z+⌈(k-3)/2⌉} - y_z) ) = (1/d)( y_{z+⌈(k-3)/2⌉} - y_{z+⌈(k-1)/2⌉} ) ≤ 0 .
One important consequence of this fact is that the operation of growing a set around median z by two elements—one to the left and one to the right—has monotonically decreasing gains.
In other words, the smoothed function h̃_z(k) = h_z(k) + h_z(k-1) is concave, or formally

Δh_z(k) + Δh_z(k-1) ≥ Δh_z(k+1) + Δh_z(k) .
Moreover, we can relate the gain functions of consecutive median indices as follows.
Let z ∈ [m] ∖ {1} and k ∈ [m_{z-1}] ∖ {1,2,3}. It holds that
Δ h_z-1(k-2)+Δ h_z-1(k-3) ≥Δ h_z(k)+Δ h_z(k-1)
Δ h_z-1(k)+Δ h_z-1(k-1) ≤Δ h_z(k-2)+Δ h_z(k-3)
For this proof, let us use the same shorthands as in the proof of Lemma <ref> and start by noting that for all i ∈ [m] and k ∈ [m_i] ∖ {1} we have the equality

Δh_i(k) + Δh_i(k-1) = 2/n - (|q_i^k - y_i| + |q_i^{k-1} - y_i|)/d ,

which we can see by extending

Δh_i(k) + Δh_i(k-1) = h_i(k) - h_i(k-1) + h_i(k-1) - h_i(k-2)
= (k - (k-2))/n - (smd(Q^k_i) - smd(Q^{k-2}_i))/d = 2/n - (|q^k_i - y_i| + |q^{k-1}_i - y_i|)/d .
We can then show Eq. (<ref>) by applying Eq. (<ref>) two times to

Δh_{z-1}(k-2) + Δh_{z-1}(k-3) - (Δh_z(k) + Δh_z(k-1))
= (1/d)( -|q^{k-2}_{z-1} - y_{z-1}| - |q^{k-3}_{z-1} - y_{z-1}| + |q^k_z - y_z| + |q^{k-1}_z - y_z| )

and finally by checking separately the case k odd,
= (1/d)( (y_{z-1-(k-3)/2} - y_{z-(k-1)/2}) + (y_{z+(k-1)/2} - y_{z-1+(k-3)/2}) ) ≥ 0 ,
where the first difference vanishes because the indices coincide and the second is non-negative because its indices differ by two, and the case k even,
= (1/d)( (y_{z-1-(k-4)/2} - y_{z-(k-2)/2}) + (y_{z+k/2} - y_{z-1+(k-2)/2}) ) ≥ 0 ,
by the same argument.
Similarly, for Eq. (<ref>), by applying Eq. (<ref>) two times we can write

Δh_{z-1}(k) + Δh_{z-1}(k-1) - (Δh_z(k-2) + Δh_z(k-3))
= (1/d)( -|q_{z-1}^k - y_{z-1}| - |q_{z-1}^{k-1} - y_{z-1}| + |q_z^{k-2} - y_z| + |q_z^{k-3} - y_z| )
= (1/d)( (y_{z-(k+1)/2} - y_{z-(k-3)/2}) + (y_{z+(k-3)/2} - y_{z-1+(k-1)/2}) ) ≤ 0 , if k is odd ,
  (1/d)( (y_{z-k/2} - y_{z-(k-4)/2}) + (y_{z+(k-2)/2} - y_{z-1+k/2}) ) ≤ 0 , if k is even ,

where in both cases the second difference vanishes and the first is non-positive because its indices differ by minus two.
Combining all of the above we can finally prove our main result as follows.
We start by showing that every k ∈ [m_{z+1}] with k < k^*_z - 3 cannot be an optimizer of h_{z+1}. It follows that k^*_z - 3 ≤ k^*_{z+1}, and, hence, k^*_z ≤ k^*_{z+1} + 3 as required for the upper bound. Indeed, we have
h_z+1(k) = h_z+1(k+2) - (Δ h_z+1(k+2)+Δ h_z+1(k+1))
≤ h_z+1(k+2) - (Δ h_z+1(k^*_z-2) + Δ h_z+1(k^*_z-3)) (by Eq. (<ref>))
≤ h_z+1(k+2) - (Δ h_z(k^*_z) + Δ h_z(k^*_z-1))_>0 by def. of k^*_z <h_z+1(k+2) . (by Lm. <ref>)
Analogously, for the lower bound, we show that every k ∈ [m_{z+1}] with k > k^*_z + 3 cannot be the smallest optimizer of h_{z+1}. It follows that k^*_z + 3 ≥ k^*_{z+1}, and, hence, k^*_z ≥ k^*_{z+1} - 3 as required. Indeed, we can write

h_{z+1}(k) = h_{z+1}(k-2) + Δh_{z+1}(k) + Δh_{z+1}(k-1)
≤ h_{z+1}(k-2) + Δh_{z+1}(k^*_z+4) + Δh_{z+1}(k^*_z+3) (by Eq. (<ref>))
≤ h_{z+1}(k-2) + Δh_z(k^*_z+2) + Δh_z(k^*_z+1) ≤ h_{z+1}(k-2) , (by Lm. <ref>; the middle terms are ≤ 0 by def. of k^*_z)
§ ADDITIONAL PROOFS
Using d_{ij} as a shorthand for y_j - y_i for i,j ∈ [m] with i ≤ j, we can write

e_l(z) - e_l(a) - (a-1) d_{az} + e_r(z) - e_r(b) - (m-b) d_{zb}
= ∑^{z-1}_{i=1} d_{iz} - ∑^{a-1}_{i=1} d_{ia} - (a-1) d_{az} + ∑_{i=z+1}^m d_{zi} - ∑_{i=b+1}^m d_{bi} - (m-b) d_{zb}
= ∑^{z-1}_{i=a} d_{iz} + ∑^{a-1}_{i=1} (d_{iz} - d_{ia}) - (a-1) d_{az} + ∑^b_{i=z+1} d_{zi} + ∑^m_{i=b+1} (d_{zi} - d_{bi}) - (m-b) d_{zb}
= ∑^{z-1}_{i=a} d_{iz} + (a-1) d_{az} - (a-1) d_{az} + ∑^b_{i=z+1} d_{zi} + (m-b) d_{zb} - (m-b) d_{zb}
= ∑^{z-1}_{i=a} d_{iz} + ∑^b_{i=z+1} d_{zi} = ∑_{i=a}^b |y_z - y_i| ,

where we used d_{iz} - d_{ia} = d_{az} and d_{zi} - d_{bi} = d_{zb}.
§ SUMMARY OF USED NOTATIONS
Symbol          Meaning                                                        Defined in
|·|             cardinality of a set or absolute value of a number             -
[k]             set of integers {1,…,k}                                        <ref>
(x)_+           max{x,0} for a real-valued expression x                        <ref>
≼               element-wise less-or-equal relation for multisets of reals     <ref>
2^X             power set of a set X, i.e., set of all of its subsets          -
ℕ^X             set of all multisets containing elements from set X            -
σ, φ            subgroup selectors σ, φ: P → {true, false}                     <ref>
med, mean, …    measures of central tendency                                   <ref>
amd, smd, …     measures of dispersion                                         <ref>
e_l(i), e_r(i)  left and right cumulative errors of target values up to i      <ref>
f               objective function                                             <ref>
f̂               tight optimistic estimator of objective function f             <ref>
m               number of elements in subpopulation Q                          <ref>
m_z             maximal size parameter k for consecutive value set Q_z^k       <ref>
k^*_z           f-maximizing size parameter k for consecutive value set Q_z^k  <ref>
y               numeric target attribute y: P → ℝ                              <ref>
y_i             i-th target value of subpopulation w.r.t. ascending order      <ref>
P               global population of given subgroup discovery problem          <ref>
Q               some subpopulation Q ⊆ P                                       <ref>
Q_z             median sequence element with median index z                    <ref>
Q^k_z           submultiset of Q with k consecutive elements around index z    <ref>
T_i             top sequence element i, i.e., T_i = {y_{m-i+1},…, y_m}         <ref>
Y               real-valued multiset                                           <ref>
Ỹ               multiset of differences of elements in Y to its median         <ref>
ℒ, ℒ_cnj        description language and language of conjunctions              <ref>
ℒ_ccj           language of closed conjunctions                                <ref>
amd(Q)          mean absolute deviation of y-values in Q to their median       <ref>
cov(Q)          coverage, i.e., relative size |Q|/|P| of subpopulation Q       <ref>
dcc(Q)          dispersion-corrected coverage of subpopulation Q               <ref>
mean(Q)         arithmetic mean of y-values in Q                               -
imp(Q)          impact, i.e., weighted mean-shift, of subpopulation Q          <ref>
med(Q)          median of y-values in Q                                        <ref>
smd(Q)          sum of absolute deviations of y-values in Q to their median    <ref>
|
http://arxiv.org/abs/1701.07669v1 | 20170126121554 | Cyclotron resonant scattering feature simulations. II. Description of the CRSF simulation process | ["F.-W. Schwarm", "R. Ballhausen", "S. Falkner", "G. Schönherr", "K. Pottschmidt", "M. T. Wolff", "P. A. Becker", "F. Fürst", "D. M. Marcu-Cheatham", "P. B. Hemphill", "E. Sokolova-Lapa", "T. Dauser", "D. Klochkov", "C. Ferrigno", "J. Wilms"] | astro-ph.HE | ["astro-ph.HE"]
^1 Dr. Karl Remeis-Sternwarte and Erlangen Centre for Astroparticle Physics, Sternwartstrasse 7, 96049 Bamberg, Germany
^2 Leibniz-Institut für Astrophysik Potsdam (AIP), An der Sternwarte 16, 14482 Potsdam, Germany
^3 CRESST, Department of Physics, and Center for Space Science and Technology, UMBC, Baltimore, MD 21250, USA
^4 NASA Goddard Space Flight Center, Code 661, Greenbelt, MD 20771, USA
^5 Space Science Division, Naval Research Laboratory, Washington, DC 20375-5352, USA
^6 Department of Physics & Astronomy, George Mason University, Fairfax, VA 22030-4444, USA
^7 European Space Astronomy Centre (ESA/ESAC), Science Operations Department, Villanueva de la Cañada (Madrid), Spain
^8 Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
^9 Faculty of Physics, M. V. Lomonosov Moscow State University, Leninskie Gory, Moscow 119991, Russia
^10 Sternberg Astronomical Institute, Moscow M. V. Lomonosov State University, Universitetskij pr., 13, Moscow 119992, Russia
^11 Institut für Astronomie und Astrophysik, Universität Tübingen (IAAT), Sand 1, 72076 Tübingen, Germany
^12 ISDC Data Center for Astrophysics, Université de Genève, chemin d’Écogia 16, 1290 Versoix, Switzerland
Cyclotron resonant scattering features (CRSFs) are formed by scattering
of X-ray photons off quantized plasma electrons in the strong magnetic field
(of the order 10^12 G) close to the surface of
an accreting X-ray pulsar. Due to the complex scattering cross sections the line
profiles of CRSFs cannot be described by an analytic expression. Numerical
methods such as Monte Carlo (MC) simulations of the scattering processes
are required in order to predict precise line shapes for
a given physical setup, which can be compared to observations to gain information about the
underlying physics in these systems.
A versatile simulation code is needed for the generation of synthetic cyclotron lines, one that makes it possible, for the first time, to investigate sophisticated geometries.
The simulation utilizes the mean free path tables described in
the first paper of this series for the fast interpolation of propagation
lengths. The code is parallelized to make the very time consuming simulations
possible on convenient time scales. Furthermore, it can generate responses to
mono-energetic photon injections, producing Green's functions,
which can be used later to generate spectra for arbitrary continua.
We develop a new simulation code to generate synthetic cyclotron lines for complex scenarios, allowing for unprecedented physical interpretation of the observed data. An associated XSPEC model implementation is used to fit synthetic line profiles to NuSTAR data of Cep X-4.
The code has
been developed with the main goal of overcoming previous geometrical
constraints in MC simulations of CRSFs. By applying this code also to
more simple, classic geometries used in previous works, we furthermore address
issues of code verification and cross-comparison of various models. The
XSPEC model and the Green's function tables are available online
(see link in footnote, page 1).
Cyclotron resonant scattering feature simulations
F.-W. Schwarm^1, R. Ballhausen^1, S. Falkner^1, G. Schönherr^2, K. Pottschmidt^{3,4}, M. T. Wolff^5, P. A. Becker^6, F. Fürst^7, D. M. Marcu-Cheatham^{3,4}, P. B. Hemphill^8, E. Sokolova-Lapa^{9,10}, T. Dauser^1, D. Klochkov^11, C. Ferrigno^12, J. Wilms^1

Received December 14, 2016; accepted January 23, 2017
§ INTRODUCTION
Footnote (page 1): <http://www.sternwarte.uni-erlangen.de/research/cyclo>
Cyclotron resonant scattering features (CRSFs, or “cyclotron lines”)
result from the interaction of photons with electrons in a strong
magnetic field (B≳ 10^12 G), for example, in the accretion
column of the magnetized neutron star in accreting X-ray binaries.
Their complex shape <cit.> reveals information about
the environment of the line forming region, which is typically close
to the neutron star.
A physical model for the line formation is needed in order to translate
observed line characteristics — like the line's depth, width, and shape —
into physically meaningful parameters.
This paper is the second in a series in which we describe calculations of
resonant cyclotron scattering for various configurations of the
accreting matter. In the first paper of the series <cit.>, we described the calculation of the
relevant cross section for the range of conditions typically expected
for accreting neutron stars in X-ray binary systems. In this work we
discuss the application of these cross sections to the simulation of synthetic cyclotron lines.
Simulations of CRSFs have been performed by various groups in the
past. They can be divided into three classes: Monte Carlo
<cit.>,
Feautrier <cit.>,
and semianalytic methods <cit.>. <cit.> provide an
overview over the methods used in previous works.
<cit.> discuss the features of the fundamental approaches and
use multiple methods for the generation
of synthetic cyclotron lines in various parameter regimes.
For optical depths on the order of
10^-4–10^-3τ_T, where τ_T is the Thomson optical
depth, Monte Carlo (MC) simulations are the suitable method
<cit.>.
Of all the calculations mentioned above, only <cit.> provided a method for direct comparison between simulations and observations, with the corresponding large grids of parameters.
<cit.> performed a first comparison of simulated MC line
profiles to data from the X-ray pulsar A0535+26. By convolving
precalculated Green's functions, describing the response of a CRSF
medium to monoenergetic photon injection, with arbitrary
continua, <cit.> disentangled the time consuming MC
simulations from the choice of continuum. This approach enabled
us to develop a code which can be used to compare MC simulated spectra
to observational data using standard X-ray astronomical data analysis
packages such as XSPEC <cit.> or ISIS <cit.>. A
disadvantage of these earlier calculations, however, is that they are
limited to very simple predefined geometries. In addition, while the
general properties of the line behavior are correct, unfortunately
there was an error in the integration of the scattering cross section
code by <cit.>.
Building on previous results and experience we have developed a new
simulation tool to calculate synthetic line profiles for arbitrarily
complex cylindrically symmetric geometries. This generalization naturally
requires considerable computational effort, which must be met with new
strategies. The mean free path (MFP) tables used for the interpolation
of the thermally averaged scattering cross sections have been
described in paper I. Here, we will provide the description and
applications of the full MC scattering code, which has been written
with the prime goal of imprinting cyclotron lines on the continuum
emission of accreting X-ray pulsars, and which includes a working
fit model. The XSPEC model code and the Green's tables necessary to
generate synthetic cyclotron lines are provided, as well as instructions
on how to use them. The discussion of more sophisticated physical
scenarios and the application to observational data will be the subject of successive papers in this series.
Here, we restrict ourselves to presenting examples of selected
synthetic spectra for illustration and for comparison to other works.
The structure of this paper is as follows. In Sect. <ref>
the physical assumptions are laid out and the simulation process is explained
in detail, including a description of the scattering process and commonly used
geometries. The duration of an exemplary simulation run is analyzed with
respect to the used number of CPUs and computer systems. This is followed by a
description of the Green's function scheme, which is utilized to enable
the fitting of the synthetic cyclotron lines to observational data, taking
a step toward the application of the simulation in Sect. <ref>.
In the same section, a comparison of synthetic CRSFs to previous works is performed.
Furthermore, the Green's functions are applied to a physical continuum
for several combinations of line parameters. Also, the availability of the
XSPEC model and the associated Green's function tables is discussed.
The XSPEC model is fitted to NuSTAR data of Cep X-4
and the resulting magnetic field strength and cyclotron line shape
is compared to results from empirical models.
Additional technical details about the model usage and its parameters are
given in the Appendix.
§ SIMULATION OF CYCLOTRON LINES
In this section we first describe the physical setup and explain our
Monte Carlo approach in detail before discussing how these simulations
can be sped up. We also introduce various classical accretion
geometries which we will be using in Sect. <ref> in
order to validate our results with respect to the ones from earlier works using
the MC method.
§.§ Physical setup
We consider the interactions of photons with electrons in strong
magnetic fields via the cyclotron scattering process. Other particles,
such as protons which might form “proton cyclotron lines”
<cit.> in the spectra at energies below the
electron cyclotron lines, are neglected. There are several other
processes altering the properties and paths of X-ray photons in the
vicinity of magnetic fields near the critical field strength <cit.>,
B_crit=
m_e^2 c^3/eħ = 4.413× 10^13 G .
Photon splitting, which describes the process of splitting one photon
into two photons, dominates over Compton scattering for very high
magnetic field strengths and densities <cit.>. Pair
production becomes possible if the photon energy exceeds 2m_ec^2
<cit.>. Apart from photon-electron interactions,
<cit.> also analyzed the properties of Møller and Bhabha
scattering where electrons are scattered off electrons or positrons,
respectively. In the regime we are interested in, cyclotron scattering
is the dominant process and therefore we will neglect all the other
possible interactions.
As further discussed in paper I, the momenta of the CRSF forming
electrons perpendicular to the direction of the B-field are
quantized to discrete values corresponding to the Landau energy
levels.
The electrons can move freely parallel to the field
lines. Inelastic scattering of photons off these electrons leads to
the formation of CRSFs which appear as absorption-like lines at the
fundamental Landau energy E_1 and its integer multiples <cit.>.
The cyclotron scattering cross section strongly depends on the
incident photon angle and energy <cit.>.
The angle ϑ_in is measured with respect to the magnetic
field axis and mostly specified by its cosine, μ_in = cosϑ_in.
In the rest frame of
an electron, the photons' energies and propagation angles are Lorentz
boosted due to the electrons' motion parallel to the B-field. This
makes the accurate analytic treatment difficult <cit.>
because a photon's energy and angle are relativistically coupled in
the electron rest frame.
Cyclotron absorption and emission processes as well as resonant
scattering generate highly complex line profiles. In contrast to Compton
scattering which does not absorb or create photons, the number of
photons that contribute to the spectrum is not conserved
<cit.>. A photon can be absorbed by an electron if it has
the right energy and angle to excite the electron to a higher Landau
level. Almost immediately, the electron will emit “spawned” photons
during its successive de-excitation to the ground state. Since
de-excitation preferentially takes place to the next lower Landau
level <cit.>, the majority of spawned photons will have
energies corresponding to the energy difference between neighboring
Landau levels. This is close to the fundamental energy E_1
regardless of the initial Landau level of the electron. The transition
process continues until the ground state is reached, effectively
increasing the number of photons within the medium.
Therefore complex computational
methods are necessary to determine the exact shape of cyclotron
resonance features.
§.§ Monte Carlo simulation
The advantage of a Monte Carlo simulation is that its photon tracing approach
allows for the simulation of the radiative transfer in very complex setups of
the scattering medium. In general, in a MC simulation a seed photon is
generated at a place where the primary photons originate. The photon is
assigned an energy, position, and direction. A random number is then drawn from
an exponential distribution, which depends on the photon mean free path. This
random number is the optical depth that the photon will travel. As discussed
in paper I, in order to speed this step up we use precalculated mean free path
tables (see Fig. <ref> and Sect. <ref> below for
more details).
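A minimal sketch of this sampling step follows; the function names and the interface to the interpolated tables are our own assumptions, not the actual code of the simulation.

```python
import numpy as np

rng = np.random.default_rng()

def propagation_length(energy, mu, mfp_table):
    """Sample the free path of a photon before its next interaction.

    The traveled optical depth is exponentially distributed; `mfp_table`
    stands for the interpolation of the precalculated mean free path
    (paper I) at photon energy and direction cosine mu w.r.t. the B-field.
    """
    tau = -np.log(1.0 - rng.uniform())   # optical depth drawn from Exp(1)
    return tau * mfp_table(energy, mu)   # convert to a geometric length
```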
The scattering geometry is realized in our simulation by describing the
medium with a list of cylinders with arbitrary dimensions, which may be combined
to model all kinds of cylindrically symmetric shapes of accretion columns. The
geometry of each cylinder is parameterized by its inner — to allow for hollow
cylinders — and outer radii and the distances of its bottom and top to the
neutron star surface. The physical properties inside are given by its
homogeneous density[The density is usually calculated from the optical
depth into a given direction and the corresponding extension of the cylinder.
For the classical “” and “” geometries the
direction parallel to the B-field axis is chosen, while in the “cylinder”
geometry the optical depth is defined perpendicular to this axis (see
Fig. <ref>)], magnetic field, electron temperature, and velocity
towards the neutron star. Multiple cylinders can be stacked on top of, or
inside of each other in order to properly simulate parameter gradients. Here,
we still use the simple geometries, which are explained in
Fig. <ref> together with their implied seed emission patterns,
for comparison of our results to <cit.> in Sec. <ref>.
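A minimal sketch of such a geometry description and of the containment test used to locate a photon within the stacked cylinders is given below; all names are our own.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Cylinder:
    """One homogeneous cell of the cylindrically symmetric medium (sketch)."""
    r_in: float      # inner radius (> 0 for hollow cylinders)
    r_out: float     # outer radius
    z_bot: float     # distance of bottom to the neutron star surface
    z_top: float     # distance of top to the neutron star surface
    density: float
    B: float         # magnetic field strength
    T: float         # electron temperature
    v: float         # velocity towards the neutron star

def find_cell(cells: List[Cylinder], r: float, z: float) -> Optional[Cylinder]:
    """Return the cylinder containing cylindrical coordinates (r, z), if any."""
    for c in cells:
        if c.r_in <= r <= c.r_out and c.z_bot <= z <= c.z_top:
            return c
    return None   # photon is outside the medium and escapes
```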
The seed photons in a simulation run result from the
configured photon sources. Different emission patterns are available
for each individual source: a point source emits photons isotropically from its static origin; a line source corresponds to photon emission from a line along part of the magnetic field axis, meaning that the height of each photon above the hypothetical neutron star is sampled individually; a plane source describes an emitting plane perpendicular to the B-field axis at a given height.
direction of the B-field axis, or downwards are available for the
point and plane source types to provide a convenient way of preventing
unprocessed photons from showing up in the resulting data.
The implementation of photon spawning is straightforward: successive photon
generation and propagation of the spawned photons from the coordinates of the
de-exciting electron to the point where the spawned photons interact with, or escape, the medium ensure self-consistent treatment.
The most time-intensive parts of processing are parallelized using the Message Passing Interface (MPI; <cit.>), decreasing the required CPU time efficiently as shown in
Fig. <ref>.
Because available computing power has increased significantly since
earlier approaches to this problem, we were also able to introduce
additional features to the simulation which allow us to specify the
conditions to be simulated in a more flexible way. For example, more
complex physical settings for the accretion column, including velocity
gradients, B-field gradients, and an inhomogeneous density
stratification.
§.§ Description of the scattering process
The core of the MC simulation is the propagation of individual photons
through a given CRSF medium, the process of which is illustrated by
Fig. <ref>. Seed photons, generated by the configured
photon sources, must be processed as well as spawned photons,
originating from the transition of an electron to a lower Landau
level. For this purpose the seed photons are injected into the
simulation and photons spawned during their interactions are
added to the current list of photons and processed the same way.
Each photon is propagated according to its current path length, which
is sampled from the mean free path tables provided by
<cit.>. The photon is stored to a file in FITS format if
it escapes the medium and the simulation starts with the next seed
photon. Otherwise the momentum of the interacting electron is sampled,
again using precalculated tables (paper I). We assume that the
electrons are in the ground state, that is, we neglect the
excitation of electrons through other processes than resonant cyclotron
scattering, which is justified by the short timescale of the radiative electron decay compared to that of collisional excitation <cit.>.
They can be excited to higher
Landau levels in the course of resonant cyclotron scattering, though. The final Landau level
for each interaction is sampled by comparing the scattering cross
sections for all possible final levels below the maximum Landau level
n_f,max, which has been set to 5 for the simulations
performed here, in compliance with the number of Landau levels taken
into account for the MFP table calculations. In a similar manner, the
scattering angle is sampled by assigning a random number to the
cumulative angular distribution gained by integrating the cross
section over all possible final photon angles. The kinematic
calculations can be carried out once the initial properties of the
interacting electron and photon have been sampled, gaining the final
photon energy and electron momentum. The photon altered by the
scattering process is now further propagated, starting again with the
mean free path interpolation.
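Both sampling steps amount to inverting a tabulated cumulative distribution; a minimal sketch, with names and table layout assumed by us, is the following.

```python
import numpy as np

rng = np.random.default_rng()

def sample_from_cdf(grid, weights):
    """Draw one value from a tabulated distribution by inverting its CDF.

    Used here as a stand-in for sampling the final Landau level (weights =
    partial cross sections for each n_f up to n_f,max) and the scattering
    angle (weights = angular differential cross section on a mu grid).
    """
    cdf = np.cumsum(weights, dtype=float)
    cdf /= cdf[-1]                          # normalize to a proper CDF
    return grid[np.searchsorted(cdf, rng.uniform())]

# e.g., final Landau level from partial cross sections sigma_nf:
# n_f = sample_from_cdf(np.arange(n_f_max + 1), sigma_nf)
```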
The excited electron will produce spawned photons during its
de-excitation to the ground state if it has been excited to a higher
Landau level during the scattering process. The number of spawned
photons depends on the Landau level the electron is excited to, which
is mainly determined by the photon energy in the electron rest frame.
De-excitation preferentially takes place to the next lower Landau
level, producing a photon with an energy corresponding to the energy
difference of the involved Landau levels.
In order to show this quantitatively, we calculated the transition rates for a magnetic field of 0.1 B_crit, averaged over initial and final spin and
summed over final polarization. They are shown in Table <ref> for initial Landau levels
up to n_i = 7. The sixth and seventh levels are provided as a reference and in order to allow
for estimations of the probability that a photon is spawned at the resonance energy of the maximum
Landau level taken into account in the simulation, for instance, via the transition 7 → 2.
The ratio between the rate for a transition from Landau level n_i = 2 to Landau level
n_f = 1, that is, Γ_21≈ 1.74 × 10^-3, and the corresponding
transition to the ground state, Γ_20≈ 2.44 × 10^-4, becomes
Γ_21/Γ_20≈ 7.11 in agreement with the
corresponding calculation by <cit.>. The ratio
is becoming smaller for larger fields as shown by <cit.>, as well.
Although the spacing of the
cyclotron resonances is not perfectly harmonic these photons have
energies very close to the fundamental energy in the electron's rest
frame. For Landau levels above the first excited state an electron
emits multiple photons during its successive de-excitation to the next
lower level until the ground state is reached. The spawned photons
have to be boosted to the neutron star frame of reference before they are further
processed, since the electron that re-emits a photon has some velocity
component parallel to the magnetic field due to the electrons' thermal momentum
distribution (see paper I).
The simulation code features the possibility to configure an additional velocity
component of the simulated medium. This leads to an additional boosting factor
for all photons entering the medium and can be used to simulate the bulk velocity
of the accreting matter. This has been
used by <cit.> for the calculation of phase dependent cyclotron
line spectra and emission patterns from a cylindrically symmetric accretion
column throughout which such a flow with relativistic velocities is expected
<cit.>.
We neglect bulk motion in this work because it is unnecessary for the
purpose of code verification. For the application of the CRSF model to Cep X-4
the omission of bulk velocity is justified by the chosen geometry: in “slab”
geometries the line forming region is assumed to reside at the neutron star hot
spot at which the matter has been decelerated to non-relativistic velocities.
The boosting caused by the thermal momentum distribution of the electrons, together with
the slightly anharmonic spacing of the Landau levels, is responsible
for the formation of line wings around the fundamental cyclotron line.
This apparent excess of photons occurs at energies slightly
above or below the first resonance E_1. The intermediate Landau
levels occupied by the electron during this process are sampled by
making use of the corresponding cyclotron decay rates. Each transition
produces a photon with an energy equal to the energy difference of the
Landau levels. Its angle is sampled according to the angle
dependent decay rates. All spawned photons are further propagated in
the same way as the seed photons until they leave the medium.
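A minimal sketch of this de-excitation cascade is given below; the names are our own, `rates[n, n']` would hold tabulated transition rates such as those in Table <ref>, and `E_landau[n]` the Landau level energies in the electron rest frame.

```python
import numpy as np

rng = np.random.default_rng()

def deexcite(n_i, E_landau, rates):
    """Spawn photons while the electron cascades down to the ground state.

    Intermediate Landau levels are sampled proportionally to the decay
    rates Gamma_{n n'}; each transition emits one photon whose energy is
    the difference of the involved level energies.
    """
    photons = []
    n = n_i
    while n > 0:
        p = rates[n, :n].astype(float)
        p /= p.sum()
        n_f = rng.choice(n, p=p)                     # next (lower) level
        photons.append(E_landau[n] - E_landau[n_f])  # spawned photon energy
        n = n_f
    return photons
```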
§.§ Classic geometries
Figure <ref> depicts various geometries which differ in
radius to height ratio and the position, and type, of the seed photon source.
Slab geometries mimic a thin scattering volume of infinite radius. We
approximate this in our numerical simulations by setting the radius to a value
very large compared to the height of the medium. We find empirically that a
factor of 1000 between the radius and the height is a good choice in the
simulations. The output spectrum does not change significantly for a larger
extension in the radial direction, that is, only few photons, if any,
travel that far perpendicular to the B-field axis without escaping through
the bottom or top of the slab. The height of the slab is specified in terms of
the Thomson optical depth. The density in the slab can therefore be calculated in
the simulation from the slab height or radius, for /
or geometry, respectively, and the corresponding optical depth. For slab
geometries the optical depth parallel to the magnetic field is used to describe
the medium, because the slab radius is infinite or at least much larger than
its height. For a cylinder geometry the optical depth perpendicular to the
magnetic field axis is used.
Using the notation of <cit.>, in the slab 1-0 geometry the medium is solely illuminated from the bottom, while in the slab 1-1 geometry a source plane in the middle produces seed photons within the medium.
In the cylinder geometry the medium's radius is much smaller than its
height. Therefore, the optical depth perpendicular to the B-field
axis, that is, the cylinder radius in units of optical depths is used
as a parameter to describe the medium instead. Here, the seed photons
are also produced within the column but only along the cylinder's
axis.
In this work we show spectra for slab 1-0 and slab 1-1 geometry in Fig. <ref> and spectra for cylinder geometry in Fig. <ref>.
The CRSF shape is, in general, highly
sensitive to the simulated geometry. The formation of line wings, for
example, is especially pronounced in slab 1-1 geometry for
viewing angles almost parallel to the magnetic field, which can be
seen in Fig. <ref>.
§.§ CPU time
We reduce the calculation time by using MPI to
distribute the work among multiple processors.
Figure <ref> shows that our simulation is parallelized
efficiently. We performed tests on different computer systems and
found that using ∼8 CPUs for each simulation provides a
convenient compromise between simulation time and efficiency.
§.§ Green's table approach
Performing a time consuming simulation for each generation of a
spectrum is not practicable in anticipation of our goal to fit
simulated spectra to observational data. A large number of simulation
runs will be required where the shape of the emerging spectrum is
simulated as a function of the input parameters. In addition to the
magnetic field strength and the plasma temperature, the spectral shape
of the incident photons is also variable. This significantly increases
the computing time as potentially a large parameter space needs to be
covered. As it was shown previously <cit.>,
this large expense of computing time can be avoided by calculating the
Green's function of the radiative transfer problem. We propagate
photons through a medium with a given geometry, magnetic field
strength, and temperature for a well sampled grid of mono-energetic
seed photon energies. The photons escaping the medium are collected
and binned to spectra for a grid of output angles. These emerging
spectra, normalized to the seed photon flux in a given
angular bin, correspond to the Green's function of the radiative
transfer problem. The emerging spectrum for an arbitrary seed photon
spectrum, like a cutoff power law continuum or a continuum due to
Comptonization in a radiative shock <cit.>, can then be
obtained by convolving it with the Green's functions.
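In discretized form this convolution is a matrix-vector product; a minimal sketch under our own naming assumptions:

```python
import numpy as np

def apply_crsf(greens, continuum, dE):
    """Imprint cyclotron lines on an arbitrary continuum (sketch).

    `greens[j, i]` is assumed to hold the escaping flux in output energy
    bin j per monoenergetic seed photon injected in input energy bin i
    (for one output angle bin); `continuum` and `dE` are the seed photon
    spectrum and the input bin widths on the same grid.
    """
    return greens @ (continuum * dE)

# e.g., one emerging spectrum per output angle bin:
# spectra = [apply_crsf(G[mu_bin], cont, dE) for mu_bin in range(n_mu)]
```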
In general, our approach requires to interpolate the Green's
functions for the energies on which the seed photon continuum is
defined. This step is necessary because these energies are typically
defined by the response matrix of the detector with which the data
were taken. This energy grid is outside of our control. We have
calculated grids with a logarithmic energy spacing that is fine enough
that linear interpolation is sufficient to re-grid the Green's
functions in energy.
A second interpolation step in the magnetic field strength is required
to obtain a Green's function for a B-field value not covered by the
grid of precalculated values. Since the Green's functions are
self-similar in E/B we interpolate the Green's functions for
different B in this “energy shifted” system. In a similar way
further interpolations in temperature, output angle, and optical depth
are used to approximate the final spectrum for parameter combinations
off the grid points.
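A sketch of the "energy shifted" interpolation between two tabulated field strengths B_1 ≤ B ≤ B_2 is shown below; for brevity, one-dimensional Green's functions (one injected energy and output angle) are assumed, and all names are ours.

```python
import numpy as np

def interp_greens_B(B, B1, G1, E1_grid, B2, G2, E2_grid, E_out):
    """Interpolate Green's functions between two magnetic field grid points.

    Exploits the self-similarity in E/B: both tables are mapped to the
    scale-free coordinate E/B, linearly interpolated there, and then
    evaluated on the requested output energies E_out.
    """
    x_out = E_out / B                        # scale-free coordinate of request
    g1 = np.interp(x_out, E1_grid / B1, G1)  # table at B1, re-gridded in E/B
    g2 = np.interp(x_out, E2_grid / B2, G2)  # table at B2, re-gridded in E/B
    w = (B - B1) / (B2 - B1)                 # linear weight in B
    return (1.0 - w) * g1 + w * g2
```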
Using MFP interpolation tables, we are able to produce Green's
functions for a large parameter range and with a resolution better
than the resolution of all currently flying X-ray telescopes for the
energy range where cyclotron lines are observed. This takes about
300 CPU hours for one geometry on rather coarse grained magnetic
field and temperature grids. In principle the MFP table mechanism, and
therefore the creation of Green's tables, is reasonable for magnetic
fields 0.01 ≲ B/B_crit≲ 0.12 and
temperatures 0 keV≤ T ≲ 50 keV. For
much smaller B-fields the fully relativistic quantum-electro-dynamic
calculations are unnecessarily complex, while for much higher fields
cyclotron scattering may not be the dominant process <cit.>. Much higher
temperatures are not expected to occur in accreting X-ray
binaries. For the testing of different geometries and other parameters
we currently concentrate on an optimized subset in order to save CPU
time. Table <ref> lists the parameters we have
calculated Green's function tables for. The Green's functions needed
for the evaluation of model spectra for arbitrary parameters within
the precalculated ranges are interpolated from these tables using the
methods given in Appendix <ref>. The Appendix also
describes the extrapolation beyond the covered parameter regime.
However, this extrapolation should be taken with extreme caution.
§ APPLICATION
We now compare
our results to those of previous works and discuss the generation of
synthetic CRSF spectra for a physical continuum. We use end-to-end
comparison to previous works since we do not have access to their
intermediate data products such as mean free path or momentum sampling
information. A comparison to a NuSTAR observation of Cep X-4
is performed to show the applicability of the model to real data and
as a demonstration of the new CRSF fitting model.
§.§ Code verification
In order to validate the general simulation,
Fig. <ref> compares spectra obtained using our
simulation with earlier calculations by <cit.> for two
different accretion column geometries. Four different angular
regimes are shown. The upper left plot shows the simulated
spectra for a viewing angle almost parallel to the magnetic field
axis. The lower right plot shows spectra for viewing angles
almost perpendicular to the B-field axis. The corresponding angle
bins are defined in terms of the cosine of the viewing angle
μ = cosϑ.
The calculations agree well: the line width decreases with increasing
viewing angle, while the depth of the line decreases with decreasing viewing
angle. Indications for a third harmonic line can be seen in spectra
emerging perpendicular to the magnetic field, while for viewing angles
parallel to the magnetic field only a complex fundamental line is
observed. The strong line wings become less pronounced for
continua with a high energy cutoff, as fewer photons are spawned from the higher harmonics <cit.>.
Previous works based on the numerically calculated cyclotron cross
sections obtained by <cit.>, such as <cit.> and
<cit.>, do not show such a good agreement with the
spectra from <cit.> due to their usage of erroneous
integrated cross sections in the simulation code <cit.>.
Some slight deviations remain in the line wings. These are formed by
spawned photons. In the spectra of <cit.> a larger number
of spawned photons escape at small angles to the magnetic field. These
extra photons appear at larger angles in our simulation. The reason
for this disagreement is probably a different angular redistribution
scheme or the approximation of the total
scattering cross section used by <cit.>
for modeling the resonant scattering at harmonics above the first one.
<cit.> use this approximation <cit.>, which is strictly valid only near
the line center, for resonant scattering involving the excitations of
higher harmonics. They resort to a numerical integration over the
scattering angle of the more correct form given in their Eq. 10
<cit.>
for transitions involving only the ground state and the first excited
state.
Although the discussion of cyclotron resonant scattering as a cooling
process for the electrons in the accretion column is beyond the scope
of this paper, we briefly give reasoning for the two different
temperatures used in the comparison. The electron temperature depends
on the photon interactions which in turn depend on the electron
temperature. In the regime where cyclotron resonant scattering is the
dominant cooling process, a convenient choice for an equilibrium
temperature is the temperature at which the energy transfer is zero.
The point of equilibrium depends on the geometry.
Figure <ref> therefore assumes temperatures of
7 keV and 8 keV for the slab 1-0 and slab 1-1 geometries, respectively, following <cit.>. These
temperatures correspond to the Compton temperature where the energy
transfer is minimal. <cit.> used this technique to show that
the ratio of this equilibrium temperature to the magnetic field is
fairly constant if the temperature is determined solely due to
cyclotron resonant scattering.
§.§ Cyclotron lines for the <cit.> continuum
Figure <ref> shows synthetic cyclotron lines imprinted on the
continuum from <cit.> for the purpose of illustrating
the influence of the continuum on the CRSFs and as an example
for the cylinder geometry.
The continuum parameters are the same ones as in the theoretical calculation
from <cit.> for Her X-1. This calculation agrees very well
with the BeppoSAX data reported by <cit.>. A spectral fit
of a recent XSPEC implementation of the same continuum model to
NuSTAR data of Her X-1 can be found in the work by
<cit.>.
Figure <ref> shows that the lines become deeper with increasing
optical depth. The line width decreases with increasing viewing angle
to the magnetic field, because of the smaller influence of Lorentz
boosting in this regime. A cylinder geometry Green's table has been
used for imprinting the cyclotron lines on the continuum. This
approach leads to emission like behavior for very small optical depths
and large angles to the magnetic field. Two higher harmonics can
clearly be seen and are especially pronounced for large angles and
relatively high optical depths of τ = 3 ·
10^-3 τ_T.
These are theoretical spectra, which in contrast to observed spectra
correspond to a line forming region with a constant magnetic field
seen from a specific angle. Observations are expected to be smeared
out due to an angle mixing with phase and an extended line forming
region with a magnetic field gradient. Though the accurate handling of
angle mixing requires the inclusion of relativistic effects
<cit.>, its influence on the CRSF line profiles, in the
reference frame of the neutron star, can be roughly estimated by averaging over
multiple viewing angles. Appendix <ref> provides details
on how this can be done easily with the model presented in the
following. All continua used in this work are angle averaged, but the combination of the model with angle-dependent continua is straightforward.
§.§ The CRSF model and table availability
Together with this work we distribute improved Green's function tables
for the classical geometries, which can be used together with an XSPEC
local model to imprint cyclotron lines on arbitrary
continua. See Appendix <ref> for a description of the
model. Table <ref> shows the parameter ranges
covered by these tables. Each table corresponds to one geometry. The
tables have been calculated using thermally and polarization averaged
cross sections.
The model convolves the Green's table set by the user with the
given continuum. It can, in principle, extrapolate beyond the
ranges provided in Table <ref>, but this should be
used with care and currently triggers a warning message. The model and
preliminary Green's function tables are available online (see link in
footnote, page 1). The currently rather coarse grained parameter grids
will be refined successively and will be made available at the same
location.
§.§ Fitting Cep X-4
In order to demonstrate the applicability of the model to observed data, a
comparison between empirically fitted line profiles with synthetic ones from
the simulation described above will be performed in the following. The results
further motivate the necessity of a cyclotron line model based on firm physical
grounds for the description of CRSF line profiles.
Cep X-4, also known as GS 2138+56, was discovered in
1972 June and July observations with the OSO-7 X-ray
telescope <cit.>. It is an accreting high mass X-ray binary
(HMXB) with a pulse period of ∼66 s as found
in its 1988 outburst with Ginga
<cit.> and later confirmed by <cit.>. The optical
counter part is a Be star <cit.> at a distance of 3.8 ±
0.6 kpc <cit.>. <cit.> found that the
addition of a cyclotron line at 31 keV improved the fit
but they did not include it in their discussion for the sake of
comparison to other cataloged X-ray pulsars. <cit.> proposed
a cyclotron line at an energy of 30.5 ± 0.4 keV and
deduced a magnetic field of 2.6 × 10^12 (1+z) G
for the source.
RXTE/PCA observed further outbursts in 1997 and 2002.
<cit.> performed a spectral and timing analysis of the
latter outburst and found a cyclotron line at
30.7^+1.8_-1.9 keV. NuSTAR
<cit.> observed Cep X-4 close to the maximum of
the outburst and during its decline on 2014 June 18/19 and 2014 July
1/2, respectively <cit.>. The cyclotron line was measured at
an energy of ∼30 keV in both observations. <cit.>
found that its shape deviates from a simple Gaussian line profile <cit.>,
which makes this observation a good candidate for an application of
the physical CRSF model described in this work.
We re-extracted the NuSTAR data from ObsID 80002016002 (2014
June 18/19, exposure time 40.5 ks) near the maximum of the outburst
using the same settings and
procedure as described by <cit.> but using the CalDB version
20160922 and the NuSTAR data analysis software (NuSTARDAS)
version 1.6.0 as distributed with HEASOFT 6.19. The source and
background spectra for focal plane module A and B (FPMA and FPMB) were
extracted separately, using circular regions with radii of 120″.
We use data in the 3.6–55 keV band, rebinned to a signal-to-noise ratio (S/N) of 10 below 45 keV and an S/N of 5 above that.
We use the same empirical continuum model as <cit.>,
which was already found by <cit.> to describe the continuum
well, that is, a power-law with a Fermi-Dirac cutoff <cit.>,
F(E) = A E^{-Γ} / (1 + e^{(E - E_cut)/E_fold}) ,
with normalization constant A, photon index Γ, cutoff energy E_cut,
and folding energy E_fold.
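For reference, a direct implementation of this continuum; the parameter values in the usage comment are purely illustrative and not fit results.

```python
import numpy as np

def fdcut(E, A, Gamma, E_cut, E_fold):
    """Power law with a Fermi-Dirac cutoff, as defined above (sketch)."""
    return A * E**(-Gamma) / (1.0 + np.exp((E - E_cut) / E_fold))

# e.g., evaluated on a NuSTAR-like energy grid:
# E = np.geomspace(3.6, 55.0, 500)
# F = fdcut(E, A=0.1, Gamma=1.0, E_cut=20.0, E_fold=7.0)
```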
An improved version of the absorption model, namely tbnew (<http://pulsar.sternwarte.uni-erlangen.de/wilms/research/tbabs/>), is used
with abundances by <cit.> and cross sections by <cit.> in
order to account for photoelectric absorption of the continuum. A narrow iron
Kα line has been used as in the analysis by <cit.>. Their low
energy black body was dismissed because it did not improve the fit. Also,
contrary to the analysis by , but in agreement with the
continuum model used by <cit.>, we added a “10 keV” feature, that
is, a Gaussian emission line that facilitates the modeling of the continuum in
the ∼8–20 keV range, which has been applied before, for various sources
and instruments <cit.>. We found the width of this broad
emission component to be in agreement with but much better constrained than the
corresponding component used by <cit.>. The centroid energy of 16.1 keV / 1.3 = 12.4 keV is below the value of 14.4 keV found by
<cit.>. The additional factor of 1.3 is necessary for the comparison
because of the gravitational redshift: in contrast to previous modeling
approaches, our model includes a redshifting of all components but the iron
line with a fixed redshift of z = 0.3 <cit.>. An XSPEC redshift convolution model is used for that purpose.
The luminosity of 1–6 × 10^36 erg s^-1 <cit.>
is low compared to other accreting X-ray pulsars exhibiting cyclotron
lines <cit.> and well below the theoretical limit where a
shock in the accretion column is expected to form <cit.>.
This suggests the usage of a slab geometry for the CRSF model,
which is the only model that we use to fit the CRSF
line profile, meaning that we do not use any other absorption line
component.
The complete fit model thus consists of the absorbed and redshifted
continuum described above together with the physical CRSF model and the
iron line, multiplied by a constant accounting for the cross-calibration of
focal plane module FPMB relative to FPMA.
Starting with the parameters obtained from the fits by
<cit.> and <cit.>, the physical CRSF model for a
slab 1-0 geometry
converged towards an acceptable fit with a reduced χ^2 of 1.17
for 862 degrees of freedom.
The unfolded spectrum is shown in Fig. <ref>a together with
the model for both detectors, FPMA and FPMB, and the corresponding
residuals are shown in Fig. <ref>b. The best fit parameters can be found in
Table <ref>, with uncertainties given at the 90% confidence
level.
As discussed by <cit.>, the position of the cyclotron line
can be affected by the continuum model. This complicates the
comparison of the magnetic field strength to previous works.
<cit.> noticed an absorption feature at ∼
31 keV using a power-law plus exponential cutoff.
<cit.> used a power-law times cyclotron scattering cutoff
<cit.> to describe the continuum and found the feature to
reside at 30.5 ± 0.4 keV. Using a model consisting of
negative and positive power-laws with a common exponential cutoff
<cit.>, <cit.> detected the cyclotron
line at 28.8 ± 0.4 keV.
<cit.> included a fundamental cyclotron line at 27.5 ± 0.4 keV,
27.7 ± 0.4 keV, or 29.6 ± 0.5 keV in their
best-fit models for an NPEX, CompTT, or Fermi-Dirac plus blackbody continuum model,
respectively, in their analysis of the Suzaku observation of Cep X-4
in 2014. The corresponding widths of the Gaussian absorption
lines used for fitting the CRSFs differ from each other as well between
the different continuum models.
<cit.> and
<cit.>, both using a Fermi-Dirac continuum model, found cyclotron lines at
30.7_-1.9^+1.8 keV and
30.39_-0.14^+0.17 keV (and
29.42_-0.24^+0.27 keV during the decline of the 2014
outburst), respectively.
The analyses of all these works differ by more than the continuum model: some
use additional model components to model the continuum and/or asymmetries in
the fundamental line profile, different models are used to describe the line
shape including Gaussian absorption lines and pseudo-Lorentzian profiles such
as , different instruments might be responsible for systematic
deviations, and the physical behaviour of the source itself, such as variations
of the height of the line forming region with luminosity <cit.>, might lead to different magnetic field strengths — and
therefore differences of the measured cyclotron line energy — between
observations. The range of the values from the previous works listed above,
from ∼28 keV to ∼31 keV, and differences of
∼8% resulting from the application of different continuum models to
the same observation further illustrate the incomparability of
cyclotron line energies resulting from differing analyses.
In order to compare the physical line model with an empirical model of the line
shape on the basis of the same continuum, we replaced the physical CRSF
model by a multiplicative Gaussian absorption line model (gabs) at the energy
where such a simple absorption line would be expected for the best-fit magnetic
field strength, that is, E_gabs = B_12 × 11.57 keV ≈ 41.96^+0.35_-0.36 keV in the frame of the neutron
star (i.e., before redshifting), where B_12 is the magnetic field strength in units of 10^12 G
[Note that the Gaussian absorption line has been
substituted in place of the physical cyclotron line model, that is, it is still
evaluated within the redshift model and is therefore shifted to ∼32.3 keV
by the redshift.].
Leaving this energy and all other parameters frozen while fitting the width and
the depth of the empirical absorption line results in a reduced χ^2 of
1.57 for 876 degrees of freedom. The corresponding residuals are shown in
Fig. <ref>c, which clearly illustrates that the centroid energy of the
Gaussian absorption line model is off the cyclotron line. Only when all parameters of the
empirical line model are left free can the profile of the cyclotron line be represented
by a Gaussian absorption line with a centroid energy of 40.01_-0.14^+0.15 keV before
redshift (red. χ^2 of 1.17 for 875 d.o.f.).
The value of E = 40.01 keV / (1 + z) = 30.78_-0.11^+0.12 keV after redshift reduction is almost consistent with the value
found by <cit.> for the higher energetic Gaussian absorption line used to model
the asymmetric line shape. The ratio of this model to the best-fit model using
the physical CRSF model is shown in Fig. <ref>d. Evidently, the shapes
of the line models differ significantly as expected.
The model for Cep X-4 presented here makes no claim of
uniqueness. Instead many assumptions are made: other geometries might
fit the spectrum equally well, neglecting bulk velocity becomes
questionable if either the continuum or the cyclotron line are formed
in a region of the accretion column with a significant velocity, and
the usage of an empirical continuum with a “10 keV” feature —
the origin of which is unclear — is questionable per se,
to name just a few. Furthermore, only one viewing angle to the
magnetic field axis is taken into account, which is unrealistic considering
that the data are averaged over pulse phase. The model
provides the possibility to average over multiple angles in order to
overcome this inaccuracy albeit in an approximate way (see Sect. <ref>
for details). The width of the cyclotron line is strongly affected by
the viewing angle and the temperature. Here, the width is mainly fitted
by the angle to the magnetic field axis. Magnetic field, temperature,
and velocity gradients are neglected, though they might largely influence
the CRSF line width as well. Studies with more complex configurations
of the CRSF medium are needed for estimating their influence quantitatively.
Physical continuum models should be combined with the physical model for the
CRSF line shape presented here. Their combined application to many observations
of diverse sources covering a large parameter space of both continuum and line
profile model parameters — some of these parameters might be tied together
— might help to further constrain the highly degenerate parameter regime.
§ SUMMARY
We have described our Monte Carlo code for the generation of synthetic
cyclotron lines. The simulated line profiles have complex shapes and show a
strong dependency on the viewing angle. We have compared our results to the
work from <cit.> and find an overall good agreement for both the
slab and the cylinder geometry.
In order to show the influence of the continuum, the Her X-1 model
spectrum from <cit.> has been used to generate synthetic spectra for
several optical depths and angles to the magnetic field.
A new XSPEC fitting model, cyclofs, which works on precalculated
tables storing the response of the Monte Carlo simulation to monoenergetic
photon injections, has been introduced and is available online (see link in
footnote, page 1).
Using this model, which describes the cyclotron line shape on physical
grounds, we successfully fitted NuSTAR spectra of Cep X-4.
The resulting magnetic field strength, B = 3.6 × 10^12 G,
has been found to differ significantly from fits with a Gaussian absorption
line. The reason for this difference might lie in the theoretically complex
profile <cit.> of the fundamental
cyclotron line. This might lead to a different best fit value for the magnetic
field strength — even for the almost symmetrical and smooth line shapes seen
in observations — if the modeled CRSF shape is taken into account. Further
studies of more complex physical setups, including the exploration of other
geometries and parameter gradients, the inclusion of angular mixing due to
relativistic light bending, and a combination with an equally physical
continuum model are necessary in order to obtain a fully self consistent
spectrum of the accretion column. The simulation code presented here provides
the flexibility to address these challenges. The associated XSPEC model
mechanism allows for making the corresponding results available in an easily
usable and familiar way.
Other applications of the new code include the simulation of observable
phase lags <cit.> at the CRSF energy <cit.>, a
combination with models for relativistic light bending to obtain self
consistent pulse profiles of the phase dependent CRSF behavior (Falkner et al.,
2016, in prep.), comparisons to observational data from other sources, and a
study of the dependence of the CRSF profile on the magnetic field geometry.
This work has been partially funded by the Deutsche Forschungsgemeinschaft
under DFG grant number WI 1860/11-1 and by the Deutsches Zentrum für Luft-
und Raumfahrt under DLR grant numbers 50 OR 1113, 50 OR 1207, 50 OR 1410,
and 50 OR 1411. We also acknowledge the Russian Foundation for Basic Research
grant number 14-02-91345. MTW is supported by the Chief of Naval Research and
by the National Aeronautics and Space Administration Astrophysical Data
Analysis Program. We thank the International Space Science Institute in Bern
for inspiring team meetings. The fruitful discussions within the MAGNET
collaboration also had a very positive impact on this work. The figures in this
work have been produced using the package by <cit.>.
§ THE XSPEC MODEL
Our XSPEC model uses precalculated Green's function tables to imprint
synthetic CRSFs on arbitrary continua. These Green's tables have
been calculated from the response of a medium to monoenergetic photon
injection on a wide range of parameters. Therefore each geometry has
its own Green's function table, which can be selected for use by
* setting the environment variable CYCLOFS_TABLE to the
table's location
* defining the table location as an initialization string in the
model definition file
Table <ref> lists all model parameters. At least one
optical depth, the magnetic field, the temperature, and the angle or
the angular range and the number of angular points must be set for a
successful model evaluation.
§.§ Interpolation
The model can be configured to interpolate between parameter values in different ways depending on the
parameter: currently a linear interpolation scheme is utilized for all parameters.
Model spectra are also extrapolated if the desired parameter
combination is not covered by the Green's table; a linear scheme is used
here as well.
The model prints out a warning message if
extrapolation is used. The extrapolation method used for the optical depth is
slightly different from the others: for a given optical depth outside the table
range, the model convolves its output iteratively with a suitable optical depth
within the table range. This extrapolation-via-successive-convolution method
turned out to be as accurate as linear extrapolation close to the boundaries
but yields much better results than any other method for extrapolation over
orders of magnitude. This is especially useful for studies in the regime of
high optical depths where calculation time increases significantly. The
accuracy of such an extrapolation over orders of magnitude is questionable,
though, as it depends on the assumption of isotropic angular redistribution,
which is normally not justified, as we have shown before <cit.>.
It is, nevertheless, very useful for studying the influence of increased line
depth on the overall combined model flux and therefore included as default
behavior with a warning message.
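Schematically, this extrapolation can be sketched as follows (our own illustration; apply_response stands in for the application of an in-table Green's function response and is not part of the distributed code):

import numpy as np

def extrapolate_depth(spec, tau_target, tau_max, apply_response):
    # Reach an optical depth outside the table by iterated application of an
    # in-table response; only strictly valid for isotropic angular redistribution.
    n_steps = int(np.ceil(tau_target / tau_max))  # number of successive convolutions
    tau_step = tau_target / n_steps               # suitable depth within the table range
    for _ in range(n_steps):
        spec = apply_response(spec, tau_step)
    return spec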
§.§ Angular averaging
The model is designed to calculate spectra for exactly one viewing angle,
defined by its cosine mu = cos ϑ relative to the
magnetic field axis. It provides the possibility of averaging over a range
of angles to the B-field axis, by returning the mean value of
mu_N spectra between mu_min and mu_max for each
energy bin. If mu_N is set to 0, only one spectrum for the angle
mu is calculated. If it is set to 1, one spectrum right in the middle of
the angular range specified by mu_min and mu_max, that is,
for an angle μ = 0.5 (μ_min + μ_max), is returned. A
mu_N value of 2 will result in a spectrum averaged from two points,
namely mu_min and mu_max. For higher values, the additional
points are equally spread over the angular range. Note that the parameter
mu is not used at all if mu_N is larger than zero and should
be frozen during fitting in order to avoid useless iterations. The same applies
to the parameters mu_min and mu_max in the case of
mu_N = 0.
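The angular averaging just described amounts to the following logic (a sketch of ours; spectrum_at stands in for the table-based evaluation of a single-angle spectrum):

import numpy as np

def averaged_spectrum(spectrum_at, mu, mu_min, mu_max, mu_N):
    if mu_N == 0:                                   # single spectrum at angle mu
        return spectrum_at(mu)
    if mu_N == 1:                                   # midpoint of the angular range
        return spectrum_at(0.5 * (mu_min + mu_max))
    mus = np.linspace(mu_min, mu_max, mu_N)         # endpoints included, equally spread
    return sum(spectrum_at(m) for m in mus) / mu_N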
|
http://arxiv.org/abs/1701.07507v3 | 20170125222320 | The Fifth Moment of modular L-functions | [
"Eren Mehmet Kiral",
"Matthew P. Young"
] | math.NT | [
"math.NT"
] |
|
http://arxiv.org/abs/1701.07624v2 | 20170126092712 | Two-dimensional Fermi gas in antiparallel magnetic fields | [
"Takaaki Anzai",
"Yusuke Nishida"
] | cond-mat.quant-gas | [
"cond-mat.quant-gas"
] |
Department of Physics, Tokyo Institute of Technology,
Ookayama, Meguro, Tokyo 152-8551, Japan
We study a two-dimensional Fermi gas with an attractive interaction subjected to synthetic magnetic fields assumed to be mutually antiparallel for two different spin components.
By employing the mean-field approximation, we find that its phase diagram at zero temperature consists of pair superfluid and quantum spin Hall insulator phases and closely resembles that of the Bose-Hubbard model.
The resulting two phases are separated by a second-order quantum phase transition classified into the universality class of either the dilute Bose gas or the XY model.
We also show that the pairing gap can be enhanced significantly by the antiparallel magnetic fields as a consequence of magnetic catalysis, which may facilitate the realization of the pair superfluid in two dimensions by ultracold atom experiments.
Two-dimensional Fermi gas in antiparallel magnetic fields
Takaaki Anzai and Yusuke Nishida
January 2017
=========================================================
§ INTRODUCTION
After the first realization of the Bose-Einstein condensation (BEC) in ultracold atomic gases <cit.>, overwhelming experimental progress has been made to allow us to control system parameters at will <cit.>.
For example, the interaction between atoms can be controlled by varying a magnetic field through a Feshbach resonance <cit.>.
This technique applied to two-component Fermi gases led to the realization of a crossover to a Bardeen-Cooper-Schrieffer (BCS) superfluid of Cooper pairs from a BEC of tightly bound molecules <cit.>.
Furthermore, the dimensionality can be controlled by confining atoms with an optical lattice generated by two counterpropagating laser beams <cit.>.
Therefore, the BCS-BEC crossover in two dimensions (2D) has also come close to the reach of experimental investigation <cit.>, which may provide important insights into layered superconductors <cit.>.
More recently, enormous research efforts have been devoted to develop experimental techniques to create synthetic magnetic fields <cit.>.
One approach is to couple internal states of atoms by laser beams so that neutral atoms behave like charged particles in a magnetic field <cit.>, which opened up a new avenue toward the realization of quantum Hall physics with ultracold atoms.
This approach was further extended to create “antiparallel” magnetic fields, which act on two different spin components of atoms with the same magnitude but in opposite directions <cit.>.
While they were implemented for a two-component Bose gas of rubidium atoms to observe a spin Hall effect, the same technique is in principle applicable to two-component Fermi gases.
There have also been proposals to create antiparallel magnetic fields by inducing laser-assisted tunneling in a tilted optical lattice <cit.>, aiming at the realization of the quantum spin Hall (QSH) effect <cit.>.
Motivated by these experimental abilities to control the interaction, dimensionality, and magnetic fields, we study a 2D Fermi gas with an attractive interaction between two spin components in antiparallel magnetic fields.
While the attractive interaction generally favors the Cooper pairing, the antiparallel magnetic fields lead to Landau-level formation with opposite chiralities for different spin components of fermions.
How do they compete or cooperate to give rise to interesting physics?
This is the subject to be elucidated in this Rapid Communication.
Besides its own importance, our system may also be viewed as a simulator of analogous phenomena in other fields, such as exciton condensation and chiral condensation in a magnetic field, where two particles forming a pair have opposite charges and thus experience opposite Lorentz forces <cit.>.
For related theoretical works on 2D Bose and 3D Fermi gases as well as in an optical lattice, see Refs. <cit.>.
In what follows, we set ħ=1 and denote the magnetic length and the cyclotron frequency by ℓ_B≡1/√(B) and ω_B≡ B/m, respectively.
We also use shorthand notations, (x) ≡ (τ,𝐫), ∫dx ≡ ∫_0^β dτ ∬_0^L d^2𝐫, (k) ≡ (ω_n,k_x,l), ∑_k ≡ ∑_ω_n∑_k_x∑_l=0^∞, and δ_kk' ≡ δ_ω_nω'_nδ_k_xk'_xδ_ll', where ω_n ≡ 2π(n+1/2)/β and k_x ≡ 2π n/L are the Matsubara frequency and the wave number, respectively, and l=0,1,2,… labels Landau levels.
The inverse temperature β and the linear size of the system L are formally kept finite in Sec. <ref>, while the zero temperature and thermodynamic limits β,L→∞ are taken at the end.
§ FUNCTIONAL INTEGRAL FORMULATION
Let us start with spin-1/2 fermions in 2D subjected to spin-dependent vector potentials, which are described by the following imaginary-time action:
S = ∑_σ=↑,↓ ∫dx ϕ_σ^*(x)
[∂_τ + [-i∇+A_σ(𝐫)]^2/2m - μ]ϕ_σ(x)
- g∫dx ϕ_↑^*(x)ϕ_↓^*(x)ϕ_↓(x)ϕ_↑(x).
Here m and μ are the mass and chemical potential common to both spin components of fermions and the coupling constant g>0 is assumed to be attractive.
We also choose the vector potentials as
A_↑(𝐫) = -A_↓(𝐫) = -By 𝐞_x,
so that different spin components experience antiparallel magnetic fields with the magnitude B>0; ∇×A_↑(𝐫) = -∇×A_↓(𝐫) = B 𝐞_z.
Note that this particular way of introducing the magnetic fields preserves the time-reversal symmetry as well as the spin conservation.
To facilitate our theoretical analysis, it is convenient to employ the Hubbard-Stratonovich transformation <cit.>,
S' = ∫dx |Δ(x)|^2/g - ∫dx Φ^†(x)
×[ -∂_τ - [-i∇+A_↑(𝐫)]^2/2m + μ    Δ(x); Δ^*(x)    -∂_τ + [-i∇-A_↓(𝐫)]^2/2m - μ ]Φ(x),
We then expand the Nambu-Gor'kov spinor Φ(x)≡[ϕ_(x),ϕ_^*(x)]^T over the eigenfunctions of the single-particle Hamiltonian.
Here the eigenfunction in the Landau gauge is
χ_k(x) ≡e^-iω_nτ+ik_xx/√(β L) F_l(y-k_xℓ_B^2)
with
F_l(y) ≡e^-(y/ℓ_B)^2/2/√(2^ll!π^1/2ℓ_B)
H_l(y/ℓ_B)
being the lth eigenfunction of the harmonic oscillator, which solves the Schrödinger equation [(-i∇ - By𝐞_x)^2/(2m)]χ_k(x) = ϵ_l χ_k(x) with the single-particle energy provided by ϵ_l ≡ (l+1/2)ω_B <cit.>.
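For numerical work these eigenfunctions are easy to evaluate; a minimal sketch of ours using SciPy (ℓ_B set to unity, moderate l to avoid overflow of 2^l l!):

import numpy as np
from scipy.special import eval_hermite, factorial

def F(l, y, lB=1.0):
    # l-th harmonic oscillator eigenfunction F_l(y) defined above
    norm = 1.0 / np.sqrt(2.0**l * factorial(l) * np.sqrt(np.pi) * lB)
    return norm * np.exp(-0.5 * (y / lB)**2) * eval_hermite(l, y / lB)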
By substituting the resulting expansion Φ(x)=∑_kχ_k(x)Φ̃(k), the action is now expressed as
S' = ∫dx |Δ(x)|^2/g
- ∑_k∑_k'Φ̃^†(k)G^-1(k,k')Φ̃(k'),
where the inverse Nambu-Gor'kov propagator is defined by
G^-1(k,k') ≡[ (iω_n-ξ_l)δ_kk' Δ̃(k,k'); Δ̃^*(k',k) (iω_n+ξ_l)δ_kk' ]
with ξ_l ≡ ϵ_l-μ and Δ̃(k,k') ≡ ∫dx χ_k^*(x)Δ(x)χ_k'(x).
Finally, by integrating out the fermion fields, we obtain the effective action written in terms of the pair field:
S_eff = ∫dx |Δ(x)|^2/g - Tr ln[G^-1(k,k')].
While this expression is formally exact with the understanding of the renormalization procedure discussed below, some approximation needs to be employed to proceed.
We first employ the mean-field approximation in Sec. <ref> to investigate the phase diagram at zero temperature and then employ the Ginzburg-Landau expansion in Sec. <ref> to elucidate the universality class of quantum phase transitions therein.
§ MEAN-FIELD PHASE DIAGRAM AT ZERO TEMPERATURE
§.§ Phase boundary
To investigate the phase diagram at zero temperature, we employ the mean-field approximation.
By setting Δ(x)=Δ_0>0 and thus Δ̃(k,k')=Δ_0δ_kk', the thermodynamic potential Ω_MF ≡ S_eff/(β L^2) is found to be
Ω_MF = Δ_0^2/g
- mω_B/2π ∑_l=0^∞ (E_l-ξ_l)θ(Λ-ϵ_l),
where E_l≡√(ξ_l^2+Δ_0^2) is the quasiparticle energy and an energy cutoff Λ is introduced because the second term turns out to be logarithmically divergent.
This logarithmic divergence should be canceled by the same form of divergence hidden in the coupling constant <cit.>:
1/g = m/2π ∫_0^Λ dϵ/(2ϵ+ϵ_b).
Here ϵ_b>0 has the physical meaning of the binding energy of a two-body bound state in the vacuum without magnetic fields, which always exists for any g>0 in 2D and thus can be used to parametrize the attraction <cit.>.
By separating out the divergent piece from the second term, combining it with the first term, and then taking the limit of Λ→∞, the thermodynamic potential is made manifestly cutoff independent as
Ω_MF = mΔ_0^2/4π [ln(2ω_B/ϵ_b)
+ ψ(1/2-μ/ω_B)]
- mω_B/2π ∑_l=0^∞ (E_l-ξ_l-Δ_0^2/(2ξ_l)),
where ψ(z) is the digamma function.
The order parameter Δ_0 is determined so as to minimize the thermodynamic potential (<ref>).
For any fixed chemical potential μ/ω_B but not right at a Landau level (i.e., ξ_l≠0 for all l∈ℕ_0), we find a second-order quantum phase transition from a normal state with Δ_0=0 to a superfluid state with Δ_0>0 by increasing the two-body binding energy _b/ω_B.
The resulting phase boundary is obtained by solving ^̣2Ω_/Δ̣_0^2=0 at Δ_0=0 [cf. Eq. (<ref>) below], which leads to
ϵ_b/ω_B
= 2exp[-ψ(1/2-μ/ω_B)
+ 2ψ(1/2-μ/ω_B+ν)]
with ν≡⌊μ/ω_B+1/2⌋θ(μ) being the filling factor per spin.
As one can see from the phase diagram depicted in Fig. <ref>, the normal state with Δ_0=0 is divided into different phases corresponding to different filling factors ν=0,1,2,….
The number density therein is provided by n=(mω_B/π)ν and thus the phase for ν=0 is just the vacuum where no particles are present.
On the other hand, for ν>0, each spin component of fermions fills ν Landau levels so that our system becomes the QSH insulator composed of two quantum Hall states with opposite chiralities for different spin components <cit.>.
The rest of the phase diagram is occupied by the pair superfluid state with the nonzero pairing gap Δ_0>0, whose behavior shall be investigated further.
§.§ Pairing gap
The BCS-BEC crossover in the superfluid phase is described by the number density equation,
n = -∂Ω_MF/∂μ
= mω_B/2π ∑_l=0^∞ (1-ξ_l/E_l),
together with the gap equation, ∂Ω_MF/∂Δ_0 = 0:
ln(2ω_B/ϵ_b) + ψ(1/2-μ/ω_B)
- ω_B ∑_l=0^∞ (1/E_l-1/ξ_l) = 0.
In the strong-coupling limit ϵ_b→∞, we have -μ≫Δ_0 so that the number density equation reduces to n≃(m/4π)Δ_0^2/(-μ), while the gap equation reduces to ln(-2μ/ϵ_b)+(Δ_0/μ)^2/4≃0.
Therefore, we find
μ → ϵ_F - ϵ_b/2 and Δ_0 → √(2ϵ_Fϵ_b)
with ϵ_F ≡ π n/m being the Fermi energy, which coincide with the results for the 2D BCS-BEC crossover without magnetic fields <cit.>.
This is understandable because our system in the strong-coupling limit consists of tightly bound spin-singlet molecules, for which the antiparallel magnetic fields cancel out.
On the other hand, the superfluid phase can persist down to the weak-coupling limit ϵ_b→0 only when the chemical potential lies right at a Landau level, i.e., ξ_l=0 for some l∈ℕ_0.
The number density in this case reduces to n→(mω_B/π)(l+1/2), while the gap equation is solved by
Δ_0/ω_B → 1/[ln(2ω_B/ϵ_b) - 2γ - ψ(l+1)].
Therefore, we find that the pairing gap in terms of the small coupling constant (<ref>) is expressed as
Δ_0/ω_B→mg/4π,
which exhibits the remarkable linear dependence in contrast to the usual exponential dependence without magnetic fields; Δ_0∝ e^-2π/(mg) <cit.>.
This results from the divergent density of states at Landau levels and such an enhancement of dynamical symmetry breaking by magnetic fields is generally referred to as “magnetic catalysis” <cit.>.
The pairing gap beyond the weak-coupling limit is plotted in Fig. <ref>, which is obtained by solving the coupled Eqs. (<ref>) and (<ref>).
Here one can see that the pairing gap is indeed enhanced significantly by the antiparallel magnetic fields, which may facilitate the realization of the pair superfluid in 2D by ultracold atom experiments.
§ UNIVERSALITY CLASS OF QUANTUM PHASE TRANSITIONS
The phase diagram obtained in the previous section closely resembles that of the Bose-Hubbard model which consists of superfluid and Mott insulator phases <cit.>.
Here it was revealed that the quantum phase transition between them is classified into the universality class of either the dilute Bose gas or the XY model.
This fact and the mutual resemblance motivate us to elucidate the universality class of the quantum phase transition in our system.
In the vicinity of the quantum phase transition, the effective action (<ref>) can be expanded with respect to the pair field Δ(x) assumed to be small and smooth.
By keeping terms up to the quartic order in Δ(x) and the quadratic order in derivatives, the Ginzburg-Landau action after some straightforward calculations is found to be
S_GL = ∫dx [a_1Δ^*(x)∂_τΔ(x) + a_2|∂_τΔ(x)|^2
+ b_2|∇Δ(x)|^2 + c_2|Δ(x)|^2 + c_4|Δ(x)|^4].
Here an unimportant constant term is dropped and the other coefficients are provided by
a_1 = -m/8πω_B[ψ'(1/2-μ/ω_B)
- 2ψ'(1/2-μ/ω_B+ν)],
a_2 = c_4 = m/32πω_B^2[ψ”(1/2-μ/ω_B)
- 2ψ”(1/2-μ/ω_B+ν)],
b_2 = μ/8πω_B^2 [2ψ(1/2-μ/ω_B)
- ψ(-μ/ω_B) - ψ(1-μ/ω_B)
- 4ψ(1/2-μ/ω_B+ν)
+ 2ψ(-μ/ω_B+ν)
+ 2ψ(1-μ/ω_B+ν)] ,
c_2 = m/4π[ln(2ω_B/ϵ_b)
- ψ(1/2-μ/ω_B)
+ 2ψ(1/2-μ/ω_B+ν)].
While a_2, b_2, and c_4 are always positive, c_2 changes its sign when the phase boundary located at Eq. (<ref>) is crossed.
Furthermore, because we can find the relationship,
a_1 = -(1/2) ∂c_2/∂μ,
a_1 vanishes at ∂c_2/∂μ = 0 corresponding to the tip of each QSH phase marked by the red dot in Fig. <ref>.
When a_1=0, the Ginzburg-Landau action (<ref>) is invariant under the exchange of Δ(x)↔Δ^*(x), which signals the particle-hole symmetry emergent in the low-energy limit.
Accordingly, the equation of motion obeyed by the pair field, δ S_/δΔ^*(x)=0, becomes the nonlinear Klein-Gordon equation, which is relativistic and Lorentz invariant.
Here the quantum phase transition turns out to be in the universality class of the XY model <cit.>.
On the other hand, away from the tip of the QSH phase, the a_2 term in the action is negligible with respect to the nonvanishing a_1 term.
In this case, the equation of motion is the usual Gross-Pitaevskii equation and thus the quantum phase transition falls into the universality class of the dilute Bose gas <cit.>.
§ CONCLUSION AND OUTLOOK
In this Rapid Communication, we studied competition and cooperation between the attractive interaction and the antiparallel magnetic fields in a 2D Fermi gas.
When the chemical potential does not match any Landau levels, the antiparallel magnetic fields compete with the Cooper pairing, i.e., our system becomes a QSH insulator and can be a pair superfluid only by a sufficient attraction (see Fig. <ref>).
By employing the mean-field approximation at zero temperature, we found that these two phases are separated by a second-order quantum phase transition.
Its universality class turns out to be of the XY model at the tip of each QSH phase and of the dilute Bose gas elsewhere, which closely resembles the phase diagram of the Bose-Hubbard model.
On the other hand, when the chemical potential matches some Landau level, the antiparallel magnetic fields in turn cooperate with the Cooper pairing, i.e., not only our system can be a pair superfluid by an infinitesimal attraction, but also the pairing gap is significantly enhanced (see Fig. <ref>).
In particular, we showed that the usual exponential dependence on the small coupling constant is promoted to the remarkable linear dependence as a consequence of magnetic catalysis.
Although the role of fluctuations still needs to be understood, our finding here may facilitate the realization of the pair superfluid in 2D by ultracold atom experiments.
As for a future work, we plan to extend our study to finite temperature as well as to finite density imbalance between two different spin components of fermions.
In particular, it was revealed in the absence of magnetic fields that the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state, where the Cooper pairing takes place with nonzero momentum <cit.>, emerges in the phase diagram of an imbalanced 2D Fermi gas <cit.>.
It must be worthwhile to elucidate how the antiparallel magnetic fields compete or cooperate with the FFLO state.
The authors thank Ippei Danshita and Shunsuke Furukawa for valuable discussions.
This work was supported by JSPS KAKENHI Grants No. JP15K17727 and No. JP15H05855, as well as International Research Center for Nanoscience and Quantum Physics, Tokyo Institute of Technology.
Anderson:1995
M. H. Anderson, J. R. Ensher, M. R. Matthews, C. E. Wieman, and E. A. Cornell,
“Observation of Bose-Einstein condensation in a dilute atomic vapor,”
https://doi.org/10.1126/science.269.5221.198
Science 269, 198-201 (1995).
Bradley:1995
C. C. Bradley, C. A. Sackett, J. J. Tollett, and R. G. Hulet,
“Evidence of Bose-Einstein condensation in an atomic gas with attractive interactions,”
https://doi.org/10.1103/PhysRevLett.75.1687
Phys. Rev. Lett. 75, 1687-1690 (1995).
Davis:1995
K. B. Davis, M.-O. Mewes, M. R. Andrews, N. J. van Druten, D. S. Durfee, D. M. Kurn, and W. Ketterle,
“Bose-Einstein condensation in a gas of sodium atoms,”
https://doi.org/10.1103/PhysRevLett.75.3969
Phys. Rev. Lett. 75, 3969-3973 (1995).
Bloch:2008
I. Bloch, J. Dalibard, and W. Zwerger,
“Many-body physics with ultracold gases,”
https://doi.org/10.1103/RevModPhys.80.885
Rev. Mod. Phys. 80, 885-964 (2008).
Inouye:1998
S. Inouye, M. R. Andrews, J. Stenger, H.-J. Miesner, D. M. Stamper-Kurn, and W. Ketterle,
“Observation of Feshbach resonances in a Bose-Einstein condensate,”
https://doi.org/10.1038/32354
Nature (London) 392, 151-154 (1998).
Courteille:1998
Ph. Courteille, R. S. Freeland, D. J. Heinzen, F. A. van Abeelen, and B. J. Verhaar,
“Observation of a Feshbach resonance in cold atom scattering,”
https://doi.org/10.1103/PhysRevLett.81.69
Phys. Rev. Lett. 81, 69-72 (1998).
Regal:2004
C. A. Regal, M. Greiner, and D. S. Jin,
“Observation of resonance condensation of fermionic atom pairs,”
https://doi.org/10.1103/PhysRevLett.92.040403
Phys. Rev. Lett. 92, 040403 (2004).
Zwierlein:2004
M. W. Zwierlein, C. A. Stan, C. H. Schunck, S. M. F. Raupach, A. J. Kerman, and W. Ketterle,
“Condensation of pairs of fermionic atoms near a Feshbach resonance,”
https://doi.org/10.1103/PhysRevLett.92.120403
Phys. Rev. Lett. 92, 120403 (2004).
Bloch:2005
I. Bloch,
“Ultracold quantum gases in optical lattices,”
https://doi.org/10.1038/nphys138
Nat. Phys. 1, 23-30 (2005).
Modugno:2003
G. Modugno, F. Ferlaino, R. Heidemann, G. Roati, and M. Inguscio,
“Production of a Fermi gas of atoms in an optical lattice,”
https://doi.org/10.1103/PhysRevA.68.011601
Phys. Rev. A 68, 011601(R) (2003).
Martiyanov:2010
K. Martiyanov, V. Makhalov, and A. Turlapov,
“Observation of a two-dimensional Fermi gas of atoms,”
https://doi.org/10.1103/PhysRevLett.105.030404
Phys. Rev. Lett. 105, 030404 (2010).
Frohlich:2011
B. Fröhlich, M. Feld, E. Vogt, M. Koschorreck, W. Zwerger, and M. Köhl,
“Radio-frequency spectroscopy of a strongly interacting two-dimensional Fermi gas,”
https://doi.org/10.1103/PhysRevLett.106.105301
Phys. Rev. Lett. 106, 105301 (2011).
Dyke:2011
P. Dyke, E. D. Kuhnle, S. Whitlock, H. Hu, M. Mark, S. Hoinka, M. Lingham, P. Hannaford, and C. J. Vale,
“Crossover from 2D to 3D in a weakly interacting Fermi gas,”
https://doi.org/10.1103/PhysRevLett.106.105304
Phys. Rev. Lett. 106, 105304 (2011).
Feld:2011
M. Feld, B. Fröhlich, E. Vogt, M. Koschorreck, and M. Köhl,
“Observation of a pairing pseudogap in a two-dimensional Fermi gas,”
https://doi.org/10.1038/nature10627
Nature (London) 480, 75-78 (2011).
Sommer:2012
A. T. Sommer, L. W. Cheuk, M. J. H. Ku, W. S. Bakr, and M. W. Zwierlein,
“Evolution of fermion pairing from three to two dimensions,”
https://doi.org/10.1103/PhysRevLett.108.045302
Phys. Rev. Lett. 108, 045302 (2012).
Vogt:2012
E. Vogt, M. Feld, B. Fröhlich, D. Pertot, M. Koschorreck, and M. Köhl,
“Scale invariance and viscosity of a two-dimensional Fermi gas,”
https://doi.org/10.1103/PhysRevLett.108.070404
Phys. Rev. Lett. 108, 070404 (2012).
Zhang:2012
Y. Zhang, W. Ong, I. Arakelyan, and J. E. Thomas,
“Polaron-to-polaron transitions in the radio-frequency spectrum of a quasi-two-dimensional Fermi gas,”
https://doi.org/10.1103/PhysRevLett.108.235302
Phys. Rev. Lett. 108, 235302 (2012).
Koschorreck:2012
M. Koschorreck, D. Pertot, E. Vogt, B. Fröhlich, M. Feld, and M. Köhl,
“Attractive and repulsive Fermi polarons in two dimensions,”
https://doi.org/10.1038/nature11151
Nature (London) 485, 619-622 (2012).
Baur:2012
S. K. Baur, B. Fröhlich, M. Feld, E. Vogt, D. Pertot, M. Koschorreck, and M. Köhl,
“Radio-frequency spectra of Feshbach molecules in quasi-two-dimensional geometries,”
https://doi.org/10.1103/PhysRevA.85.061604
Phys. Rev. A 85, 061604(R) (2012).
Frohlich:2012
B. Fröhlich, M. Feld, E. Vogt, M. Koschorreck, M. Köhl, C. Berthod, and T. Giamarchi,
“Two-dimensional Fermi liquid with attractive interactions,”
https://doi.org/10.1103/PhysRevLett.109.130403
Phys. Rev. Lett. 109, 130403 (2012).
Makhalov:2014
V. Makhalov, K. Martiyanov, and A. Turlapov,
“Ground-state pressure of quasi-2D Fermi and Bose gases,”
https://doi.org/10.1103/PhysRevLett.112.045301
Phys. Rev. Lett. 112, 045301 (2014).
Ong:2015
W. Ong, C. Cheng, I. Arakelyan, and J. E. Thomas,
“Spin-imbalanced quasi-two-dimensional Fermi gases,”
https://doi.org/10.1103/PhysRevLett.114.110403
Phys. Rev. Lett. 114, 110403 (2015).
Ries:2015
M. G. Ries, A. N. Wenz, G. Zürn, L. Bayha, I. Boettcher, D. Kedar, P. A. Murthy, M. Neidig, T. Lompe, and S. Jochim,
“Observation of pair condensation in the quasi-2D BEC-BCS crossover,”
https://doi.org/10.1103/PhysRevLett.114.230401
Phys. Rev. Lett. 114, 230401 (2015).
Murthy:2015
P. A. Murthy, I. Boettcher, L. Bayha, M. Holzmann, D. Kedar, M. Neidig, M. G. Ries, A. N. Wenz, G. Zürn, and S. Jochim,
“Observation of the Berezinskii-Kosterlitz-Thouless phase transition in an ultracold Fermi gas,”
https://doi.org/10.1103/PhysRevLett.115.010401
Phys. Rev. Lett. 115, 010401 (2015).
Fenech:2016
K. Fenech, P. Dyke, T. Peppler, M. G. Lingham, S. Hoinka, H. Hu, and C. J. Vale,
“Thermodynamics of an attractive 2D Fermi gas,”
https://doi.org/10.1103/PhysRevLett.116.045302
Phys. Rev. Lett. 116, 045302 (2016).
Boettcher:2016
I. Boettcher, L. Bayha, D. Kedar, P. A. Murthy, M. Neidig, M. G. Ries, A. N. Wenz, G. Zürn, S. Jochim, and T. Enss,
“Equation of state of ultracold fermions in the 2D BEC-BCS crossover region,”
https://doi.org/10.1103/PhysRevLett.116.045303
Phys. Rev. Lett. 116, 045303 (2016).
Cheng:2016
C. Cheng, J. Kangara, I. Arakelyan, and J. E. Thomas,
“Fermi gases in the two-dimensional to quasi-two-dimensional crossover,”
https://doi.org/10.1103/PhysRevA.94.031606
Phys. Rev. A 94, 031606(R) (2016).
Randeria:1989
M. Randeria, J.-M. Duan, and L.-Y. Shieh,
“Bound states, Cooper pairing, and Bose condensation in two dimensions,”
https://doi.org/10.1103/PhysRevLett.62.981
Phys. Rev. Lett. 62, 981-984 (1989).
Randeria:1990
M. Randeria, J.-M. Duan, and L.-Y. Shieh,
“Superconductivity in a two-dimensional Fermi gas: Evolution from Cooper pairing to Bose condensation,”
https://doi.org/10.1103/PhysRevB.41.327
Phys. Rev. B 41, 327-343 (1990).
Dalibard:2011
J. Dalibard, F. Gerbier, G. Juzeliūnas, and P. Öhberg,
“Colloquium: Artificial gauge potentials for neutral atoms,”
https://doi.org/10.1103/RevModPhys.83.1523
Rev. Mod. Phys. 83, 1523-1543 (2011).
Goldman:2014
N. Goldman, G. Juzeliūnas, P. Öhberg, and I. B. Spielman,
“Light-induced gauge fields for ultracold atoms,”
https://doi.org/10.1088/0034-4885/77/12/126401
Rep. Prog. Phys. 77, 126401 (2014).
Lin:2009
Y.-J. Lin, R. L. Compton, K. Jiménez-García, J. V. Porto, and I. B. Spielman,
“Synthetic magnetic fields for ultracold neutral atoms,”
https://doi.org/10.1038/nature08609
Nature (London) 462, 628-632 (2009).
Beeler:2013
M. C. Beeler, R. A. Williams, K. Jiménez-García, L. J. LeBlanc, A. R. Perry, and I. B. Spielman,
“The spin Hall effect in a quantum gas,”
https://doi.org/10.1038/nature12185
Nature (London) 498, 201-204 (2013).
Aidelsburger:2013
M. Aidelsburger, M. Atala, M. Lohse, J. T. Barreiro, B. Paredes, and I. Bloch,
“Realization of the Hofstadter Hamiltonian with ultracold atoms in optical lattices,”
https://doi.org/10.1103/PhysRevLett.111.185301
Phys. Rev. Lett. 111, 185301 (2013).
Kennedy:2013
C. J. Kennedy, G. A. Siviloglou, H. Miyake, W. C. Burton, and W. Ketterle,
“Spin-orbit coupling and quantum spin Hall effect for neutral atoms without spin flips,”
https://doi.org/10.1103/PhysRevLett.111.225301
Phys. Rev. Lett. 111, 225301 (2013).
Kane:2005a
C. L. Kane and E. J. Mele,
“Quantum spin Hall effect in graphene,”
https://doi.org/10.1103/PhysRevLett.95.226801
Phys. Rev. Lett. 95, 226801 (2005).
Kane:2005b
C. L. Kane and E. J. Mele,
“Z_2 topological order and the quantum spin Hall effect,”
https://doi.org/10.1103/PhysRevLett.95.146802
Phys. Rev. Lett. 95, 146802 (2005).
Bernevig:2006
B. A. Bernevig and S.-C. Zhang,
“Quantum spin Hall effect,”
https://doi.org/10.1103/PhysRevLett.96.106802
Phys. Rev. Lett. 96, 106802 (2006).
Eisenstein:2004
J. P. Eisenstein and A. H. MacDonald,
“Bose-Einstein condensation of excitons in bilayer electron systems,”
https://doi.org/10.1038/nature03081
Nature (London) 432, 691-694 (2004).
Miransky:2015
V. A. Miransky and I. A. Shovkovy,
“Quantum field theory in a magnetic field: From quantum chromodynamics to graphene and Dirac semimetals,”
https://doi.org/10.1016/j.physrep.2015.02.003
Phys. Rep. 576, 1-209 (2015).
Furukawa:2014
S. Furukawa and M. Ueda,
“Global phase diagram of two-component Bose gases in antiparallel magnetic fields,”
https://doi.org/10.1103/PhysRevA.90.033602
Phys. Rev. A 90, 033602 (2014).
Feng:2015
B. Feng, D.-f. Hou, and H.-c. Ren,
“Magnetic and inverse magnetic catalysis in the Bose-Einstein condensation of neutral bound pairs,”
https://doi.org/10.1103/PhysRevD.92.065011
Phys. Rev. D 92, 065011 (2015).
Iskin:2015
M. Iskin,
“Attractive Hofstadter-Hubbard model with imbalanced chemical and vector potentials,”
https://doi.org/10.1103/PhysRevA.91.053606
Phys. Rev. A 91, 053606 (2015).
Altland-Simons
See, for example, A. Altland and B. D. Simons,
Condensed Matter Field Theory, 2nd ed. (Cambridge University Press, Cambridge, UK, 2010).
Landau-Lifshitz
See, for example, L. D. Landau and L. M. Lifshitz,
Quantum Mechanics, 3rd ed. (Butterworth-Heinemann, Oxford, UK, 1977).
Marini:1998
M. Marini, F. Pistolesi, and G. C. Strinati,
“Evolution from BCS superconductivity to Bose condensation: Analytic results for the crossover in three dimensions,”
https://doi.org/10.1007/s100510050165
Eur. Phys. J. B 1, 151-159 (1998).
Fisher:1989
M. P. A. Fisher, P. B. Weichman, G. Grinstein, and D. S. Fisher,
“Boson localization and the superfluid-insulator transition,”
https://doi.org/10.1103/PhysRevB.40.546
Phys. Rev. B 40, 546-570 (1989).
Sachdev
See, for example, S. Sachdev,
Quantum Phase Transitions, 2nd ed. (Cambridge University Press, Cambridge, UK, 2011).
Fulde:1964
P. Fulde and R. A. Ferrell,
“Superconductivity in a strong spin-exchange field,”
https://doi.org/10.1103/PhysRev.135.A550
Phys. Rev. 135, A550-A563 (1964).
Larkin:1965
A. I. Larkin and Yu. N. Ovchinnikov,
“Inhomogeneous state of superconductors,”
Sov. Phys. JETP 20, 762-769 (1965).
Conduit:2008
G. J. Conduit, P. H. Conlon, and B. D. Simons,
“Superfluidity at the BEC-BCS crossover in two-dimensional Fermi gases with population and mass imbalance,”
https://doi.org/10.1103/PhysRevA.77.053617
Phys. Rev. A 77, 053617 (2008).
Yin:2014
S. Yin, J.-P. Martikainen, and P. Törmä,
“Fulde-Ferrell states and Berezinskii-Kosterlitz-Thouless phase transition in two-dimensional imbalanced Fermi gases,”
https://doi.org/10.1103/PhysRevB.89.014507
Phys. Rev. B 89, 014507 (2014).
Sheehy:2015
D. E. Sheehy,
“Fulde-Ferrell-Larkin-Ovchinnikov state of two-dimensional imbalanced Fermi gases,”
https://doi.org/10.1103/PhysRevA.92.053631
Phys. Rev. A 92, 053631 (2015).
Toniolo:2017
U. Toniolo, B. Mulkerin, X.-J. Liu, and H. Hu,
“Larkin-Ovchinnikov superfluidity in a two-dimensional imbalanced atomic Fermi gas,”
https://doi.org/10.1103/PhysRevA.95.013603
Phys. Rev. A 95, 013603 (2017).
|
http://arxiv.org/abs/1701.07494v1 | 20170125213718 | Non-Stoquastic Interactions in Quantum Annealing via the Aharonov-Anandan Phase | [
"Walter Vinci",
"Daniel A. Lidar"
] | quant-ph | [
"quant-ph"
] | |
http://arxiv.org/abs/1701.08092v5 | 20170127155551 | Double-sided probing by map of Asplund's distances using Logarithmic Image Processing in the framework of Mathematical Morphology | [
"Guillaume Noyel",
"Michel Jourlin"
] | cs.CV | [
"cs.CV",
"math.NA"
] |
Guillaume Noyel, and Michel Jourlin
International Prevention Research Institute, 95 cours Lafayette, 69006 Lyon, France
Lab. H. Curien, UMR CNRS 5516, 18 rue Pr. B. Lauras, 42000 St-Etienne, France
<www.i-pri.org>
Double-sided probing by map of Asplund's distances using Logarithmic Image Processing in the framework of Mathematical Morphology
Guillaume Noyel^1 and Michel Jourlin^1,2
December 30, 2023
=================================================================================================================================
We establish the link between Mathematical Morphology and the map of Asplund's distances between a probe and a grey scale function, using the Logarithmic Image Processing scalar multiplication. We demonstrate that the map is the logarithm of the ratio between a dilation and an erosion of the function by a structuring function: the probe. The dilations and erosions are mappings from the lattice of the images into the lattice of the positive functions. Using a flat structuring element, the expression of the map of Asplund's distances can be simplified with a dilation and an erosion of the image; these mappings stay in the lattice of the images. We illustrate our approach by an example of pattern matching with a non-flat structuring function.
§ INTRODUCTION
Asplund's metric is a useful method of pattern matching based on a double-sided probing, i.e. a probing by a greatest lower bound probe and a least upper bound probe. It was originally defined for binary shapes, or sets <cit.>, by using the smallest homothetic shape (probe) containing the shape to be analysed and the greatest homothetic probe contained by the shape. Jourlin et al. <cit.> have extended this metric to functions and to grey-level images in the framework of the Logarithmic Image Processing (LIP) <cit.> using a multiplicative or an additive LIP law <cit.>. Then, Asplund's metric has been extended to colour and multivariate images by a marginal approach in <cit.> or by a spatio-colour (i.e. vectorial <cit.>) approach in <cit.>.
Other approaches of double-sided probing have been previously defined in the framework of Mathematical Morphology <cit.>. The well-known hit-or-miss transform <cit.> allows to extract all the pixels such that the first set of a structuring element fits the object while the second set misses it (i.e. fits its background). An extension based on two operations of dilation (for grey level images) has been proposed in <cit.>. It consists of a unique structuring element, which is used in the two dilations in order to match the signal from above and from below.
Banon et al. <cit.> use two structuring elements obtained by two translations of a unique template along the grey level axis. They use an erosion and an anti-dilation to count the pixels whose values are in between the two structuring elements.
Odone et al. <cit.> use an approach inspired by the computation of the Hausdorff distance. They consider a grey level image as a tridimensional (3D) graph. They dilate by a 3D ball a template in order to compute a 3D “interval”. Then, for any point of the image, they translate vertically the “interval” in order to contain the maximum number of points of the function and they count this number.
Barat et al. <cit.> present a unified framework for these last three methods. They show that they correspond to a neighbourhood of functions (i.e. a tolerance tube) with a different metric for each method. Their topological approach is named virtual double-sided image probing (VDIP) and they defined it as a difference between a grey-scale dilation and an erosion. For pattern matching, only the patterns which are in the tolerance tube are selected. It is a metric defined on the equivalence class of functions according to an additive grey level shift.
In <cit.>, Jourlin et al. have introduced the logarithmic homothetic defined according to the LIP multiplication. This compensates for the lighting variation due to a multiplicative effect, i.e. a thickening or a thinning of the object crossed by the light.
In the current paper, the important novelty introduced is the link between the map of Asplund's metrics defined in the LIP multiplicative framework and the operations of Mathematical Morphology. We will show that the map of Asplund's distances in the LIP multiplicative framework is the logarithm of the ratio between a dilation and an erosion.
This gives access to many other notions well defined in the corpus of Mathematical Morphology.
The paper is organised as follows: 1) a reminder of the main notions (LIP, Asplund's metrics, fundamental operations and framework of Mathematical Morphology), 2) the demonstration of the link between the map of Asplund's distances and Mathematical Morphology for flat structuring element (se) and for non-flat ones and 3) an illustration of pattern matching with Asplund's metric.
§ PREREQUISITES
In the current section, we remind the different mathematical notions and frameworks to be used: LIP model, Asplund's metric and the basis of Mathematical Morphology.
§.§ Logarithmic Image Processing (LIP)
The LIP model, created by Jourlin et al. <cit.>, is a mathematical framework for image processing based on the physical law of transmittance. It is perfectly suited to process images acquired with transmitted light (when the object is located between the source and the sensor) but also with reflected light, due to the consistency of the model with human vision <cit.>.
The mathematical operations performed using the LIP model are consistent with the physical principles of image formation. Therefore the values of an image defined in [0, M[ stay in this bounded domain. For 8 bits images M=256 and the 256 grey levels are in the range of integers [0, ..., 255].
A grey scale image f is a function defined on a domain D ⊂ ℝ^N with values in T = [0,M[, M ∈ ℝ. f is a member of the space I = T^D.
Due to the link with the transmittance law: T_f = 1-f/M, the grey-scale is inverted in the LIP framework, 0 corresponds to the white extremity of the grey scale, when no obstacle is located between the source and the sensor, while the other extremity M corresponds to the black value, when the source cannot be transmitted through the obstacle.
In the LIP sense, the addition of two images corresponds to the superposition of two obstacles (objects) generating f and g:
f ⨹ g = f + g - f·g/M
From this law, we deduce the LIP multiplication of f by a scalar λ ∈ ℝ:
λ ⨻ f = M - M( 1 - f/M )^λ
It corresponds to a thickness change of the observed object in the ratio λ. If λ>1, the thickness is increased and the image becomes darker than f, while if λ∈ [0,1[, the thickness is decreased and the image becomes brighter than f.
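For illustration, both laws are one-liners in Python (a sketch of ours, for grey levels scaled to [0, M[):

import numpy as np

M = 256.0  # 8-bit images

def lip_add(f, g):
    # LIP addition f ⨹ g = f + g - f*g/M (superposition of two obstacles)
    return f + g - f * g / M

def lip_mult(lam, f):
    # LIP scalar multiplication lam ⨻ f = M - M*(1 - f/M)**lam
    return M - M * (1.0 - f / M)**lam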
The LIP laws satisfy strong mathematical properties. Let ℱ(D,]-∞,M[) be the set of functions defined on D with values in ]-∞,M[. We equip it with the two logarithmic laws ⨹ and ⨻, and (ℱ(D,]-∞,M[),⨹,⨻) becomes a real vector space. (I,⨹,⨻) is the positive cone of this vector space <cit.>.
There exists a colour version of the LIP model <cit.>.
The LIP framework has been successfully applied to numerous problems for industry, medical applications, digital photography, etc. It gives access to new notions of contrast and metrics which take into account the variation of light, for example the Asplund's metric for functions.
§.§ Asplund's metric for functions using the LIP multiplicative law
Let us remind the novel notion of Asplund's metric defined in <cit.> for functions in place of sets. It consists of using the logarithmic homothetic λ ⨻ f.
Let T* = ]0,M[ and the space of positive images be I* = (T*)^D.
Asplund's metric
Given two images f, g ∈ I*, g is chosen as the probing function for example, and we define the two
numbers: λ = inf{α, f ≤ α ⨻ g} and μ = sup{β, β ⨻ g ≤ f}.
The corresponding “functional Asplund's metric” d_As^⨻ (with the LIP multiplication) is:
d_As^⨻(f,g) = ln( λ / μ )
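Numerically, λ and μ follow from the pointwise ratio of the logarithmic transforms f̃ = ln(1 - f/M) and g̃ = ln(1 - g/M); a minimal sketch of ours for two images of the same size with values in ]0,M[:

import numpy as np

def asplund_dist(f, g, M=256.0):
    ratio = np.log(1.0 - f / M) / np.log(1.0 - g / M)  # pointwise f~ / g~
    lam, mu = ratio.max(), ratio.min()                 # least upper / greatest lower bound
    return np.log(lam / mu)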
From a mathematical point of view <cit.>, d_As^⨻ is a metric if the images f, g ∈ I* are replaced by their equivalence classes f^⨻ = {g / ∃ k>0, k ⨻ g = f} and g^⨻.
The relation (∃ k>0, k ⨻ g = f) is clearly an equivalence relation, written fRg, because it satisfies the three properties: i) reflexivity ∀ f ∈ I*, fRf, ii) symmetry ∀ (f,g) ∈ (I*)^2, fRg ⇔ gRf and iii) transitivity ∀ (f,g,h) ∈ (I*)^3, (fRg and gRh) ⇒ fRh. Let us now give a rigorous definition of the multiplicative Asplund's metric using the space of equivalence classes I^⨻
∀ (f^⨻,g^⨻) ∈ (I^⨻)^2 , d_As^⨻(f^⨻,g^⨻) = d_As^⨻(f_1,g_1)
d_As^⨻(f_1,g_1) is defined by eq. <ref> between two elements f_1 and g_1 of the equivalence classes f^⨻ and g^⨻.
The demonstration of the metric properties is given in the appendix (section <ref>).
Several examples have shown the interest of using Asplund's metric for pattern matching between a template function t: D_t → T* and the function f. For each point x of D, the distance d_As^⨻(f_|D_t(x),t) is computed in the neighbourhood D_t(x) centred in x, with f_|D_t(x) being the restriction of f to D_t(x). Therefore, one can define a map of Asplund's distances <cit.>.
Map of Asplund's distances
Given a grey-level image f ∈ I* and a probe t ∈ (T*)^D_t, t>0, their map of Asplund's distances is:
As_t^⨻ f : {[ I* × (T*)^D_t → (ℝ^+)^D; (f,t) → As_t^⨻ f(x) = d_As^⨻(f_|D_t(x),t); ].
D_t(x) is the neighbourhood associated to D_t centred in x ∈ D.
One can notice that the template t is acting like a structuring element.
§.§ Short reminder on Mathematical Morphology
In this subsection we give a reminder of the basic notions used in Mathematical Morphology (MM) <cit.>.
MM is defined in complete lattices <cit.>.
Complete lattice
Given a set ℒ and a binary relation ≤ defining a partial order on ℒ, we say that (ℒ,≤) is a partially ordered set or poset. ℒ is a complete lattice if any non-empty subset 𝒳 of ℒ has a supremum (a least upper bound) and an infimum (a greatest lower bound). The infimum and the supremum will be denoted, respectively, by ∧𝒳 and ∨𝒳. Two elements of the complete lattice ℒ are important: the least element O and the greatest element I.
The set of images from D to [0,M], Ī = [0,M]^D, is a complete lattice with the partial order relation ≤, by inheritance of the complete lattice structure of [0,M]. The least and greatest elements are the constant functions f_0 and f_M whose values are equal respectively to 0 and M for all elements of D. The supremum and infimum are respectively, for any 𝒳 ⊂ Ī
[ (∧_Ī𝒳)(x) = ∧_[0,M]{ f(x) : f ∈ 𝒳 }, x ∈ D; (∨_Ī𝒳)(x) = ∨_[0,M]{ f(x) : f ∈ 𝒳 }, x ∈ D. ]
The set of functions ℝ̄^D is also a complete lattice, with ℝ̄ = ℝ ∪ {-∞, +∞} and the usual order ≤, like the set of all subsets of D, written 𝒫(D), with the set inclusion ⊂.
Erosion, dilation, anti-erosion, anti-dilation <cit.>
Given two complete lattices ℒ_1 and ℒ_2, a mapping ψ ∈ ℒ_2^ℒ_1 is
* an erosion iff ∀𝒳 ⊂ ℒ_1, ψ( ∧𝒳 ) = ∧ψ( 𝒳 ), then we write ε = ψ;
* a dilation iff ∀𝒳 ⊂ ℒ_1, ψ( ∨𝒳 ) = ∨ψ( 𝒳 ), then we write δ = ψ;
* an anti-erosion iff ∀𝒳 ⊂ ℒ_1, ψ( ∧𝒳 ) = ∨ψ( 𝒳 ), then we write ε^a = ψ;
* an anti-dilation iff ∀𝒳 ⊂ ℒ_1, ψ( ∨𝒳 ) = ∧ψ( 𝒳 ), then we write δ^a = ψ.
As the definitions of these mappings apply even to the empty subset of ℒ_1, we have: ε(I)=I, δ(O)=O, ε^a(I)=O and δ^a(O)=I.
Erosions and dilations are increasing mappings: ∀𝒳, 𝒴 ⊂ ℒ_1, 𝒳 ≤ 𝒴 ⇒ ψ(𝒳) ≤ ψ(𝒴), while anti-erosions and anti-dilations are decreasing mappings: ∀𝒳, 𝒴 ⊂ ℒ_1, 𝒳 ≤ 𝒴 ⇒ ψ(𝒴) ≤ ψ(𝒳).
Structuring element <cit.>
Let us define a pulse function i_x,t ∈ Ī of level t at the point x:
i_x,t(x)=t; i_x,t(y)=0 if x ≠ y.
The function f can be decomposed into the supremum of its pulses f = ∨{i_x,f(x),x ∈ D }.
It is easy to define dilations and erosions which are not translation-invariant (in the domain D). Let W be a map Ī → Ī associating to each pulse function i_x,t ∈ Ī a (functional) “window” W(i_x,t). Then the operator δ_W: Ī → Ī defined by:
δ_W(f) = ∨{ W(i_x,f(x)) , x ∈ D }
is a dilation. When all “windows” W(i_x,f(x)) are translation invariant (in D), they take the form W(i_x,f(x)) = B(x), with B(x) = B_x being a structuring element (or structuring function).
In this case the previously defined dilation δ and erosion ε, in the same lattice (Ī,≤), can be simplified:
[ (δ_B(f))(x) = ∨{ f(x - h) + B(h), h ∈ D_B } = (f ⊕ B) (x); (ε_B(f))(x) = ∧{ f(x + h) - B(h), h ∈ D_B } = (f ⊖ B) (x); ]
D_B ⊂ D is the definition domain of the structuring function B: D_B → ℝ. The symbols ⊕ and ⊖ represent the extension to functions <cit.> of Minkowski operations between sets <cit.>.
Notice: in the case of a flat structuring element with its values equal to zero (i.e. ∀ x ∈ D_B, B(x)=0), we have δ_B(f)(x) = ∨{ f(x - h) , h ∈ D_B }= δ_D_B(f)(x) and ε_B(f)(x) = ∧{ f(x + h) , h ∈ D_B }= ε_D_B(f)(x).
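As an illustration, these two operations can be coded directly from their formulas; a 1D sketch of ours, with f a NumPy array, a periodic boundary via np.roll for brevity, and the structuring function B given as a dict mapping offsets h ∈ D_B to B(h):

import numpy as np

def dilate(f, B):
    # (f ⊕ B)(x) = sup_h { f(x - h) + B(h) }
    out = np.full_like(f, -np.inf, dtype=float)
    for h, bh in B.items():
        out = np.maximum(out, np.roll(f, h) + bh)   # roll by +h gives f(x - h)
    return out

def erode(f, B):
    # (f ⊖ B)(x) = inf_h { f(x + h) - B(h) }
    out = np.full_like(f, np.inf, dtype=float)
    for h, bh in B.items():
        out = np.minimum(out, np.roll(f, -h) - bh)  # roll by -h gives f(x + h)
    return out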
§ MAP OF ASPLUND'S DISTANCES AND MATHEMATICAL MORPHOLOGY
We now link the map of Asplund's distances with Mathematical Morphology.
Given ℝ^+ = [0,+∞], a complete lattice with the natural order ≤, the map of the least upper bounds λ_B between the probe B ∈ (T*)^D_B and the function f ∈ Ī is defined as:
λ_B f : {[ Ī × (T*)^D_B → (ℝ^+)^D; (f,B) → λ_B f(x) = ∧{α(x), f(x+h) ≤ α(x) ⨻ B(h), h ∈ D_B }.; ].
The map of the greatest lower bounds μ_B between the probe B ∈ (T*)^D_B and the function f ∈ Ī is defined as:
μ_B f : {[ Ī × (T*)^D_B → (ℝ^+)^D; (f,B) → μ_B f(x) = ∨{β(x), β(x) ⨻ B(h) ≤ f(x+h) , h ∈ D_B }.; ].
The two mappings λ_B and μ_B are defined between the two complete lattices ℒ_1 = (Ī, ≤) and ℒ_2 = ((ℝ^+)^D,≤) with the natural order ≤.
Therefore, the least element of (ℒ_1, ≤) corresponds to the constant function equal to zero, O = f_0, and the greatest element is the constant function equal to M, I = f_M.
Using the equations <ref> and <ref> the map of Asplund's distances (eq. <ref>) can be simplified:
As_B^⨻ f = ln( λ_B f / μ_B f ), with f > 0.
In addition, ∀ x ∈ D, ∀ h ∈ D_B, ∀ α ∈ ℝ^+, we have:
[ α(x) ⨻ B(h) ≥ f(x+h) ⇔ M - M( 1 - B(h)/M )^α(x) ≥ f(x+h), (from eq. <ref>); ⇔ α(x) ≥ ln( 1 - f(x+h)/M )/ln( 1 - B(h)/M ), because ( 1 - B(h)/M ) ∈ ]0,1[. ]
We define f̃ = ln( 1 - f/M ) and, similarly, B̃ = ln( 1 - B/M ). Using equation <ref>, equation <ref> becomes:
λ_B f = ∧{α(x), α(x) ≥ f̃(x+h)/B̃(h) , h ∈ D_B }
= ∨{ f̃(x+h)/B̃(h) , h ∈ D_B }.
In a similar way:
μ_B f = ∨{β(x), β(x) ≤ f̃(x+h)/B̃(h) , h ∈ D_B }
= ∧{ f̃(x+h)/B̃(h) , h ∈ D_B }.
§.§ Case of a flat structuring element
In the case of a flat structuring element B = B_0 ∈ T* (∀ x ∈ D_B, B(x) = B_0), with B̃_0 = ln( 1 - B_0/M ), the equations <ref> and <ref> can be simplified.
[ λ_B_0 f = (1/B̃_0) ∧{ f̃(x+h) , h ∈ D_B }, because B̃_0 < 0; = (1/B̃_0) ln( 1 - ∨{ f(x-h) , -h ∈ D_B }/M ); = (1/B̃_0) ln( 1 - δ_D̆_B f / M ) ]
Notice: the infimum ∧ is changed into a supremum ∨ because the mapping x → ln(1-x/M) is continuous and decreasing.
The reflected (or transposed) domain of D_B is D̆_B = {-h, h ∈ D_B} and the reflected structuring function B̆ is defined by the reflection of its definition domain: ∀ x ∈ D̆_B, B̆(x) = B(-x) <cit.>.
Similarly:
[ μ_B_0 f = (1/B̃_0) ∨{ f̃(x+h) , h ∈ D_B }, because B̃_0 < 0; = (1/B̃_0) ln( 1 - ∧{ f(x+h) , h ∈ D_B }/M ); = (1/B̃_0) ln( 1 - ε_D_B f / M ) . ]
With the equations <ref> and <ref>, the map of Asplund's distances (eq. <ref>) becomes:
As_B_0^⨻ f = ln[ ln( 1 - δ_D̆_B f / M ) / ln( 1 - ε_D_B f / M ) ] , with f > 0.
This important result shows that, with a flat probe, the map of Asplund's distances can be computed using logarithms and operations of morphological erosion and dilation of an image. From an implementation point of view, the programming of the map of Asplund's distances becomes easier, because the majority of image processing libraries contain morphological operations.
Notice: by replacing the dilation and erosion by rank-filters <cit.> one can compute the map of Asplund's distances with a tolerance <cit.>.
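With a flat probe the whole map thus reduces to two morphological filters and two logarithms; a minimal sketch of ours with SciPy (the flip of the footprint implements the reflected domain D̆_B; depending on the library's reflection convention it may need adjusting):

import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def asplund_map_flat(f, footprint, M=256.0):
    # f with values in ]0, M[; footprint: boolean mask of the flat probe domain D_B
    dil = grey_dilation(f, footprint=footprint[::-1, ::-1])  # dilation by the reflected domain
    ero = grey_erosion(f, footprint=footprint)
    return np.log(np.log(1.0 - dil / M) / np.log(1.0 - ero / M))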
§.§ General case: a structuring function
Using a general structuring function in the equations <ref> and <ref>, the map of Asplund's distances is expressed as:
As_B^⨻ f = ln( λ_B f / μ_B f ) =
ln( ∨{ f̃(x+h)/B̃(h) , h ∈ D_B } / ∧{ f̃(x+h)/B̃(h) , h ∈ D_B } ) , with f > 0.
Let us study the properties of the mappings λ_B, μ_B ∈ ℒ_2^ℒ_1. ∀ f, g ∈ Ī:
[ λ_B ( f ∨ g ) = ∨{ (f ∨ g)^∼(x+h)/B̃(h) , h ∈ D_B }; = ∨{ (f̃ ∧ g̃)(x+h)/B̃(h) , h ∈ D_B }, because x → ln(1-x/M) is decreasing; = ∨{ f̃(x+h)/B̃(h) ∨ g̃(x+h)/B̃(h) , h ∈ D_B }, because B̃(h) < 0; = [ ∨{ f̃(x+h)/B̃(h) , h ∈ D_B } ] ∨ [ ∨{ g̃(x+h)/B̃(h) , h ∈ D_B } ]; = λ_B ( f ) ∨ λ_B ( g ). ]
According to definition <ref>, 2 (p. pre:def_morpho_base), λ_B is a dilation.
In addition,
λ_B(O) = λ_B(f_0) = ∧{ α(x), α(x) ≥ 0̃(x+h) / B̃(h) , h ∈ D_B } = 0 = O.
Similarly, we have:
μ_B ( f ∧ g ) = ∧{ (f ∧ g)^∼(x+h) / B̃(h) , h ∈ D_B }
= ∧{ f̃(x+h) ∨ g̃(x+h) / B̃(h) , h ∈ D_B }, because the mapping x → ln(1-x/M) is decreasing
= [ ∧{ f̃(x+h) / B̃(h) , h ∈ D_B } ] ∧ [ ∧{ g̃(x+h) / B̃(h) , h ∈ D_B } ], because B̃(h) < 0
= μ_B ( f ) ∧ μ_B ( g ).
According to definition <ref>, 1 (p. pre:def_morpho_base), μ_B is an erosion.
In addition,
μ_B(I) = μ_B(f_M) = ∨{ β(x), β(x) ≤ f̃_M(x+h) / B̃(h) , h ∈ D_B } = + ∞ = I.
Therefore, the map of Asplund's distances is the logarithm of the ratio between a dilation and an erosion of the function f by the structuring function B. The map of the least upper bounds λ_B is a dilation and the map of the greatest lower bounds μ_B is an erosion. The two maps are defined between the lattice ℒ_1 = (ℐ, ≤) and the lattice ℒ_2 = ((ℝ^+)^D, ≤) with their respective natural orders.
§ ILLUSTRATION
In figure <ref> (a), we extract a tile (i.e. the probe or the structuring function) in an image f and we look for similar ones in a darkened image, f^d, obtained by means of a LIP multiplication by 0.3. Physically, this corresponds to an object with a stronger light absorption. Importantly, the probe has a non-convex domain shape and is not flat. We compute the map of Asplund's distances between the probe B and the image f^d with a tolerance, As_B,p^f^d, as introduced in <cit.>. This metric, robust to noise, is computed by discarding p = 30 % of the points which are the closest to the least upper bounds and to the greatest lower bounds. The tiles are located at the local minima of the distance map, which are extracted by a threshold of 0.7 (fig. <ref> (b)). The tiles similar to the probe, according to the Asplund distance, have been correctly detected (fig. <ref> (c)). Notice that the domain of the probe is slightly smaller than the domain of the tiles.
§ CONCLUSION
In the current paper, we have shown that the map of Asplund's distances between a probe and a function using the LIP multiplication is linked with morphological operations. The probe corresponds to a structuring function and the map of Asplund's distances is the logarithm of the ratio between a dilation and an erosion of the function by the structuring function in the lattice of positive functions ((ℝ^+)^D, ≤). The dilation is the map of the least upper bounds λ_B f, between the function f and the probe B, while the erosion is the map of the greatest lower bounds μ_B f.
The dilation and the erosion are mappings between the complete lattice of the images (ℐ, ≤) and the lattice ((ℝ^+)^D, ≤) with the natural order. When using a flat structuring element, the expression of the map of Asplund's distances can be simplified with a dilation and an erosion of the image within the same lattice of the images (ℐ, ≤). An example of pattern matching has been presented with a non-flat structuring function.
The obtained results set the pattern matching approach by Asplund's distances in the well established framework of Mathematical Morphology. The current reasoning can be extended to the double-sided probing by Asplund's distances for colour and multivariate images using the LIP multiplicative or the LIP additive framework <cit.>. This will be presented in a coming paper.
§ APPENDIX
Let us demonstrate that the Asplund metric d_As is a metric on the space of equivalence classes of ℐ (functions equal up to a LIP multiplication by a scalar). In order to be a metric on (ℐ × ℐ) → ℝ^+, d_As must satisfy the four following properties:
* (Positivity):
∀ f ≠ g ∈ ℐ, ∀ x ∈ D, λ(x) > μ(x) (def. <ref>, p. def:asplund_metric), because f, g > 0
⇒ λ > μ, because ℝ is an ordered set with the order ≤
⇒ ∀ f ≠ g ∈ ℐ, d_As(f,g) > 0.
* (Axiom of separation):
d_As(f,g) = 0 ⇒ λ = μ, and (def. <ref>) λ g ≥ f ≥ μ g ⇒ f = λ g ⇒ f = g in the space of equivalence classes.
Reciprocally:
∀ f, g ∈ ℐ, f = g ⇒ λ g ≥ f ≥ μ g (def. <ref>) ⇒ λ = 1 = μ
⇒ λ = μ ⇒ d_As(f,g) = 0.
Eq. <ref> and <ref> ⇒ { ∀ f, g ∈ ℐ, d_As(f,g) = 0 ⇔ f = g }.
* (Triangle inequality):
Let us define: d_As(f,g) = ln(λ_1/μ_1), d_As(g,h) = ln(λ_2/μ_2) and d_As(f,h) = ln(λ_3/μ_3).
We have
d_As(f,g) + d_As(g,h) = ln( λ_1 λ_2 / μ_1 μ_2 ).
Def. <ref> ⇒ λ_1 = inf{k_1: ∀ x, k_1 g(x) ≥ f(x)}, λ_2 = inf{k_2: ∀ x, k_2 h(x) ≥ g(x)}, λ_3 = inf{k_3: ∀ x, k_3 h(x) ≥ f(x)}.
Since k_1 g ≥ f and k_2 h ≥ g imply (k_1 k_2) h ≥ f, every product k' = k_1 × k_2 lies in the set over which λ_3 is the infimum, hence
λ_1 λ_2 ≥ λ_3 , with λ_1, λ_2, λ_3 > 0.
In the same way:
μ_1 μ_2 ≤ μ_3 , with μ_1, μ_2, μ_3 > 0.
Eq. <ref>, <ref>, <ref> ⇒ λ_1 λ_2 / μ_1 μ_2 ≥ λ_3/μ_3
⇒ ∀ f, g, h ∈ ℐ, d_As(f,h) ≤ d_As(f,g) + d_As(g,h).
* (Axiom of symmetry):
Let us define: d_As(f,g) = ln(λ_1/μ_1), d_As(g,f) = ln(λ_2/μ_2).
Def. <ref> ⇒
λ_1 = inf{ k: ∀ x, g(x) ≥ (1/k) f(x) }, because k > 0
⇒ 1/λ_1 = sup{ k': ∀ x, g(x) ≥ k' f(x) }
⇒ 1/λ_1 = μ_2.
In the same way, we have 1/μ_1 = λ_2.
Therefore, ∀ f, g ∈ ℐ, d_As(f,g) = ln(λ_1/μ_1) = ln(λ_2/μ_2) = d_As(g,f).
|
http://arxiv.org/abs/1701.07753v1 | 20170126161059 | Black hole accretion in gamma ray bursts | [
"Agnieszka Janiuk",
"Michal Bejger",
"Petra Sukova",
"Szymon Charzynski"
] | astro-ph.HE | [
"astro-ph.HE"
] |
§ INTRODUCTION
Gamma-ray bursts (GRBs) are energetic, transient events observed
in the sky at high energies. Their known cosmological origin
requires a physical process that causes them to be a cosmic explosion of
great power. Proposed mechanisms involve the creation of a black hole (BH) in a
cataclysmic event. This may either result from the collapse of a massive
rotating star, or from the merger of a compact-object binary, e.g. two neutron stars (NSs) or a BH and a NS. The so-called `central engine' of this
process is composed of a hot and dense accretion disk with a hyper-Eddington
accretion rate (up to a few M_⊙ s^-1) near a spinning BH.
In the hyperaccreting disks that are present around the BHs in the central engine, the densities and temperatures are so high that the equation of state (EOS) can no longer be assumed to be that of an ideal gas.
The pressure and chemical balance is governed by the nuclear reactions that
take place among the free protons, neutrons, and electron-positron pairs abundant in the plasma. The charged particles must satisfy the total
neutrality condition.
In the β reactions, neutrinos are produced in three flavors and are subject to absorption and scattering processes. The partially trapped neutrinos
contribute to the pressure in the plasma, together with nucleons, electron-positron pairs, radiation, and synthesized heavier particles (Helium).
Numerical computations of the structure and evolution of the
accretion flows in the gamma-ray burst engine were historically first carried out
with the use of steady-state, and later time-dependent, models.
These models were axially and vertically averaged, and used the classical α-parameter prescription for the viscosity
(Popham et al. 1999; Di Matteo et al. 2002; Kohri et al. 2002, 2005; Chen & Beloborodov 2007; Reynoso et al. 2006; Janiuk et al. 2004; 2007; 2010).
More recently, accretion flows are described by fully relativistic
MHD computations (e.g., Barkov & Komissarov 2011; Janiuk et al. 2013).
This method has also been applied recently to describe
the central engine of a putative gamma ray burst which could be associated with
the event GW150914 (Janiuk et al. 2017). Here we report on this work,
and in addition, we also present new results on the nucleosynthesis of
heavy elements in the accretion flow in this engine.
§ ACCRETION FLOW IN THE GRB ENGINE
§.§ Equation of state
In the EOS, contributions to the pressure come from free nuclei and e^+-e^- pairs, helium, radiation and trapped
neutrinos:
P = P_ nucl+P_ He+P_ rad+P_ν,
where P_ nucl includes free neutrons, protons,
and the electron-positrons:
P_ nucl=P_ e-+P_ e++P_ n+P_ p
with
P_i = (2√(2) / 3π^2) [(m_i c^2)^4 / (ħ c)^3] β_i^5/2 [ F_3/2(η_i,β_i) + (1/2) β_i F_5/2(η_i,β_i) ].
Here, F_ k are the Fermi-Dirac integrals of the order k, and
η_ e, η_ p and η_ n are the reduced chemical
potentials, η_i = μ_i/kT
is the degeneracy
parameter, μ_i denoting the standard chemical
potential. Reduced chemical potential of positrons is
η_e+ = -η_e- - 2/β_e.
Relativity parameters are defined as β_ i=kT/m_ ic^2.
The term P_rad, describing the pressure of radiation,
scales with the temperature as T^4, and is added here together with the
pressure of neutrinos, P_ν, which is computed in the two-stream approximation (see details in Janiuk et al. 2007). The disk is fully opaque to photons; however, the radiation pressure is still several orders of magnitude smaller than the pressure of neutrinos.
This EOS is computed numerically by solving the balance of nuclear reactions (Yuan 2005; Janiuk et al. 2007; Janiuk & Yuan 2010; see also Lattimer & Swesty 1991; Setiawan et al. 2004).
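As an illustration, a numerical sketch of the pressure term of a single species follows; it assumes cgs units and one common convention for the relativistic Fermi-Dirac integral, F_k(η,β) = ∫_0^∞ x^k √(1 + βx/2)/(e^x-η + 1) dx, which may differ in detail from the convention of the original code.

```python
import numpy as np
from scipy.integrate import quad

HBARC = 3.1615e-17  # hbar*c in erg cm (assumed cgs convention)

def fermi_dirac(k, eta, beta):
    """F_k(eta, beta) with the relativistic correction factor sqrt(1 + beta*x/2)."""
    def integrand(x):
        return x**k * np.sqrt(1.0 + 0.5 * beta * x) / (np.exp(min(x - eta, 700.0)) + 1.0)
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val

def pressure_species(mc2, eta, beta):
    """P_i of the equation above; mc2 is the species rest energy in erg."""
    pref = 2.0 * np.sqrt(2.0) / (3.0 * np.pi**2) * mc2**4 / HBARC**3
    return pref * beta**2.5 * (fermi_dirac(1.5, eta, beta)
                               + 0.5 * beta * fermi_dirac(2.5, eta, beta))
```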
§.§ Neutrino cooling of the accretion flow in GRBs
In the hot and dense torus, with temperature of ∼10^11 K
and density > 10^10 g cm^-3, neutrinos are efficiently produced.
The main reactions that lead to their emission are the electron/positron capture on nucleons,
as well as the neutron decay. Their nuclear equilibrium is described by the following
reactions:
p + e^-→ n + ν_ e
p + ν̅_ e→ n + e^+
p + e^- + ν̅_e→ n
n + e^+→ p + ν̅_ e
n → p + e^- + ν̅_ e
n + ν_ e→ p + e^-,
and the forward and backward reaction rates are equal.
The reaction rates are given by the appropriate integrals
(Reddy, Prakash & Lattimer 1998; see also Appendix A in Janiuk et al. 2007).
Other neutrino emission processes are: electron-positron pair annihilation, bremsstrahlung and plasmon decay:
e^-+e^+→ν_ i+ν̅_ i
n+n → n+n+ν_ i+ν̅_ i
γ̃→ν_ e+ν̅_ e .
We calculate their rates numerically, with proper integrals over the distribution function of relativistic, partially degenerate species.
The neutrino cooling rate is finally given by the two-stream approximation, and includes
the scattering and absorptive optical depths for neutrinos of all three
flavors (Di Matteo et al. 2002):
Q^-_ν = (7/8) σ T^4 ∑_i=e,μ,τ (3/4) / [ (τ_a,ν_i + τ_s)/2 + 1/√(3) + 1/(3τ_a,ν_i) ] × (1/H) [erg s^-1 cm^-3].
The neutrino cooling is self-consistently computed from the balance of the nuclear reactions listed in Section 2.2, whose rates govern the values of
the absorptive optical depths for electron (all reactions) and muon or tau neutrinos (pair annihilation and bremsstrahlung reactions). Also, the neutrino scattering optical depth is computed, using the values of the proton and neutron number densities, and a Cabibbo angle of sin^2θ_C = 0.23 (see Yuan 2005, Janiuk et al. 2007). The emissivity is averaged over the disk height, instead of being integrated over the neutrinosphere, whose shape is rather complex.
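A minimal sketch evaluating the two-stream cooling rate above from given optical depths follows; the numerical input values are placeholders, not results from the paper.

```python
import numpy as np

SIGMA_SB = 5.6704e-5  # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4

def q_nu_minus(T, H, tau_a, tau_s):
    """Volume cooling rate; tau_a lists absorptive depths for nu_e, nu_mu, nu_tau."""
    s = sum(0.75 / ((ta + tau_s) / 2.0 + 1.0 / np.sqrt(3.0) + 1.0 / (3.0 * ta))
            for ta in tau_a)
    return (7.0 / 8.0) * SIGMA_SB * T**4 * s / H   # erg s^-1 cm^-3

# Example: electron neutrinos optically thick, mu/tau neutrinos thin.
print(q_nu_minus(T=1e11, H=1e6, tau_a=[5.0, 0.01, 0.01], tau_s=1.0))
```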
§.§ GR MHD scheme
Our simulations of the accretion flow dynamics in the GRB central engine
were performed with a 2D code HARM
(High Accuracy Relativistic Magnetohydrodynamics; Gammie et al. 2003).
Our version of the code was extended to include the microphysics, described in the previous Section.
The code provides a solver for the continuity and energy-momentum conservation equations:
(ρ u^μ)_;μ = 0
T^μ_ν;μ = 0
P = Kρ^γ = (γ-1) u
where:
T^μν = T^μν_gaz + T^μν_EM
T^μν_gaz = ρ h u^μ u^ν + pg^μν =(ρ + u + p) u^μ u^ν + pg^μν
T^μν_EM = b^2 u^μ u^ν + (1/2) b^2 g^μν - b^μ b^ν ; b^μ = u_ν ^*F^μν
where u^μ is the four-velocity of gas, u is the internal energy density,
b^μ = 1/2ϵ^μνρσu_ν F_ρσ is the magnetic four-vector,
and F is the electromagnetic stress tensor. The metric tensor is denoted by g^μν.
Within the force-free approximation, we have E^ν = u_μ F^μν = 0.
HARM-2D uses a conservative numerical scheme to obtain solutions of equations
of the following type:
∂_t 𝐔(𝐏) = -∂_i 𝐅^i(𝐏) + 𝐒(𝐏),
where 𝐔 denotes a vector of “conserved” variables (momentum, energy density,
number density, taken in the coordinate frame), 𝐏 is a vector of
'primitive' variables (rest mass density, internal energy),
𝐅^i are the fluxes, and 𝐒 is a vector of source terms.
In contrast to non-relativistic MHD, where 𝐏 → 𝐔 and 𝐔 → 𝐏 have closed-form solutions, in relativistic MHD,
𝐔(𝐏) is a complicated, nonlinear relation. The inversion
𝐏(𝐔) is therefore calculated numerically, at every time step.
The inverse transformation requires a solution to 5 non-linear equations,
done here by means of a multi-dimensional Newton-Raphson routine
(see, e.g., Noble et al. 2006 for details).
The procedure is simple, if the pressure-density relation is adiabatic.
However, for a general EOS, one must also compute dp/dw, dp/dv^2
and p(W, v^2,D), where w=p+u+ρ denotes the enthalpy, W=γ^2 w, D=√(-g)ρ u^t, and v^2 is the square of the fluid 3-velocity (Noble et al. 2006).
In our scheme, we have tabulated (p,u)(ρ,T) values and then interpolate over this table.
The Jacobian ∂𝐔/∂𝐏 is computed numerically. The conserved variables are then evolved in time.
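A sketch of such a table lookup follows; the grid ranges, the log spacing and the random stand-in table values are assumptions for illustration, not the HARM implementation.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

log_rho = np.linspace(6.0, 13.0, 141)   # log10 rho [g cm^-3], assumed range
log_T = np.linspace(9.0, 12.0, 121)     # log10 T [K], assumed range
p_tab = np.random.rand(141, 121)        # stand-ins for the tabulated p(rho,T)
u_tab = np.random.rand(141, 121)        # stand-ins for the tabulated u(rho,T)

p_of = RegularGridInterpolator((log_rho, log_T), p_tab)
u_of = RegularGridInterpolator((log_rho, log_T), u_tab)

def eos_lookup(rho, T):
    """Bilinear interpolation of (p, u) at a given density and temperature."""
    pt = np.array([np.log10(rho), np.log10(T)])
    return p_of(pt)[0], u_of(pt)[0]
```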
HARM-2D has been MPI-parallelized for the hydro-evolution, and also implemented with the shared memory hyperthreading for the EOS-table interpolation.
Our 2D simulations typically run up to t = 2000-3000 M (within a couple of weeks of computations on the local cluster, or a few days on a Cray XC40 with 1k nodes).
The computations are performed in the polar coordinate system, with a
resolution of 256 × 256 points in the r and θ directions.
§.§ Example simulation results
Parameters used in our computations are: BH mass, with a fiducial value of M=10 M_⊙, BH spin a=0.6-0.98, and accretion disk mass M_d≈ 1 M_⊙, which gives the scaling for the density in physical units,
needed for the EOS computations.
To describe the rotationally-supported torus around a spinning BH
we use the initial condition given by the equilibrium solution, first developed analytically by Fishbone & Moncrief (1976) and Abramowicz et al. (1978).
We adopt the polar coordinate system,
r-θ, and use the Kerr-Schild coordinates to avoid the coordinate singularity on the BH horizon. Initially, a poloidal configuration of the field is assumed, with the vector potential A_ϕ ∝ ρ/ρ_max, and an initial normalization of
P_gas/P_mag=50. Magnetic turbulence develops within the dense matter torus
and the field is advected with the infalling gas under the BH horizon.
For a rapidly-spinning BH, the open magnetic field lines form along the rotation axis, and the magnetically driven jets can be produced. Energy extracted from the rotating BH through the Blandford-Znajek process (Blandford & Znajek 1977)
then gives a substantial contribution to the jet luminosity. In addition, annihilation of neutrino-antineutrino pairs produced in the torus is yet another source of power for the jets. In Figure 1, we show an exemplary simulation: a snapshot from an evolved run, with the distribution of magnetic field lines, the magnetic-to-gas pressure ratio, and the neutrino emissivity in the region close to the black hole.
§ NUCLEOSYNTHESIS OF HEAVY ELEMENTS IN THE GRB ENGINE
In the astrophysical plasma, thermonuclear fusion occurs
due to the capture and release of particles (n, p, α, γ).
A reaction sequence produces further isotopes.
The nuclear reactions may proceed with one (decays, electron-positron capture, photodissociation), two (encounters), or three (triple-alpha reactions) nuclei.
Therefore, the change of abundance of the i-th isotope is in general given by:
Ẏ_i = ∑_j N_j^i λ_j Y_j +
∑_j,k N_j,k^i ρ N_A ⟨σ v⟩_j,k Y_j Y_k +
∑_j,k,l N_j,k,l^i ρ^2 N^2_A ⟨σ v⟩_j,k,l Y_j Y_k Y_l
Abundances of the isotopes are calculated under the assumption of nucleon number and charge conservation for a given density, temperature and electron fraction (T ≤ 1 MeV). Integrated cross-sections depending on temperature kT are determined with the Maxwell-Boltzmann or Planck statistics, and the background screening and degeneracy of nucleons must be taken into account.
The set of resulting non-linear differential equations is solved using the Euler method (Wallerstein et al. 1997).
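A toy sketch of such a forward-Euler integration of the network equation follows; the two-isotope network and all rate values are illustrative assumptions only, far smaller than a realistic network.

```python
import numpy as np

def ydot(Y, rho, N_A, lam, sigv):
    """dY/dt for a 2-isotope toy: decay 1 -> 0 plus the capture 0 + 0 -> 1."""
    r2 = rho * N_A * sigv * Y[0] * Y[0]        # two-body term of the network equation
    return np.array([lam * Y[1] - 2.0 * r2,    # species 0 loses two nuclei per capture
                     -lam * Y[1] + r2])

Y, dt = np.array([0.9, 0.1]), 1e-6
for _ in range(1000):                          # forward-Euler steps
    Y = Y + dt * ydot(Y, rho=1e8, N_A=6.022e23, lam=1.0, sigv=1e-26)
```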
In our computations, we used the thermonuclear reaction network code, http://webnucleo.org, and we computed the nuclear statistical equilibria established for the fusion reactions, for which the data were taken from the JINA reaclib online database (Hix & Meyer 2006).
This network is appropriate for temperatures below 1 MeV, which is the case at the outer radii of accretion disks in GRB engines.
The mass fraction of the isotopes is solved for converged profiles of density,
temperature and electron fraction in the disk.
We find that the free nuclei are present mostly below 100-200 gravitational radii, and the plasma there is rather neutron rich. The most abundant heavy elements formed in the accretion tori in GRBs are nickel, iron and cobalt, as well as argon, titanium, copper, zinc, silicon,
sulphur, chlorine, manganese, and vanadium.
As a consequence of the heavy-element formation, enhanced emission in the lower energy bands, i.e., ultraviolet or optical, due to the decay of the species, may accompany the GRB (the 'macronova'; e.g., Li & Paczynski 1998).
Also, radio flares, occurring months to years after the GRB, can be observed (Piran et al. 2012).
Furthermore, the decay of certain isotopes should be detectable via emission lines.
For the NuSTAR satellite, sensitive in the 5-80 keV range,
it is in principle possible to detect the photons from the decays of the unstable Ti, Co, Mn, Cu, Zn, Ga and Cr isotopes.
Additionally, the EPIC detector onboard XMM-Newton
could be able to see lines below 15 keV, e.g., ^45Ti, ^57Mn,
or ^57Co (Janiuk 2014).
In Figure 3, we plot an example result of the nucleosynthesis computations,
showing the volume-integrated abundance distribution of elements as a function of their mass number. The input was taken from the simulation of the GRB central engine, as designed to model the putative GRB that could possibly be associated with GW 150914 (see next Section).
§ GRAVITATIONAL WAVE SOURCE GW150914
Data from the Fermi Gamma-ray Burst Monitor (GBM) suggested that the recently registered gravitational wave event GW150914 (Abbott et al. 2016), a coalescing binary BH system, is potentially related to a GRB. The electromagnetic radiation in high energy (above 50 keV) originated from a weak transient source and lasted for about 1 second (Connaughton et al. 2016). Its localization is broadly consistent with the sky location of GW150914.
We speculate here on the possible scenario for the formation of a GRB accompanying the gravitational wave (GW) event. Even though the
presence of the GRB was recently questioned by other instruments' measurements (e.g., Greiner et al. 2016), we consider the possibility of a GRB coincident with the GW event from the BH merger to be worth exploring, in the context of future observations.
Our model invokes coalescence of a collapsing star's nucleus that forms a BH, with its companion BH in a binary system (Janiuk et al. 2013, 2017).
We find that the recoil velocity acquired by the final BH through the GW emission allows it to take only a small fraction of matter from the host star, provided specific configuration of the binary spin vectors with the orbital angular momentum. The GRB is produced on the cost of accretion of this remnant matter onto the final BH. The moderate spin of this BH accounts for the GRB jet to be powered by a weak neutrino emission rather than the Blandford-Znajek mechanism, and hence agrees with the low power observed in the Fermi GRB signal.
The semi-analytical treatment of the BH core collapse and the star's
envelope spin-up (see Janiuk et al. 2008, for details) is followed by the general-relativistic simulations of the binary BH merger. Then, the GRB central engine evolution is carried numerically, to find the energy output from the magnetized, neutrino-cooled torus that powers the electromagnetic burst.
§.§ Merger of binary BHs
The GW150914 event was interpreted as a merger of two BHs with
masses of 36^+5_-4 M_⊙ and 29^+4_-4 M_⊙ (Abbott et al. 2016). The final BH parameters were estimated
to be of 62^+4_-4 M_⊙ and 0.67^+0.05_-0.07 for its mass and
spin. Probabilities that the angles between spins and the normal to the orbital
plane are between 45^∘ and 135^∘ are about 0.8 for each component
BH. Spin magnitudes are constrained to be smaller than 0.7 and 0.8 at 90%
probability. Assumption of a strict co-alignment of spins with the
orbital angular momentum results in an upper limit of 0.2 and 0.3 for the
spins. The distance to the source was
410^+160_-180 Mpc, corresponding to a redshift of about z=0.09 (assuming
standard cosmology).
We made several runs for the BH masses and the range of spins constrained by the Advanced LIGO data. In Figure 3, we plot example results from the run with a BH mass ratio of 0.82, and aligned spins of 0.2 and 0.3. The waveform is plotted using the Weyl scalar Ψ_4. The final BH spin obtained in this simulation was 0.68.
The curvature of spacetime is described by the curvature (Riemann) tensor (20 independent components), which can be decomposed into the sum of the Ricci tensor (10 independent components) and the Weyl tensor (10 independent components).
For vacuum solutions of the Einstein equations the Ricci tensor vanishes, so the curvature is described by the Weyl tensor only.
In the Newman-Penrose formalism the components of the Weyl tensor are encoded as five complex Weyl scalars Ψ_0, ..., Ψ_4. These are different components of the curvature tensor, which have different physical interpretations.
Ψ_4 describes the outgoing radiation for an asymptotically flat spacetime.
These runs were performed with the Einstein Toolkit computational package
(Loeffler et al 2012).
The numerical methods used here are based on finite difference computations on a
gridded mesh and follow the inspiral, merger and ringdown phases of the binary black hole evolution, using the technique of the adaptive mesh
refinement. The Cartesian grid covers a volume of 48×48×48 M. In the AMR we use 7 levels of refinement, and the
resolution ranges from Δx = 2.0 M for the coarsest grid to Δx = 0.03125 M for the finest grid.
From the simulation runs, we constrained not only the spin of the merged black hole (then used as a parameter in the GRB engine model), but also the
velocity of the gravitational recoil. The latter turned out to be on the order of ∼700 km/s. Following Kocsis et al. (2012), we argue that accretion of the accumulated matter onto the final BH is likely triggered while it moves towards the circumbinary disk after the merger.
§.§ Weak GRB powered by neutrino emission
The Fermi GRB coincident with the GW150914 event had a duration of about 1 sec and appeared about 0.4
seconds after the GW signal. Within the limit of uncertainty of the Advanced LIGO and Fermi detectors' capabilities it could also be associated spatially.
The GRB fluence in the range 1 keV-10
MeV is 2.8 × 10^-7 erg cm^-2.
The implied source luminosity in gamma rays equals 1.8^+1.5_-1.0× 10^49 erg/s.
We modeled the GRB engine, constrained by the post-merger conditions. Using the code HARM-2D, with the numerical EOS and neutrino cooling implemented into MHD evolution, we estimated the neutrino and Blandford-Znajek luminosities available to power the GRB jet in this source.
In particular, the mass of the accreting torus, created from the circumbinary matter, was tuned to produce adequately low neutrino luminosity for a weak GRB.
Example parameters of the BH mass M=62 M_⊙, the BH spin a=0.7, and the disk mass M_d=15 M_⊙ are consistent with the weak GRB luminosity.
Assuming a low efficiency of conversion of the
neutrino annihilation energy into the jet, and a weak conversion between the
jet kinetic and radiative power, this scenario is able to meet quantitatively
the limits put by the Fermi data.
The disk mass is here just an order-of-magnitude estimate. We first checked that a much larger mass, on the order of the mass of the merged black hole, is too large to be consistent with the observed luminosity limits. On the other hand, the supernova explosion which might have left a 30 solar-mass black hole should have originated from a star of at least 80 solar masses on the zero-age main sequence, in a low-metallicity environment (Abbott et al. 2016b, Spera et al. 2015). Even if a significant amount of the star's mass was ejected during the evolution and explosion, some remnant of this order is plausible.
The luminosity L_ν emitted in neutrinos is simply equal to their volume-integrated emissivity.
To compute the energy flux through the horizon of the BH and the Blandford-Znajek luminosity, L_ BZ, we need
the electromagnetic part of the stress tensor,
T^μν_EM =
b^2 u^μ u^ν + (b^2/2) g^μν - b^μ b^ν,
where the four-vector b^μ is given by b^t ≡
g_iμ B^i u^μ and b^i ≡ (B^i + u^i b^t)/u^t.
We then evaluate the radial energy flux as the power of the Blandford-Znajek process:
L_ BZ≡Ė = 2π∫_0^π dθ √(-g)F_E
where F_E ≡ -T^r_t. This can be subdivided into a matter F^(MA)_E
and electromagnetic F^(EM)_E
part, although in the
force-free limit the matter part vanishes (McKinney & Gammie 2004).
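A sketch of the θ integration at a fixed radius follows; the 1D array layout and the toy input profiles are assumptions for illustration.

```python
import numpy as np

def l_bz(F_E, sqrt_g, theta):
    """2*pi times the theta integral of sqrt(-g) * F_E at the chosen radius."""
    return 2.0 * np.pi * np.trapz(sqrt_g * F_E, theta)

theta = np.linspace(0.0, np.pi, 128)                       # toy grid
print(l_bz(np.sin(theta)**2, np.ones_like(theta), theta))  # toy flux profile
```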
In Figure 4, left panel, we plot the values of L_ BZ as it varies with time for various simulations, depending on the black hole spin. In the right panel, we show the corresponding evolution of the neutrino luminosity.
The maximum of the neutrino luminosity
obtained in our simulations is reached about 0.4 seconds after the equilibrium
torus, prescribed by our initial conditions, has formed. This may tentatively
give a lower limit for the
timescale on which the jet appears after the BH merger. The Blandford-Znajek luminosity is on the order of 10^50-10^51 erg/s, and is non-zero only occasionally for low spins (the jet is powered by the magnetic field only episodically).
We find, therefore, that the GRB luminosity inferred from our simulations
can be reconciled with the observational
upper limits, for moderate spins of the final BH (a=0.6-0.8).
The connection of our computation to the particular event GW150914 and the corresponding Fermi GRB is debatable for several reasons. First, the flare produced by our simulation is longer than the reported very short duration of the GRB (1 s). However, the observed burst was close to the sensitivity limit of the instrument, so it is likely that only the very top of the flare was observed, with the rest of the burst left undetected. Moreover, if our line of sight is offset with respect to the jet axis and we can see only the edge of the jet, the detailed shape of the flare would be affected by the behaviour of the propagating jet. The time delay between the burst and the gravitational signal can be significantly longer if there is a massive star remnant through which the jet has to propagate. The state of the remnant matter after the second BH collapse is very uncertain and needs to be studied in detail in future works. However, we can speculate that the collapse and subsequent orbiting of the two BHs is violent enough to destroy the star, expel part of the matter, and form a dead circumbinary disc of the remnant matter, in which accretion has stopped, around the BH binary. After the merger the newborn black hole can receive a substantial kick towards the circumbinary disc (see our BH merger simulations in Section <ref>), can meet the matter in a fraction of a second, and produce the burst. In that case, there is only little material left along the axis through which the jet has to propagate, and so the time delay can be quite short, on the order of a second. However, this speculative scenario needs both more theoretical modeling and, mainly, more future observations of coincident gravitational and electromagnetic signals that are well above the instrumental limits.
§ CONCLUSIONS
* We have carried out numerical GR MHD simulations of the magnetized torus accreting onto a BH in the GRB central engine. The EOS is no longer assumed to be that of an ideal gas, but accounts for the proper microphysics and neutrino cooling of the GRB engine.
* The neutrino emission and absorption processes account for an additional pressure component in the plasma, and the species are partially degenerate. The resulting density and temperature, as well as the electron fraction, are then taken into account when determining the nuclear equilibrium conditions.
* We determine the abundances of heavy elements (above helium, up to a mass number of ∼100) which are synthesized in the accretion disk in the GRB engine, and in the winds that are magnetically launched from its surface. The observational signatures of the radioactive decay of some of the unstable isotopes can be, for instance, emission lines seen in X-rays (in principle, in the NuSTAR range),
and also faint emission in the lower-energy continuum (i.e. the 'kilonova' or 'macronova' scenario).
* We compute and compare the efficiencies of the GRB jet powered through the neutrino annihilation and the Blandford-Znajek mechanism. We find that the constraints of a moderate BH spin and a small disk-mass to BH-mass ratio are in tentative agreement with the Advanced LIGO and Fermi data, if the latter was really coincident with the GW event.
* A definite answer and a test for our scenario will be possible if further searches for GW events provide more data, also for their electromagnetic counterparts, in both gamma rays and lower energies.
We claim that, even though the observations are still subject to debate, the possible GRB-BBH merger coincidence is worth investigating. The timescales
obtained in our scenario are possibly more plausible for a long gamma-ray burst scenario than for a short one (see also Perna et al. 2016; Loeb 2016), unless
the Fermi burst itself was much longer, but below the detection limit. On the other hand, the model proposed by Zhang (2016) can satisfy the duration and delay timescales, but the interpretation of a charged black hole and its observational signatures also needs further investigation.
This work was supported in part by the grants no. DEC-2012/05/E/ST9/03914 and
UMO-2014/14/M/ST9/00707 from the Polish National Science Center.
We also acknowledge support from the Interdisciplinary Center for Mathematical Modeling of the Warsaw University, through the computational grant G53-5.
ref-journal Abbott, B.P., et al. Observation of Gravitational Waves from a Binary Black Hole Merger. Phys. Rev. Lett. 2016a, 116, 061102
ref-journal Abbott, B.P., et al. Astrophysical Implications of the Binary Black-hole Merger GW150914. ApJL 2016b, 818, L22
ref-journal Abramowicz, M.; Jaroszynski, M.; Sikora, M. Relativistic, accreting disks. A&A 1978, 63, 221
ref-journal Barkov, M.V.; Baushev, A.N. Accretion of a massive magnetized torus on a rotating black hole. New Astr. 2011, 16, 46
ref-journal Barkov, M.V., Komissarov, S.S. Close binary progenitors of gamma-ray bursts. MNRAS 2010, 401, 1644
ref-journal Blandford, R.D., Znajek, R.L. Electromagnetic extraction of energy from Kerr black holes. MNRAS 1977 179, 433
ref-journal Chen, W.X.; Beloborodov, A.M. Neutrino-cooled Accretion Disks around Spinning Black Holes. ApJ 2007, 657, 383
ref-journal Connaughton, V., et al. Fermi GBM Observations of LIGO Gravitational-wave Event GW150914. ApJ Lett. 2016, 826, 6
ref-journal Di Matteo, T.; Perna, R.; Narayan, R.. Neutrino Trapping and Accretion Models for Gamma-Ray Bursts. ApJ 2002, 579, 706
ref-journal Fishbone, L. G.; Moncrief, V. Relativistic fluid disks in orbit around Kerr black holes. ApJ 1976, 207, 962
ref-journal Gammie, C. F.; McKinney, J.C.; Tóth, G. HARM: A Numerical Scheme for General Relativistic Magnetohydrodynamics. ApJ 2003, 589, 444
ref-journal Greiner, J; Burgess, J. M.; Savchenko, V.; Yu, H.-F. On the Fermi-GBM Event 0.4 s after GW150914. ApJ Lett. 2016, 827, 38
ref-journal Hix, W.R., Meyer, B.S., Thermonuclear kinetics in astrophysics. Nucl Phys., 2006 777, 188
ref-journal Janiuk, A.; Bejger, M.; Charzynski, S.; Sukova, P. On the possible gamma-ray burst-gravitational wave association in GW150914. New Astr. 2017, 51, 7
ref-journal Janiuk, A. Nucleosynthesis of heavy elements in gamma ray burst engines. A&A 2014, 568, 105
ref-journal Janiuk, A.; Mioduszewski, P.; Moscibrodzka, M. Accretion and Outflow from a Magnetized, Neutrino Cooled Torus around the Gamma-Ray Burst Central Engine. ApJ 2013, 776, 105
ref-journal Janiuk, A.; Charzynski, S.; Bejger, M. Long gamma ray bursts from binary black holes. A&A 2013, 560, 25
ref-journal Janiuk, A.; Proga, D.; Moderski, R. On the Duration of Long GRBs: Effects of Black Hole Spin. ApJ 2008, 687, 433
ref-journal Janiuk, A.; Yuan, Y.F. The role of black hole spin and magnetic field threading the unstable neutrino disk in gamma ray bursts. A&A, 2010, 509, 55
ref-journal Janiuk, A.; Yuan, Y.F., Perna, R.; Di Matteo, T. Instabilities in the Time-Dependent Neutrino Disk in Gamma-Ray Bursts. ApJ 2007, 664, 1011
ref-journal Janiuk, A.; Perna, R., Di Matteo, T.; Czerny, B. Evolution of a neutrino-cooled disc in gamma-ray bursts. MNRAS, 2004, 355, 950
ref-journal Kocsis, B.; Haiman, Z.; Loeb, A. Gas pile-up, gap overflow and Type 1.5 migration in circumbinary discs: general theory. MNRAS 2012, 427, 2660
ref-journal Kohri, K.; Mineshige, S. Can Neutrino-cooled Accretion Disks Be an Origin of Gamma-Ray Bursts? ApJ 2002, 577, 311
ref-journal Kohri, K.; Narayan, R.; Piran, T. Neutrino-dominated Accretion and Supernovae. ApJ 2005, 629, 341
ref-journal Löffler, F.; et al. The Einstein Toolkit: a community computational infrastructure for relativistic astrophysics. Class. Quantum Grav. 2012, 29, 115001
ref-journal Li, L.X., Paczynski, B.P. Transient Events from Neutron Star Mergers, ApJ 1998, 507, 59
ref-journal Loeb, A. Electromagnetic Counterparts to Black Hole Mergers Detected by LIGO. ApJL 2016, 819, 21
ref-journal Mc Kinney, J.C.; Gammie, C.F. A Measurement of the Electromagnetic Luminosity of a Kerr Black Hole. ApJ 2004, 611, 977
ref-journal Noble, S.C.; Gammie, C.F.; McKinney, J.C.; Del Zanna, L.
Primitive Variable Solvers for Conservative General Relativistic Magnetohydrodynamics. ApJ 2006, 641, 626
ref-journal Perna, R.; Lazzati, D.; Giacomazzo, B.; Short Gamma-Ray Bursts from the Merger of Two Black Holes. ApJL 2016, 821, 18
ref-journal Popham, R.; Woosley S.E.; Fryer, C. Hyperaccreting Black Holes and Gamma-Ray Bursts. ApJ 1999, 518, 356
ref-journal Reynoso, M. M.; Romero, G. E.; Sampayo, O. A. Precession of neutrino-cooled accretion disks in gamma-ray burst engines. A& A 2006, 454, 11
ref-journal Spera, M.; Mapelli, M.; & Bressan, A. The mass spectrum of compact remnants from the PARSEC stellar evolution tracks. MNRAS 2015, 451, 4086
ref-journal Yuan, Y.-F. Electron-positron capture rates and a steady state equilibrium condition for an electron-positron plasma with nucleons. Phys. Rev. D 2005, 72, 013007
ref-journal Wallerstein, G. et al. Synthesis of the elements in stars: forty years of progress. Rev. Mod. Phys. 1997, 69, 995
ref-journal Zhang; B. Mergers of Charged Black Holes: Gravitational-wave Events, Short Gamma-Ray Bursts, and Fast Radio Bursts. ApJL 2016, 827, 31
|
http://arxiv.org/abs/1701.07866v1 | 20170126201746 | Manifestations of minimum-bias dijets in high-energy nuclear collisions | [
"Thomas A. Trainor"
] | hep-ph | [
"hep-ph"
] |
CENPA 354290, University of Washington, Seattle, Washington 98195
Dijets observed near midrapidity in high-energy nuclear collisions result from large-angle scattering of low-x partons (gluons) within projectile hadrons as a signature manifestation of QCD. Within the same collisions it has been claimed that hydrodynamic flows (radial, elliptic and “higher harmonic” flows) carried by a dense QCD medium or quark-gluon plasma (QGP) dominate the observed hadronic final state. The flow-QGP narrative is imposed a priori on primary particle data, and of all possible analysis methods a subset A that seems to support that narrative is preferred. The present study explores an alternative minimum-bias (MB) jet narrative – quantitative correspondence of MB dijet manifestations in the hadronic final state with measured isolated jet properties. The latter incorporates a different set of methods B that emerge from inductive study of primary particle data without a priori assumptions. The resulting system of methods and data manifestations is represented by a two-component (soft + hard) model (TCM) of hadron production. A survey of methods reveals that type A tends to discard substantial information carried by primary particle data whereas type B retains almost all information in both primary particle data from nuclear collisions and from isolated jets. The main goal of the present study is a review of MB dijet contributions to high-energy collisions in small and large systems relative to measured isolated-jet properties. Representative analysis methods from types A and B are compared in the context of MB jet manifestations. This study suggests that at least some data features commonly attributed to flows actually result from MB dijets and thereby challenges the flow-QGP narrative.
12.38.Qk, 13.87.Fh, 25.75.Ag, 25.75.Bh, 25.75.Ld, 25.75.Nq
Manifestations of minimum-bias dijets in high-energy nuclear collisions
Thomas A. Trainor
December 30, 2023
========================================================================
§ INTRODUCTION
This study reviews manifestations of minimum-bias (MB) dijets in the hadronic final state of high-energy nuclear collisions in the context of claimed collectivity (flows) in the same systems. There is widespread belief that hydrodynamic flows play a dominant role in high-energy collisions wherein a dense medium (quark-gluon plasma or QGP) is formed that supports a collective velocity field manifested by features of hadron distributions <cit.>. However, it is possible that at least some data features attributed to flows may relate to MB dijets <cit.>. In order to clarify such ambiguities MB dijets should be understood in p-p, p-A and A-A collisions in relation to eventwise-reconstructed (isolated) jets derived independently from p-p̄ and e^+-e^- collisions over a broad range of collision energies. That is the main goal of this study.
Flows and QGP are intimately related by the assumption that a dense, strongly-interacting medium developed during nucleus-nucleus (A-A) collisions should respond to initial-state energy- and matter-density gradients by developing a velocity field (various flows) whose consequences may be observed in the hadronic final state <cit.>. Observation of flow manifestations and a causal relation to A-A initial-state geometry is sought a priori and interpreted to imply that a QGP has been established <cit.>.
Flow-QGP claims then rely on assignment of certain data features to flows and may refer to others as “nonflow” without further elaboration <cit.>. The flow-QGP narrative forms the basis for other data analysis and interpretation, and MB jet contributions may be minimized by preferred analysis methods and interpretation strategies.
The flow narrative is related to data by an assumption that most hadrons emerge from “freezeout” of a locally-thermalized, flowing dense medium <cit.>. The hadron p_t spectrum is divided into several intervals with specific physical interpretations: thermalization and flows for p_t < 2 GeV/c <cit.>, high-p_t jet phenomena for p_t > 5 GeV/c <cit.> and an intermediate region where production mechanisms are debated <cit.>. Such assumptions provide a preferred context for analysis and interpretation of high-energy data.
For example, transverse-momentum spectra may be fitted with a monolithic model function interpreted to reflect a thermodynamic context and to measure radial flow <cit.>. Two-dimensional (2D) angular correlations are projected onto periodic 1D azimuth ϕ and represented by Fourier series wherein each Fourier term is interpreted to represent a type of transverse flow <cit.>. Charge and ⟨ p_t ⟩ fluctuations are addressed with statistical measures motivated by thermodynamic assumptions including some degree of local thermalization of a bulk medium. Intensive ratios or ratios of ratios are preferred over extensive measures of collision observables such as integrated charge multiplicity n_ch or integrated P_t within some angular domain <cit.>. Jet contributions are acknowledged only within restricted p_t intervals including a small fraction of all hadrons, and assumed jet properties are based on conjecture rather than actual jet measurements <cit.>.
In contrast, the properties of eventwise-reconstructed dijets, their fragment momentum distributions (fragmentation functions or FFs) <cit.> and jet (leading-parton) energy spectra <cit.> measured over thirty years predict quantitatively certain manifestations of MB dijets that appear in high-energy collision data <cit.>. Predictions from isolated-jet measurements [as opposed to perturbative QCD (pQCD) theory with its limitations] are inconsistent with much of the flow narrative and its presumed basis in measurement as discussed below.
In this study I examine the process of measure design in several critical areas and compare competing analysis methods. I demonstrate how measured properties of isolated jets predict certain data features from nuclear collisions corresponding to MB dijets and how analysis methods motivated by the flow narrative may lead to attribution of the same features to flows. I show how a two-component (soft + hard) model (TCM) of hadron production in high-energy collisions emerges naturally from inductive analysis of spectrum and correlation data, is not imposed a priori, and how the TCM is manifested in other contexts. A number of examples are presented. I conclude that when data features are reexamined in the context of isolated-jet measurements little substantial evidence remains to support the flow narrative.
This article is arranged as follows:
Section <ref> discusses preferred analysis methods.
Section <ref> introduces a two-component spectrum model for p-p collisions.
Section <ref> reviews the measured properties of reconstructed jets.
Section <ref> describes MB jet contributions to p-p spectra.
Section <ref> presents MB jet contributions to A-A spectra.
Section <ref> reviews MB jet contributions to two-particle charge correlations.
Section <ref> presents MB jet contributions to fluctuations and angular correlations.
Sections <ref> and <ref> present discussion and summary.
§ PREFERRED ANALYSIS METHODS
For some aspects of data analysis alternative methods may give significantly different results and support different physical interpretations. The overall interpretation of high-energy nuclear collisions then depends on a sequence of method choices. How should such choices be made to establish an overall result that best reflects reality? One possible criterion is the fraction of information carried by primary particle data that is retained by a method for hypothesis testing. MB dijet manifestations in nuclear collisions compared to measured properties of isolated jets may provide a basis for evaluation.
§.§ Primary and secondary observables
All analysis methods are based on primary particle data. A charged-particle detector (e.g. time-projection chamber) determines the primary hadron single-particle (SP) observables in high-energy nuclear collisions: transverse momentum p_t, pseudorapidity η, azimuth angle ϕ, charge sign and possibly hadron species via particle ID, in which case η → y_z (longitudinal rapidity) and p_t → m_t = √(p_t^2 + m_h^2) (transverse mass) with m_h a hadron mass. Transverse rapidity y_t ≡ ln[(p_t + m_t)/m_h] provides superior visual access to SP spectrum structure at lower p_t. For unidentified hadrons the pion mass may be assumed. SP densities are defined on p_t as spectra and on (η,ϕ) as angular densities.
Two-particle (pair) densities defined on 6D momentum space (p_t1,η_1,ϕ_1,p_t2,η_2,ϕ_2) or a subspace may reveal certain pair correlations identified with physical mechanisms. 2D pair densities on (x_1,x_2) may be projected by averaging onto difference variables x_Δ = x_1 - x_2 to obtain a joint angular autocorrelation on the reduced space (p_t1,p_t2,η_Δ,ϕ_Δ) <cit.> which can be further reduced by integration over p_t bins or the entire p_t acceptance. In discussing 2D angular correlations it is convenient to separate azimuth difference ϕ_Δ into two intervals: same-side (SS, |ϕ_Δ| < π / 2) and away-side (AS, |ϕ_Δ - π| < π / 2).
Charge multiplicity n_ch and particle p_t may be integrated over some angular acceptance (Δη,Δϕ) or multiple bins within an angular acceptance to obtain n_ch and P_t as extensive eventwise random variables (RVs) whose fluctuations may be of interest <cit.>. Uncorrected (observed) charge multiplicities denoted by n̂_ch are relevant to Poisson statistics, for instance in determining void probabilities defined below. n_ch then denotes corrected values.
Secondary observables may be defined as combinations of primary observables, for instance eventwise mean ⟨ p_t ⟩ = P_t / n_ch as an intensive RV <cit.>, event-ensemble-mean p̅_t = P̅_t / n̅_ch characterizing a collision system <cit.>, spectrum ratio R_AA as the ratio of a central A-A spectrum to a p-p spectrum <cit.>, and v_2(p_t) also as a ratio of distinct hadron spectra <cit.>. Fluctuation and pair-correlation measures (variances and covariances) may be combined with other statistics in sums, differences or ratios to define secondary statistics. SP and pair momentum spaces may be partitioned (possibly based on a priori assumptions), for instance defining certain p_t intervals within which specific physical mechanisms are expected to dominate collision data according to some narrative.
§.§ Competing analysis methods
Analysis methods are generally not unique. A specific combination of methods contributing to a published analysis may comprise a subset of available methods determined by a sequence of choices among alternatives, possibly guided by a preferred narrative. In a given context (e.g. SP spectra or pair angular correlations) alternative selections may lead to significantly different physical interpretations of collision data.
For instance, each of extensive measures P_t and n_ch or their ensemble means P̅_t and n̅_ch may reveal certain data trends inconsistent with a temperature hypothesis but supporting an alternative hypothesis whereas fluctuations of intensive eventwise ⟨ p_t ⟩ or systematics of ensemble p̅_t may be seen as reflecting local temperature variations of a conjectured bulk medium. Critical extensive trends may be suppressed by cancellations within intensive ratios.
Hadron pair correlations from high-energy nuclear collisions projected onto 1D azimuth exhibit strong nonuniformities (literally “azimuthal anisotropy”) that may originate from several physical mechanisms including MB dijets. However, the term “anisotropic flow” is commonly interpreted as synonymous with azimuthal anisotropy <cit.>. Any distribution on periodic azimuth, no matter what its physical origins, can be described exactly by a Fourier series (FS). The assertion “The second Fourier coefficient of the azimuthal asymmetry [i.e. anisotropy] is called elliptic flow” <cit.> reflects a common assumption. Alternative modeling of azimuth distributions may favor a different physical interpretation.
This study emphasizes manifestations of MB dijets from high-energy nuclear collisions within several contexts (e.g. yields, spectra, correlations, statistical fluctuations) and their relation to selection of specific analysis methods: How are MB dijets, consistent with measured isolated-jet properties, revealed or concealed by method choices, and what criteria would insure conscious and unbiased choices that lead to meaningful interpretations?
§ TWO-COMPONENT SPECTRUM MODEL
In Ref. <cit.> a detailed analysis of p_t spectra was applied to ten multiplicity classes of 200 GeV p-p collisions. No a priori assumptions about spectrum structure were imposed. The main goal was to understand systematic variation of spectrum shape with n_ch in terms of algebraic models inferred from data alone: given available spectrum data what is the most efficient algebraic description? For reasons given in Sec. <ref> this analysis is presented in terms of transverse rapidity y_t.
§.§ Spectrum data and soft component
Figure <ref> (left) shows p_t spectra for ten multiplicity classes normalized by soft-component multiplicity n_s (points) and displaced upward from each other by successive factors 40 relative to the lowest spectrum (the terms “soft” and “hard” are interpreted below) <cit.>. Empirically, all spectra are observed to coincide at lower y_t if normalized by n_s ≈ n_ch - α n_ch^2 for some α ≈ 0.01. The definition of n_s in terms of n_ch is refined further in Sec. <ref>.
Figure <ref> (right) shows running integrals of the ten normalized spectra in the left panel. The spectra have been extrapolated to y_t = 0 (note that spectra are nearly constant at lower y_t). The running integral is a means to enhance a long-wavelength signal (spectrum shape) over short-wavelength (statistical) noise. Different behavior is observed in each of the y_t intervals A, B and C (spanning the detector acceptance). In interval A the integrals approximately coincide. In interval B the integrals diverge substantially. In interval C the integrals are nearly constant, and those constant values increase approximately linearly with n_ch. The limiting case for n_s → 0 is described by function N_0(y_t) (dash-dotted curve) defined below. Those results indicate that the main n_ch dependence lies within interval B as the running integral of a peaked spectrum component with amplitude ∝ n_ch.
The limiting case N_0(y_t) is modeled by the running integral of a unit-normal Lévy distribution on m_t <cit.>:
Ŝ_0(m_t) = A(T_0,n_0)/[1 + (m_t - m_h) / n_0 T_0]^n_0,
where T_0 ≈ 145 MeV controls the function mainly in interval A and n_0 ≈ 12.8 controls the function mainly in interval C. The Jacobian factor from m_t to y_t is p_t m_t / y_t. The resulting Ŝ_0(y_t) model inferred directly from data trends can be subtracted to reveal the peaked spectrum (hard) component residing mainly within interval B.
§.§ Spectrum hard component
Figure <ref> (left) shows the normalized spectrum data in Fig. <ref> (left) with fixed model Ŝ_0(y_t) inferred from Eq. (<ref>) subtracted to reveal peaked distributions (points and dashed curves) centered on interval B with amplitudes increasing approximately ∝ n_ch. With a few exceptions discussed below the distributions are well described by a two-parameter Gaussian function (solid curves). That result suggests that the peaked hard component H(y_t,n_ch) has the factorized form H(y_t,n_ch) / n_s ∝ n_ch Ĥ_0(y_t).
Figure <ref> (right) shows data in the left panel rescaled by soft-component multiplicity n_s (rather than n_ch) (solid) compared with a fixed Gaussian model in the form α Ĥ_0(y_t) with centroid y̅_t ≈ 2.7 and width σ_y_t ≈ 0.45 (dashed) and with coefficient α ≈ 0.006 determined by the data-model comparison. In summary, a TCM for p_t spectra is inferred inductively from spectrum data alone as
dn_ch/y_t dy_t = S_pp(y_t,n_ch) + H_pp(y_t,n_ch)
= n_s(n_ch) Ŝ_0(y_t) + n_h(n_ch) Ĥ_0(y_t),
where y_t and n_ch trends have been factorized separately for the two spectrum components. There are two exceptions: (a) Significant systematic deviations from Ĥ_0(y_t) are observed for the lowest n_ch classes. (b) Smaller systematic deviations are also observed for all classes near the upper limit of the y_t acceptance. Both exceptions were reconsidered in Ref. <cit.> and incorporated into an extended TCM describing spectrum data over the full range of collision energies (see Sec. <ref>).
More-detailed analysis <cit.> shows that the relation n_h = α n_s^2 is required by data, with α ≈ 0.006 for 200 GeV p-p collisions <cit.> (and see Fig. <ref>, left). Adding the condition n_ch = n_s + n_h defines n_s(n_ch) and n_h(n_ch) in terms of corrected total multiplicity n_ch (as opposed to detected n̂_ch ≈ n_ch/2). The resulting SP spectrum TCM with fixed parameters and functional forms describes spectra accurately over an n_ch interval corresponding to a 10-fold increase in the soft component and a 100-fold increase in the hard (dijet) component. There was no requirement for physical interpretation of the TCM components as the model was inferred from spectrum data.
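A numerical sketch of the TCM follows, using the parameter values quoted above (T_0 = 145 MeV, n_0 = 12.8, y̅_t = 2.7, σ_y_t = 0.45, α = 0.006); the overall Lévy normalization A is left schematic rather than unit-normalized.

```python
import numpy as np

M_PI = 0.1396  # GeV; pion mass assumed for unidentified hadrons

def split_nch(n_ch, alpha=0.006):
    """Solve n_ch = n_s + alpha*n_s**2 for the soft multiplicity n_s."""
    n_s = (-1.0 + np.sqrt(1.0 + 4.0 * alpha * n_ch)) / (2.0 * alpha)
    return n_s, alpha * n_s**2

def tcm_spectrum(y_t, n_ch, T0=0.145, n0=12.8, ybar=2.7, sig=0.45, A=1.0):
    """dn_ch / (y_t dy_t): soft Levy component plus hard Gaussian component."""
    m_t = M_PI * np.cosh(y_t)                      # from y_t = ln[(p_t + m_t)/m_h]
    S0 = A / (1.0 + (m_t - M_PI) / (n0 * T0))**n0  # schematic normalization A
    H0 = np.exp(-0.5 * ((y_t - ybar) / sig)**2) / (sig * np.sqrt(2.0 * np.pi))
    n_s, n_h = split_nch(n_ch)
    return n_s * S0 + n_h * H0
```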
§.§ Alternative spectrum models
The TCM derived inductively from spectrum data requires five fixed parameters to describe spectra for any event multiplicity within a large interval <cit.>. Alternative spectrum models could be proposed with free parameters determined for each event class by fits to data. One such model is the so-called “power-law” (n) or Tsallis (q) distribution that is similar in form to Eq. (<ref>)
P(x_t) = A/[1 + x_t / n T]^n,
where x_t is p_t <cit.> or m_t <cit.> and n ↔ 1/(q-1).
Figure <ref> shows the result of fitting the spectrum data from Fig. <ref> (left) with Eq. (<ref>). On the left are fit residuals for ten multiplicity classes in units of statistical uncertainty for each y_t value. That format differs from more-conventional presentations in terms of data/model ratios that strongly suppress residuals at lower p_t. Especially for lower event multiplicities those fits should be rejected given standard criteria. On the right are values for parameter n derived from fitting Eq. (<ref>) to spectrum data (solid points) and to the TCM described above (open points) compared to the fixed value n_0 = 12.8 for the TCM itself. The n variation descends from values exceeding the TCM soft-component value at lower n_ch to values consistent with the jet-related TCM hard component at larger n_ch as dijet production increases ∝ n̂_ch^2 per Ref. <cit.>.
It could be argued that the TCM for spectra is simply one of several competing spectrum models and that the power-law model should be preferred as requiring only two parameters compared to five for the TCM. However, the TCM is not a simple fitting exercise applied to a single data spectrum. Ref. <cit.> established that a spectrum TCM inferred inductively from the n_ch dependence of spectra without any a priori assumptions is necessary to describe consistently an ensemble of high-statistics spectra over a large n_ch range (e.g. factor 10). Separation of spectrum data into two components does not depend on an imposed model. Specific model functions were introduced only after resolution of data spectra into two components based on n_ch trends.
In contrast, what emerges from power-law fits is twenty parameter values to describe ten multiplicity classes as opposed to five for the TCM. The large penalty for poor fits in Figure <ref> (left) is even more determining. The dramatic variation of power-law parameter n in the right panel has no a priori explanation, whereas in a TCM context the variation occurs because an inappropriate (soft) model attempts to accommodate quadratic increase of the jet-related (hard) spectrum component with n_ch.
§ EVENTWISE-RECONSTRUCTED JETS
Jets may be reconstructed eventwise from final-state hadrons within the full hadron momentum space including scalar momentum and angular correlations relative to an inferred leading-parton four momentum. This section emphasizes scalar-momentum dependence of reconstructed jets and jet fragments. A joint density distribution on parton (jet) energy E_jet and scalar fragment momentum p_frag can be factorized as P(E_jet,p_frag) = P(E_jet) P(p_frag|E_jet), where conditional distribution P(p_frag|E_jet) → D_p^h(p_frag|E_jet) is a fragmentation function (FF) for parton type p fragmenting to hadron type h, and P(E_jet) → dσ_j/dE_jet is the jet energy spectrum for a given dijet source (e.g. p-p collisions). In this section simple and accurate parametrizations of isolated-jet data for p-p̄ and e^+-e^- collision systems are based on logarithmic rapidity variables, with E_jet ↔ p_t to accommodate some conventional notation.
§.§ p-p jet (scattered-parton) energy spectra
A QCD-related energy dependence is typically of the form log(Q/Q_0) where Q_0 represents some characteristic energy scale. A description of fragmentation functions in terms of rapidity variable y = ln[(p + E) / m_h] ≈ln(2p/m_h) as in Ref. <cit.> is presented in the next subsection.
A jet spectrum near midrapidity for collision energy √(s) can be written in terms of a jet “rapidity” by
p_t d^2 σ_j/dp_t dη = d^2σ_j/dy_max dη,
where y_max≡ln(2 E_jet / m_π) was first defined in Ref. <cit.> in connection with fragmentation functions as summarized in the next subsection and E_jet→ p_t as noted above.
Study of jet-related yields, spectra and angular correlations in 200 GeV p-p collisions reveals that the jet-related (hard-component) density dn_h/dη (and presumably jet production dn_j/dη) scales with the soft-component density as dn_h/dη ∝ (dn_s / dη)^2.
Given that relation and dn_s / dη ∝ log(s/s_0) ≡ 2Δ y_b (with √(s_0) ≈ 10 GeV) near midrapidity <cit.> the number of MB dijets (dominated by lowest-energy jets) appearing near midrapidity in p-p collisions should vary with collision energy as <cit.>.
d^2σ_j/dy_max dη ∝ Δ y_b^2 near some E_min.
Kinematic constraints impose the upper limit 2E_jet < √(s) (y_max < y_b) with y_b = ln(√(s) / m_π). Evidence from jet <cit.> and SP-hadron <cit.> spectra suggests a lower limit E_jet > E_min (y_max > y_min) with E_min≈ 3 GeV. A normalized jet rapidity variable can then be defined by
u = y_max - y_min/y_b - y_min = log(E_jet / E_min)/log(√(s) / 2 E_min)∈ [0,1].
Figure <ref> (left) shows ISR and Spp̄S jet spectrum data (points) with the jet spectra rescaled vertically by factor (Δ y_b)^2 and parton rapidity y_max rescaled horizontally to u assuming E_min≈ 3 GeV.
All spectrum data for collision energies below 1 TeV fall on the common locus 0.15 exp(-u^2/2σ_u^2) (solid curve).
The parametrized parton spectrum conditional on beam energy is then
d^2σ_j/dy_max dη = 0.026 Δ y_b^2 1/√(2πσ^2_u) e^-u^2 / 2 σ^2_u,
where 0.026/√(2πσ^2_u) = 0.15 and σ_u ≈ 1/7 is determined empirically from the data. All jet production over nine decades is then represented by parameters √(s_0)≈ 10 GeV, E_min≈ 3 GeV and σ_u ≈ 1/7. Endpoints √(s_0) and E_min are closely related by kinematic constraints on fragmentation to charged hadrons from low-x gluons <cit.>.
Figure <ref> (right) shows ISR and Spp̄S spectrum data from the left panel plotted vs p_t ↔ E_jet in a conventional log-log format. The curve for each beam energy is defined by Eq. (<ref>). The dotted curve corresponds to √(s) = 630 GeV <cit.>. The model curves extend to u = 0.9 corresponding to partons with momentum fraction x = 2E_jet / √(s)≈ 2/3 beyond which the collision-energy constraint should strongly influence the spectra.
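For readers who wish to evaluate the parametrization numerically, a minimal sketch follows (Python assumed; the parameter values are those quoted above, and the printed integral is only a rough consistency check on the per-η dijet cross section, since the u < 0 part of the Gaussian is excluded by the E_min bound):

```python
import numpy as np

M_PI  = 0.1396    # pion mass [GeV/c^2]
E_MIN = 3.0       # effective jet-spectrum lower bound [GeV]
SIG_U = 1.0/7.0   # Gaussian width on normalized rapidity u

def jet_spectrum(e_jet, sqrt_s):
    """Parametrized d^2(sigma_j)/dy_max/deta [mb] from the equation above."""
    dy_b = np.log(sqrt_s / 10.0)   # Delta y_b with sqrt(s_0) = 10 GeV
    u = np.log(e_jet / E_MIN) / np.log(sqrt_s / (2.0 * E_MIN))
    return 0.026 * dy_b**2 * np.exp(-u**2 / (2.0 * SIG_U**2)) / np.sqrt(2.0 * np.pi * SIG_U**2)

# rough per-eta dijet cross section at 200 GeV from this parametrization
e = np.geomspace(E_MIN, 100.0, 500)
y_max = np.log(2.0 * e / M_PI)
print(np.trapz(jet_spectrum(e, 200.0), y_max))   # [mb]
```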
§.§ Fragmentation functions
Dijet formation depends on parton energy scale Q = 2E_jet. Fragment momenta are described by rapidity y = ln[(E + p)/m_π] (for an unidentified hadron fragment with scalar momentum p) with maximum rapidity y_max as defined below Eq. (<ref>). FFs are then described by D(y|y_max) = 2dn_ch,j/dy, the fragment rapidity density per dijet.
The FF parametrization is D(y|y_max) = 2 n_ch,j(y_max) β(u;p,q)/y_max, where β(u;p,q) is a unit-normal (on u) beta distribution, u = (y - y_min) / (y_max - y_min) ∈ [0,1] is a normalized fragment rapidity, and parameters p and q (for each parton-hadron combination) are nearly constant over a large jet energy interval <cit.>. The total fragment multiplicity 2 n_ch,j(y_max) (from two jets) is inferred from the shape of β(u;p,q) (and therefore parameters p and q) via parton (jet) energy conservation.
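The beta-distribution model is easy to evaluate numerically. The sketch below is illustrative only: the shape parameters p, q and dijet multiplicity 2n_ch,j are placeholder inputs rather than the measured values tabulated in the cited references, and the Jacobian 1/(y_max − y_min) is assumed for the change of variables from u to y:

```python
import numpy as np
from scipy.stats import beta as beta_dist

Y_MIN = 0.35   # fragment-rapidity lower bound (p ~ 50 MeV/c, next subsection)

def ff(y, y_max, n_ch_j, p, q):
    """Beta-model FF D(y|y_max): unit-normal beta(u;p,q) on u mapped to y.
    The 1/(y_max - Y_MIN) factor is the Jacobian du/dy (an assumption here)."""
    u = (np.asarray(y) - Y_MIN) / (y_max - Y_MIN)
    d = 2.0 * n_ch_j * beta_dist.pdf(u, p, q) / (y_max - Y_MIN)
    return np.where((u > 0.0) & (u < 1.0), d, 0.0)

y = np.linspace(0.5, 4.5, 9)
print(ff(y, y_max=4.5, n_ch_j=3.0, p=2.0, q=3.0))   # fragments per unit y, per dijet
```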
Figure <ref> (left) shows measured FFs (points) for three dijet energies <cit.> extending down to very low fragment momentum (less than 100 MeV/c). When plotted on fragment rapidity y the FFs show a self-similar evolution with maximum rapidity y_max. The solid curves show the corresponding FF parametrization developed in Ref. <cit.> based on the beta distribution as noted above.
Figure <ref> (right) shows the self-similar data in the left panel rescaled to unit integral and plotted on scaled fragment rapidity u with y_min≈ 0.35 (p ≈ 50 MeV/c). The solid curves are corresponding beta distributions with parameters p and q nearly constant over a large jet energy interval. The simple two-parameter description is accurate to a few percent within the E_jet interval 3 GeV (y_max≈ 3.75) to 100 GeV (y_max≈ 7.25) <cit.>. FF data for light-quark and gluon jets are parametrized separately but the parametrizations for gluon and light-quark jets converge near E_jet = 3 GeV.
All minimum-bias jet fragment production can be described with a few universal parameters via introduction of logarithmic rapidities.
FFs for isolated dijets derived from e^+e^- collisions are quite different from FFs derived from p-p or p-p̄ collisions as noted below. Minor differences might arise from alternative jet reconstruction algorithms, but the larger observed differences suggest that the concept of FF universality may not apply across distinct collision systems.
Figure <ref> (left) shows FFs for ten dijet energies from 78 to 573 GeV inferred from 1.8 TeV p-p̄ collisions (points) using eventwise jet reconstruction <cit.>. The solid curves are a parametrization. Comparison with the FF data in Fig. <ref> (left) (e.g. dashed curves in this panel for 2E_jet = 6 and 91 GeV) reveals that a substantial portion of dijet FFs at lower fragment momenta may be missing from reconstructed FFs. This comparison is dominated by quark jets, but quark and gluon FFs converge near E_jet ≈ 3 GeV where most MB jets appear.
Figure <ref> (right) shows the ratio of FF data in the left panel to the FF parametrization for each jet energy (points), revealing systematic differences. The solid curve is tanh[(y-1.5)/1.7], which describes measured p-p̄ FFs relative to e^+e^- FFs for dijet energies below 70 GeV. FF parametrizations for quark and gluon jets used for p-p collisions in the present study are the parametrizations of Ref. <cit.> and Fig. <ref> multiplied by the same tanh factor for both quark and gluon FFs.
§ JET CONTRIBUTIONS TO P-P SPECTRA
The MB dijet contribution to spectra and other momentum-space measures should consist of all hadron fragments from all dijets emerging from a given collision system and appearing within a certain angular and p_t acceptance. Given the jet spectrum and FF parametrizations in the previous section, the resulting MB fragment distribution can be obtained from a convolution integral.
§.§ Minimum-bias fragment distributions
The midrapidity η density of MB dijets from non-single-diffractive (NSD) collision is estimated by
f_NSD = 1/σ_NSDdσ_j/dη ≈ 0.028 at 200 GeV
given σ_NSD≈ 36.5 mb <cit.> and dσ_j/dη≈ 1 mb <cit.> at that energy. The ensemble-mean fragment distribution for MB dijets is defined by the convolution integral
D̅_u(y) ≈ 1/dσ_j/dη∫_0^∞ dy_max D_pp(y|y_max) d^2σ_j/dy_maxdη,
where subscript u denotes unidentified-hadron fragments.
Given that a spectrum hard component H(y_t) represents hadron fragments from MB dijets it can be expressed as y_t H(y_t) ≈ϵ f_NSDD̅_u(y), where ϵ(Δη,Δη_4π) ∈ [0.5,1] is the average fraction of a dijet appearing in detector acceptance Δη compared to effective 4π acceptance Δη_4π (which depends on collision energy). At 200 GeV ϵ≈ 0.6 within detector acceptance Δη = 2.
The spectrum hard component so defined represents the fragment contribution from MB scattered parton pairs into acceptance Δη. It is assumed that for midrapidity jets y_t ≈ y (i.e. collinearity) except for low-momentum fragments (e.g. p_t < 0.5 GeV/c).
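A minimal numerical sketch of the convolution follows, reusing the illustrative jet-spectrum and FF models from the sketches above; holding the FF multiplicity fixed is a deliberate simplification (measured fragment multiplicities grow with jet energy), so the result is schematic rather than quantitative:

```python
import numpy as np
from scipy.stats import beta as beta_dist

M_PI, E_MIN, SIG_U, Y_MIN = 0.1396, 3.0, 1.0/7.0, 0.35

def jet_spectrum(e_jet, sqrt_s=200.0):        # d^2 sigma_j / dy_max deta [mb]
    u = np.log(e_jet / E_MIN) / np.log(sqrt_s / (2.0 * E_MIN))
    return (0.026 * np.log(sqrt_s / 10.0)**2
            * np.exp(-u**2 / (2.0 * SIG_U**2)) / np.sqrt(2.0 * np.pi * SIG_U**2))

def ff(y, y_max, n_ch_j=4.0, p=2.0, q=3.0):   # illustrative FF model (fixed multiplicity)
    u = (y - Y_MIN) / (y_max - Y_MIN)
    return np.where((u > 0) & (u < 1),
                    2.0 * n_ch_j * beta_dist.pdf(u, p, q) / (y_max - Y_MIN), 0.0)

# the convolution integral above, evaluated on a discrete (y, y_max) grid
y_max = np.linspace(np.log(2.0 * E_MIN / M_PI), 8.0, 300)
e_jet = 0.5 * M_PI * np.exp(y_max)            # invert y_max = ln(2 E_jet / m_pi)
w     = jet_spectrum(e_jet)
y     = np.linspace(0.4, 6.0, 200)
D_bar = (np.trapz(ff(y[:, None], y_max[None, :]) * w[None, :], y_max, axis=1)
         / np.trapz(w, y_max))                # ensemble-mean fragments per dijet on y
```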
Fig. <ref> (left) shows a surface plot of the Eq. (<ref>) integrand—D_pp(y|y_max) d^2σ_j/dy_max dη—incorporating FFs from Fig. <ref> (left) and the 200 GeV jet (→ y_max) spectrum from Fig. <ref> (right). The FFs are bounded below by y_min≈ 1.5 (p ≈ 0.3 GeV/c). The jet-spectrum effective lower bound is E_min≈ 3 GeV. The z axis is logarithmic to show all distribution structure.
Fig. <ref> (right) shows the corresponding mean fragment distribution ϵD̅_u(y) as a projection (dashed) described by Eq. (<ref>) and compared to hard-component data from 200 GeV NSD collisions (solid points <cit.>) in the form y_t H(y_t) / f_NSD corresponding to their relation in the text just below Eq. (<ref>). The open boxes are 200 GeV jet-spectrum data from Ref. <cit.>. The dash-dotted curve S_p is Eq. (<ref>) reduced by factor 2/3 so D̅_u(y) from Eq. (<ref>) (dashed) best accommodates the SP spectrum data (solid points).
The apparent difference between calculated and measured jet spectra falls within the systematic uncertainties of Ref. <cit.>. On the other hand the FF parametrization from Fig. <ref> extrapolated down to E_jet < 10 GeV (y_max < 5) may overestimate the fragment yield there substantially. In any case the measured spectrum hard component from 200 GeV NSD collisions is quantitatively consistent with a dijet contribution derived from eventwise-reconstructed jets <cit.>. Given the spectrum hard-component mode near p_t = 1 GeV/c this comparison also establishes that a parton (jet) spectrum effective lower limit E_min = 3.0 ± 0.2 GeV is required by 200 GeV spectrum data. Extrapolating the (≈ power-law) jet spectrum significantly below 3 GeV (e.g. as in some Monte Carlos) would result in a large overestimate of H(y_t) for fragment momenta below 1 GeV/c.
§.§ Measured spectrum hard components
The comparison in Fig. <ref> provides convincing evidence from a specific 200 GeV NSD SP spectrum that its TCM hard component does indeed represent a MB jet fragment distribution. A more-recent study reveals what can be learned from multiplicity and collision-energy dependence.
Figure <ref> (left) shows the evolution of a revised TCM spectrum hard-component model as derived in Ref. <cit.> based on recent high-statistics p-p spectrum data from Ref. <cit.>. The variation below the hard-component mode near y_t = 2.7 is already apparent in the early data of Ref. <cit.> shown in Fig. <ref> (right). Whereas the model shape was initially held fixed, independent of n_ch, to retain simplicity, the revised TCM of Ref. <cit.> accommodates all significant variation of spectrum data. Shape variations below and above the mode are evidently tightly correlated. Interpreted in a jet-related context, the revised model suggests that as larger event multiplicities are required the underlying jet spectrum is biased toward more-energetic jets with larger fragment multiplicities.
It is important to note that while the TCM hard component was initially modeled by a simple Gaussian on y_t <cit.>, subsequent detailed analysis of p-p and A-A spectrum data combined <cit.> revealed that an exponential tail on y_t is required for the hard-component Gaussian. The corresponding power-law trend on p_t required by data for 200 GeV p-p collisions is then ≈ 1/p_t^7, as indicated by the dashed line in Figure <ref> (left). The power-law exponent evolution with collision energy in Figure <ref> (right) <cit.> is compatible with jet-spectrum measurements <cit.>. In contrast, the “power law” of soft component Ŝ_0(m_t), corresponding to ≈ 1/p_t^13 at 200 GeV, appears to be unrelated to jet physics.
Figure <ref> (right) shows spectrum hard components for a range of collision energies from SPS to top LHC energies based on the analysis in Ref. <cit.>.
The main variation is reduction of the power-law exponent describing the distribution high-p_t tail (note dashed lines at right), which can be compared with the jet-spectrum evolution shown in Fig. <ref> (right). The close correspondence between TCM spectrum hard components and jet spectra provides additional support for interpretation of the hard component as a MB jet fragment distribution.
§.§ Hadron yields and p̅_t vs p-p multiplicity
TCM analysis of differential spectrum structure as described above can be supplemented by statistical measures, e.g. integrated yields n_x or mean angular densities ρ̄_x within some angular acceptance, and ensemble-mean p̄_t. Given TCM soft and hard components inferred from differential spectra, the integrated multiplicities n_s and n_h and ensemble means p̄_ts and p̄_th can be computed. Figure <ref> (right) shows an initial comparison from Ref. <cit.> suggesting a quadratic relation between n_h and n_ch. A study of recent high-statistics spectrum data from Ref. <cit.> establishes more accurate relations.
Figure <ref> (left) shows ratio n_h / n_s vs soft-component mean density ρ̄_s = n_s / Δη for ten multiplicity classes <cit.>. n_h is the integral of hard-component H(y_t) appearing differentially in Fig. <ref> (left) and as a running integral in Fig. <ref> (right) (excesses above unity at right). The linear trend for n_h/n_s confirms the relation ρ̄_h ∝ ρ̄_s^2 <cit.>. Soft-component density ρ̄_s may be interpreted as a proxy for the density of low-x gluons released from projectile nucleons in a p-p collision. p-p spectrum data then reveal that the number of midrapidity dijets ∝ ρ̄_h varies quadratically with the number of participant gluons. But in an eikonal collision model the number of gluon-gluon binary collisions should vary as the dashed curve representing ρ̄_h ∝ ρ̄_s^4/3, as for the Glauber model of A-A collisions. The data appear inconsistent with the eikonal model.
Figure <ref> (right) shows p̄_t vs soft-component mean density ρ̄_s, well described by a constant plus linear term. The solid line is a TCM for that quantity derived from the spectrum TCM of Eq. (<ref>). Given the evidence in this section, one may conclude that p̄_t variation with ρ̄_s is determined entirely by the jet-related hard component.
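The quadratic relation can be used to decompose a measured charge density into soft and hard components by inverting ρ̄_0 = ρ̄_s + αρ̄_s^2. A minimal sketch follows, with an illustrative coupling α of the order reported for 200 GeV p-p data:

```python
import numpy as np

def soft_density(rho0, alpha):
    """Invert rho_0 = rho_s + alpha*rho_s**2 for the soft-component density."""
    return (np.sqrt(1.0 + 4.0 * alpha * rho0) - 1.0) / (2.0 * alpha)

alpha = 0.006                               # illustrative O(0.005) hard/soft coupling
rho0  = np.array([2.5, 5.0, 10.0, 20.0])    # measured charge densities dn_ch/deta
rho_s = soft_density(rho0, alpha)
rho_h = rho0 - rho_s                        # jet-related hard component
print(np.round(rho_h / rho_s**2, 4))        # recovers alpha, i.e. rho_h ∝ rho_s^2
```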
§ JET CONTRIBUTIONS TO A-A SPECTRA
The TCM provides an important reference for A-A collisions. It is reasonable to expect more-peripheral A-A collisions to be described by the same basic model elements modulo the Glauber model of A-A collision geometry – dependence on number of participant nucleons N_part and binary collisions N_bin, with ν ≡ 2N_bin / N_part as the mean number of binary collisions per participant. However, it has been established that jets are strongly modified in more-central A-A collisions. The TCM hard-component model should then be altered to accommodate such changes.
§.§ Identified-hadron spectrum evolution
The TCM for p-p spectra can be extended to describe spectrum data from A-A collisions (and identified hadrons) as in Ref. <cit.>. The TCM reference for A-A collisions, corresponding to Glauber linear superposition (GLS) of isolated N-N collisions, is based on the result in Eq. (<ref>) evaluated for inelastic N-N collisions as represented by
ρ̅_0(y_t) ≈ S_NN(y_t) + H_NN(y_t).
By hypothesis the TCM soft component in A-A collisions should scale with N_part/2 and the hard component should scale with N_bin, leading to the A-A spectrum TCM
ρ̅_0(y_t,b) ≈ (N_part / 2) S_NN(y_t) + N_bin H_AA(y_t,b)
2/N_partρ̅_0(y_t,b) ≈ S_NN(y_t) + ν H_AA(y_t,b),
where the soft component is assumed to be invariant with A-A centrality, but the jet-related hard component may vary substantially and is then a principal object of study.
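A compact numerical sketch of the A-A TCM follows. The soft (Lévy) and hard (Gaussian) model shapes and their parameter values here are illustrative stand-ins, not the measured parametrizations from the cited references:

```python
import numpy as np

M_PI = 0.1396

def S_NN(y_t, T0=0.145, n0=12.8):
    """Soft component: Levy distribution on m_t mapped to y_t (illustrative parameters)."""
    p_t = 0.5 * M_PI * np.exp(y_t)          # invert y_t ~ ln(2 p_t / m_pi)
    m_t = np.hypot(p_t, M_PI)
    return (1.0 + (m_t - M_PI) / (n0 * T0))**(-n0)

def H_NN(y_t, ybar=2.66, sig=0.45, amp=0.4):
    """Hard component: Gaussian on y_t with mode near 1 GeV/c (illustrative parameters)."""
    return amp * np.exp(-0.5 * ((y_t - ybar) / sig)**2)

def spectrum_AA(y_t, nu, H_AA=H_NN):
    """(2/N_part) rho_0(y_t;b) = S_NN + nu*H_AA; the GLS reference sets H_AA = H_NN."""
    return S_NN(y_t) + nu * H_AA(y_t)

y = np.linspace(1.0, 4.5, 8)
print(spectrum_AA(y, nu=1.0))   # N-N (p-p) limit
print(spectrum_AA(y, nu=6.0))   # central-collision GLS reference
```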
Figure <ref> shows identified-pion and -proton spectra (solid curves) for five centrality classes of 200 GeV Au-Au collisions plotted vs rapidity y_t(π) with pion mass assumed <cit.>. The rapidity variable is used in this case simply as a logarithmic momentum variable y_t ≈ ln(2 p_t / m_π) but with a well-defined zero; that choice is explained below. Also plotted with the pion spectra is the unidentified-hadron spectrum for 200 GeV NSD p-p collisions (points) <cit.>. The TCM soft components S_NNx (dotted) are defined as the limits of normalized spectrum data as N_part → 0, equivalent to the definition for p-p collisions. The pion soft component is consistent with the soft component inferred from unidentified hadrons. The pion hard component H_NN is also consistent with the p-p analysis. The proton soft and hard components have the same algebraic structure, but model parameters are adjusted to accommodate peripheral data as described below. The dash-dotted curves are GLS reference spectra for ν = 1 and 6 (A-A limiting cases).
Figure <ref> shows data hard components for identified pions and protons (solid) from five centrality classes of 200 GeV Au-Au collisions in the form ν H_AA per Eq. (<ref>). Also plotted are the hard component for unidentified hadrons from 200 GeV NSD p-p collisions (points) and hard-component models H_NNx (dashed). The dotted curves are GLS references corresponding to the five centrality classes assuming H_AA = H_NN (i.e. linear scaling with factor ν). Relative to those reference curves the data exhibit substantial suppression at higher p_t for more-central collisions, as inferred from the conventional ratio measure R_AA <cit.>. However, substantial enhancement at lower p_t is a new feature not revealed by R_AA data.
The structure of the proton hard component is a surprise. The mode for N-N (≈ p-p) collisions appears near p_t = 1 GeV/c as for pions, but the peak width is substantially less, and there is a significant difference in the “power-law” slope at high p_t as described below. However, it is most notable that hard-component maximum values are essentially the same for protons and pions, in contrast to the soft components. The similarity of the two hard components on y_t is the main motivation for adopting y_t(π) ≈ ln(2 p_t / m_π) as the independent variable <cit.>.
§.§ Spectrum ratios for identified hadrons
Figure <ref> shows ratios r_AA = H_AA / H_NN for pions and protons from five centrality classes of 200 GeV Au-Au collisions (curves of several line styles). Also plotted are hard-component data in ratio to the pion H_NN reference (points). Several features are notable. The peripheral pion and proton data for 60-80% central collisions indicate no jet modification (r_AA = 1), a result consistent with the observation from jet-related 2D angular correlations that below a sharp transition (ST) in jet characteristics near 50% of the total cross section jets remain unmodified in 200 GeV Au-Au collisions <cit.>.
Above the ST there is increasing suppression at higher p_t, with a saturation value ≈ 0.2 for both pions and protons in central collisions as observed with conventional ratio parameter R_AA. However, because R_AA as defined is a ratio of entire spectra, including soft components, the evolution of jet-related H_AA below p_t = 3 GeV/c (y_t ≈ 3.75) is visually inaccessible. In contrast, ratio r_AA including only hard components reveals large enhancements of jet-related hadron yields at lower p_t, tightly correlated on centrality with suppressions at higher p_t. But whereas enhancement for pions extends below 0.5 GeV/c (y_t = 2), enhancement for protons peaks near 2.5 GeV/c (y_t ≈ 3.6), and below p_t = 1 GeV/c the proton data for all centralities remain consistent with the reference (r_AA = 1) within the detector acceptance.
These jet-related hard-component trends have important consequences for other (e.g. spectrum-ratio) measures and for the flow narrative. In particular, whereas conjectured radial flow should boost all hadron species to higher p_t in proportion to hadron mass, the trends in Fig. <ref> show that while protons appear boosted to higher p_t relative to the hard-component mode, pions move to lower p_t.
Figure <ref> (left) shows a conventional ratio comparison of proton and pion spectra – the proton-to-pion ratio – for two centralities of 200 GeV Au-Au collisions (points) <cit.>. The curves are derived from a TCM for identified-hadron spectra including hard-component modification in more-central collisions <cit.> that describes the spectrum evolution in Fig. <ref>. Those curves were not fitted to the ratio data.
Figure <ref> (right) shows the curves in the left panel replotted on y_t to improve visibility of the low-p_t region. Since the TCM soft and hard components contributing to those spectrum ratios are known, their ratios can be plotted separately (dotted and dash-dotted curves respectively). The soft component by hypothesis does not vary with A-A centrality. The hard-component ratio for peripheral A-A (and therefore p-p) collisions is unity at the mode (y_t = 2.7, p_t ≈ 1 GeV/c). It falls off on either side of the mode because of the peak-width difference, but increases at larger y_t because the proton power-law exponent is smaller (the proton spectrum high-p_t tail is harder), all consistent with the discussion of Fig. <ref> above.
The hard-component ratio maximum for central Au-Au collisions has a substantially larger value, and the mode on p_t shifts up to ≈ 3 GeV/c. Protons are strongly enhanced relative to pions above the mode, but pions are strongly enhanced relative to protons below the mode, a feature concealed by the conventional R_AA ratio and plotting format. Comparison with the solid curve matching the ratio data reveals that the central-collision data peak is dominated by the jet-related hard component. In contrast, the ratio peak for peripheral collisions is dominated by the soft component; the hard component only influences the peripheral ratio above 7 GeV/c. Note that the ratio of hard to soft hadron production in A-A collisions increases with centrality according to parameter ν with no jet modification (5-fold increase), and by an additional factor 3 due to jet modification <cit.>. Thus, from peripheral to central collisions the hard/soft ratio increases by about 3 × 5 = 15-fold. Although the peaks in the left panel appear similar and suggest a common mechanism, this TCM analysis reveals that they represent distinct soft and hard hadron production mechanisms. Comparisons of spectrum-ratio data with in-vacuum FFs and nonjet (soft) recombination or coalescence hadronization models as in Ref. <cit.> are likely misleading.
§.§ Hadron yields and p̅_t vs A-A centrality
Just as for p-p collisions, differential spectrum structure for A-A collisions can be supplemented by trends for integrated spectrum yields and mean p̄_t values. For instance, p-p hadron yields vs multiplicity were discussed in Sec. <ref>, where observed dijet production appears inconsistent with the eikonal approximation. The centrality dependence of p_t-integral hadron yields (or mean angular densities) in A-A collisions is similarly of interest.
Figure <ref> (left) shows integrated unidentified-hadron densities in the form (2/N_part) dn_ch/dη (solid points) reconstructed from spectrum data in Fig. <ref> <cit.>. Due to multiplicity fluctuations the most-central point of such a trend is typically high by an amount controlled by the detector angular acceptance: the excess is less for a larger acceptance <cit.>. The open point is a reference value from NSD p-p collisions. A simple TCM with fixed constant x = 0.095 from Ref. <cit.> is represented by the dash-dotted line. A color-glass-condensate (CGC) trend approximated by 0.9 ln(8ν) is shown as the dashed curve <cit.>. The hatched region labeled GLS is a reference TCM extrapolation from the p-p TCM of Sec. <ref> assuming no jet modification in A-A collisions. The solid curve is a TCM with varying x based on dijet angular correlations <cit.> combined with the spectrum TCM from Ref. <cit.> that models jet modification in terms of spectrum hard components as in Fig. <ref> (right) below. The hatched band labeled ST marks the “sharp transition” in jet-related angular-correlation characteristics reported in Ref. <cit.> (see Sec. <ref>). Quantitative correspondence of those results with yield data from spectra provides additional compelling evidence that the TCM spectrum hard component remains jet-related in all A-A collisions <cit.>.
Figure <ref> (right) shows hadron production vs centrality for 2.76 TeV Pb-Pb collisions (inverted triangles) and a reference value for NSD p-p collisions (upright triangle) <cit.>. The solid curve is the 200 GeV solid curve from the left panel extrapolated to the higher energy via two modifications: the soft component S_NN is multiplied by 1.87 = log(2760/10) / log(200/10), motivated by the observed soft-component energy trend ∝ log(√(s_NN) / 10 GeV) <cit.>, and the hard component is then multiplied by another factor 1.87 corresponding to n_h ∝ n_s^2 for p-p collisions <cit.>. Whereas the centrality evolution of jet correlation structure for 200 GeV Au-Au collisions is consistent with a ST near ν = 3 (50% fractional cross section) <cit.>, the Pb-Pb data suggest that the ST at the higher energy may have shifted down to ν ≈ 2. Otherwise the same TCM describes data well at two widely-spaced energies, depending only on a log(s/s_0) QCD energy trend.
Figure <ref> (left) shows ensemble-mean p̄_t data from 5 TeV p-Pb and 2.76 TeV Pb-Pb collisions (points) vs charge density ρ̄_0 = n_ch / Δη <cit.>. The solid and dotted curves are TCMs for the respective collision systems <cit.>. p̄_ts' ≈ 0.51 GeV/c (hatched band) is the soft-component value corresponding to a p_t-acceptance lower limit near 0.2 GeV/c, whereas p̄_ts ≈ 0.38 GeV/c is a universal feature of any spectrum extrapolated down to p_t = 0 as in Fig. <ref> (right). The GLS reference (dashed curve) reflects the eikonal approximation assumed for the A-A Glauber model and no jet modification. The dash-dotted curve is the TCM for the 5 TeV p-Pb data, consistent with Fig. <ref> (right) for 200 GeV p-p data and inconsistent with the eikonal approximation. The p-Pb data (open points) suggest a smooth transition from a non-eikonal p-p-like trend to an eikonal A-A trend with increasing n_ch (centrality).
Figure <ref> (right) shows evolution of the measured TCM spectrum hard component for 200 GeV Au-Au collisions from peripheral (open points) to central (solid points) collisions. The corresponding dashed and solid curves are derived in Ref. <cit.> from jet measurements. The dash-dotted curve is a GLS reference (no jet modification) for central Au-Au. It seems likely that spectrum hard-component evolution in 2.76 TeV Pb-Pb collisions is similar but with some quantitative differences.
According to the TCM, the GLS p̄_t trend for 2.76 TeV Pb-Pb in the left panel should be a weighted average of soft-component value p̄'_ts ≈ 0.5 GeV/c (for p_t > 0.2 GeV/c) and hard-component value p̄_th ≈ 1.7 GeV/c (for 2.76 TeV p-p collisions <cit.>), the average increasing with increasing fraction of jet-related hard-component hadrons. The data trend differs from GLS through two consequences of jet modification: (a) the hard-component yield increases by a factor 3 and (b) the hard-component mode decreases substantially from 1.7 GeV/c, as illustrated in Fig. <ref> (right) and Fig. <ref> (right). The combination results in a net increase of data p̄_t over the GLS reference.
§.§ Alternative spectrum models
Section <ref> presents an alternative to the spectrum TCM in the form of a “power-law” or Tsallis distribution approximating the TCM soft component alone. The fit results indicate poor fit quality, and the large parameter variations are not physically interpretable. The blast-wave model is a popular alternative for A-A spectra in which deviations from some reference spectrum are assumed to result from radial flow of a bulk medium.
The assumed reference is usually a Maxwell-Boltzmann (M-B) exponential <cit.>, but a power-law or Tsallis distribution has been adopted in recent studies <cit.>. The model parameters include slope parameter T and mean radial speed ⟨β_t ⟩.
Blast-wave fits may be restricted to smaller p_t intervals lying below 2 GeV/c under the assumption that such intervals exclude jet contributions and that any spectrum variation is then determined by flows: “In central Au+Au collisions the flattening of the spectra [below 2 GeV/c] is likely dominated by collective transverse radial flow, developed due to the large pressure buildup in the early stage of heavy-ion collisions” <cit.>.
Figure <ref> (left) illustrates an early radial-flow analysis of spectrum data from 19 GeV S-S collisions (solid points) <cit.>. The corresponding M-B distribution is shown by the dash-dotted line. A p-p spectrum for 17 GeV (open points) and a TCM soft-component Lévy distribution (dashed) with universal T_0 = 145 MeV and with exponent n_0 = 17 adjusted to accommodate the S-S data are shown for comparison (n_0 = 27 describes the p-p spectrum <cit.>). Deviation of the S-S data from the M-B reference is interpreted to represent radial flow with ⟨β_t ⟩ ≈ 0.25 (β_s ≈ 0.5 is the maximum for a radial β_t distribution). The same fit model applied to a 200 GeV p-p spectrum returns a similar ⟨β_t ⟩ value as noted below.
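The contrast between the Lévy soft component and an M-B reference is easily quantified. The short sketch below uses the T_0 and n_0 values quoted above and prints the Lévy/M-B ratio vs m_t; the rising tail is the excess that a blast-wave fit would attribute to radial flow:

```python
import numpy as np

M = 0.1396              # pion mass [GeV/c^2]
T0, N0 = 0.145, 17.0    # Levy parameters quoted above for the S-S comparison

mt   = np.linspace(M, 2.0, 6)                      # transverse mass grid [GeV/c^2]
levy = (1.0 + (mt - M) / (N0 * T0))**(-N0)          # TCM soft-component shape
mb   = np.exp(-(mt - M) / T0)                       # Maxwell-Boltzmann reference
print(np.round(levy / mb, 2))                       # ratio grows with m_t
```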
Figure <ref> (right) shows published ⟨β_t ⟩ values derived from fits to 62 GeV (open points) and 200 GeV (solid points) Au-Au spectra for several collision centralities <cit.>, plotted vs Glauber centrality parameter ν. The location of the ST inferred from jet-related angular correlations <cit.> is indicated by the hatched band. To the left of that point A-A collisions are effectively transparent linear superpositions (GLS) of N-N collisions <cit.>, but to the right of that point jet structure shows substantial modification (“jet quenching”). It is notable that the inferred ⟨β_t ⟩ data are not zero in the A-A transparency interval (or for p-p collisions). The ⟨β_t ⟩ data instead increase more rapidly in a centrality interval where rescattering is less likely based on jet data, but increase less rapidly in an interval where jet modification is substantial, suggesting copious rescattering is more likely. Results in Fig. <ref> (right) interpreted to indicate radial flow are in direct conflict with the results in Fig. <ref> consistent with measured jet properties. In the blast-wave model, jet-related spectrum structure described by the TCM hard component is in effect reassigned to radial flow.
§ JETS AND ANGULAR CORRELATIONS
Angular correlation methods are introduced briefly in Sec. <ref>. A measured pair density ρ(x_1,x_2) can be compared with some reference ρ_ref(x_1,x_2) = ρ̅_0(x_1)ρ̅_0(x_2) ≈ρ_mix(x_1,x_2) to define a correlated-pair density
Δρ(p_t1,p_t2,η_Δ,ϕ_Δ) ≡ρ_ref (ρ' / ρ_mix - 1),
where ρ' is an uncorrected pair density and ρ_mix is a mixed-pair reference also uncorrected. The correlated-pair density can be normalized to form a per-particle measure Δρ / √(ρ_ref)≡√(ρ_ref)(ρ' / ρ_mix - 1) →ρ̅_0(ρ' / ρ_mix - 1) or a per-pair measure Δρ / ρ_ref =(ρ' / ρ_mix - 1) <cit.>.
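A minimal sketch of the per-particle measure follows, assuming sibling- and mixed-pair histograms on (η_Δ, φ_Δ) as inputs; binning and normalization details of the actual analyses are more involved:

```python
import numpy as np

def delta_rho_per_particle(sib, mix, rho0):
    """Per-particle correlation measure rho0*(sib/mix - 1) from sibling- and
    mixed-pair histograms, each normalized here to unit total."""
    sib = sib / sib.sum()
    mix = mix / mix.sum()
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(mix > 0, sib / mix, 1.0)
    return rho0 * (ratio - 1.0)

rng = np.random.default_rng(0)
sib = rng.poisson(1000, size=(25, 25)).astype(float)     # toy sibling pairs
mix = rng.poisson(1000, size=(25, 25)).astype(float)     # toy mixed-event reference
print(delta_rho_per_particle(sib, mix, rho0=2.5).std())  # ~ statistical noise only
```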
§.§ A TCM for two-particle correlations
In addition to angular correlations on (η_Δ,ϕ_Δ), two-particle correlations can be studied on transverse rapidity as (y_t1,y_t2) → y_t × y_t, where they are directly comparable with SP spectra on y_t and the y_t-spectrum TCM.
Figure <ref> (left) shows y_t × y_t correlations from 200 GeV NSD p-p collisions for p_t ∈ [0.15,6] GeV/c <cit.>. The two peaked features are identified with TCM soft and hard components as follows. The lower-y_t peak falls mainly below 0.5 GeV/c (y_t < 2) and consists exclusively of unlike-sign (US) pairs. The corresponding angular correlations consist of a narrow 1D peak on η_Δ centered at the origin. That combination suggests longitudinal fragmentation of low-x gluons to charge-neutral hadron pairs nearby on η, consistent with TCM spectrum soft component S_pp(y_t). The higher-y_t peak falls mainly above 0.5 GeV/c with mode near p_t = 1 GeV/c (y_t ≈ 2.7), corresponding to SP spectrum hard component H_pp(y_t) and with charge structure (LS or US) depending on angular constraints.
Figure <ref> (right) shows angular correlations for the same collision system with the condition p_t ≈ 0.6 GeV/c, i.e. near the lower boundary of the hard component in the left panel. Despite the low hadron momentum, the observed angular correlations exhibit the structure expected for jets: a SS 2D peak representing intrajet correlations and an AS 1D peak representing interjet (back-to-back jet) correlations. The SS peak is dominated by US pairs while the AS peak shows no charge preference, consistent with fragmentation of back-to-back charge-neutral gluons.
§.§ 2D angular correlations and model fits
The general structure of 2D angular correlations was established for Au-Au collisions in Refs. <cit.> and for p-p collisions in Ref. <cit.>. Quantitative discrimination was achieved among jets, a nonjet (NJ) azimuth quadrupole and Bose-Einstein correlations (BEC). 2D angular correlations on (η_Δ,ϕ_Δ) have a simple structure modeled by a few 1D and 2D functions <cit.>.
The six-element fit model of Ref. <cit.> includes eleven model parameters but describes more than 150 data degrees of freedom for typical 25× 25-bin data histograms on (η_Δ,ϕ_Δ). Model parameters are thus strongly constrained. The NJ quadrupole component of angular correlations can be extracted accurately via such fits. For per-particle 2D angular correlations as Δρ / √(ρ_ref) the NJ quadrupole is represented by A_Q{2D}≡ρ̅_0 v_2^2{2D} <cit.>.
Figure <ref> shows 2D angular correlations from two of seven multiplicity classes of 200 GeV p-p collisions (index values n = 1, 6, dn_ch/dη ≈ 1.8, 15, for left and right panels respectively) <cit.>. Based on 2D model fits, contributions from a soft component (1D peak on η_Δ) and BEC (narrow 2D peak at the origin) have been subtracted. What remains is a jet contribution (broad SS 2D peak at the origin and AS 1D peak on azimuth) and a NJ quadrupole contribution manifested in the right panel by increased curvature of the AS 1D peak, reduced curvature of the SS background for |η_Δ| > 1 and apparent narrowing of the SS 2D peak on azimuth. Systematic variations of several correlation components are presented in Ref. <cit.>. The main message of such studies is that MB dijets dominate correlation structure.
Figure <ref> shows 2D angular correlations from the most peripheral (left) and most central (right) 200 GeV Au-Au collisions. The statistical errors for those histograms are about 4.5 times larger than for the high-statistics p-p data in Fig. <ref>. The same six-element 2D fit model was applied to the data <cit.>. The peripheral data are approximately equivalent to NSD p-p data similar to Figure <ref> (left) (but before subtraction of two model elements). The NJ quadrupole in those panels is negligible compared to both the jet-related structure (in both panels) and the soft component (1D peak on η_Δ at the origin in the left panel). Note the narrower BEC 2D peak atop the broader jet-related SS 2D peak in each panel.
Figure <ref> shows fitted parameter values vs Glauber centrality parameter ν for 2D model fits to 200 GeV Au-Au data as in Fig. <ref>, including (a) the SS 2D peak amplitude A_1, (b) the SS 2D peak η_Δ width σ_η_Δ and (c) the AS 1D peak amplitude A_D <cit.>. The data systematics reveal two intervals on ν with markedly different behavior: (i) variation of the three parameters consistent with Glauber linear superposition (GLS) for ν < 3, equivalent to A-A transparency, and (ii) large increases in the rate of variation for ν > 3, where ν ≈ 3 corresponds to a fractional cross section σ / σ_0 ≈ 0.5. The rapid change from one trend to the other is characterized as a “sharp transition” or ST in Ref. <cit.>. Note that the SS peak amplitudes in panel (a) for 62 GeV coincide with those for 200 GeV when rescaled by factor 1/0.63, representing a ln(√s / 10 GeV) energy trend for jet-related structure that persists above the ST <cit.>. The data in panels (b) and (c) are not rescaled. The close correspondence among the three correlation parameters strongly suggests that MB dijets remain the dominant source of correlation structure in Au-Au collisions, even the most-central collisions, although jets are substantially modified there as demonstrated by spectrum data (e.g. Fig. <ref>).
§.§ Trigger-associated 1D correlation analysis
Data and analysis methods presented above correspond to MB dijets with no conditions imposed on jet structure. Model fits to 2D angular correlations integrated over the entire acceptance arguably extract almost all angular-correlation information carried by primary particle data. Alternative methods (a) impose trigger-associated conditions on data, (b) are typically confined to 1D projections onto azimuth and (c) subtract model-dependent backgrounds to arrive at nominal jet-related structure. Two examples are considered below.
Figure <ref> shows a highly-cited (over 700 citations) trigger-associated analysis of 200 GeV Au-Au azimuth correlations compared to p-p and d-Au data <cit.>. Trigger-associated conditions are p_t,trig ∈ [4,6] GeV/c and p_t,assoc ∈ [2 GeV/c,p_t,trig], admitting only a tiny fraction of all jet fragments observed in spectrum hard components (as in Fig. <ref>). In panel (b) backgrounds have been subtracted, including a v_2 contribution (for Au-Au data) based on published v_2 data that may include a jet contribution in the form of “nonflow.” The principal message is that the “away-side jet” is suppressed (disappears) in central Au-Au collisions. The suppression is attributed to absorption of jets in a dense medium.
Indications of some form of jet modification (e.g. reduction of the AS peak amplitude) are clearly apparent, but the full implications are not clear. The AS peak “disappearance” suggested by Fig. <ref> is consistent (within statistics) with the factor-5 high-p_t reduction in SP spectra indicated in Fig. <ref>, but there is no information about changes in jet structure (possible enhancements) at lower p_t as in Fig. <ref>. Imposition of a high-p_t trigger on one jet may tend to bias its partner jet toward softer fragmentation – fewer higher-p_t fragments but more lower-p_t fragments – independent of any medium effects. One cannot then conclude from such biased data that the AS jet has “disappeared.” More can be learned by relaxing the associated-particle cut.
Figure <ref> (left) shows a similar trigger-associated analysis of 200 GeV Au-Au azimuth correlations with the associated-particle condition extended down to the detector-acceptance lower bound: p_t,assoc ∈ [0.15 GeV/c,p_t,trig] <cit.>. The result (solid points) is compared with p-p data treated similarly (open points).
A combinatoric background to be subtracted from “raw” data is defined by published v_2 data and a ZYAM (zero yield at minimum) principle based on the ad hoc assumption that SS and AS jet peaks never overlap on azimuth. The data minimum value after background subtraction is then defined as the zero for both Au-Au and p-p data.
Consequences of the updated analysis are two-fold: (a) the AS peak remains substantial, it has not “disappeared” as reported in Ref. <cit.>, but (b) the AS peak for central Au-Au collisions appears to be softened and broadened compared to individual N-N (p-p) collisions – interpreted to signal “progressive equilibration” of the AS jet in the medium and thermalization within the medium.
An alternative treatment of the same basic correlation data leads to different conclusions. The Au-Au data are fitted with a 1D projection of the 2D fit model from Sec. <ref> described in Refs. <cit.>, including a 1D Gaussian for the SS peak (dash-dotted), a dipole term for the AS peak (dashed) and a quadrupole term that should correspond to v_2 data (dotted). The inferred negative value for 2v_2^2 is notable. The fit to the Au-Au data is the bold solid curve in Figure <ref> (left) that describes those data within statistical uncertainties. The fit parameters then estimate the actual background subtracted in Ref. <cit.>. The ZYAM offset is 1.37 (lower dash-dotted line) and the assumed background v_2^2 value is |2 v_2^2| ≈ 0.43 (amplitude of dotted curve). The nonjet quadrupole amplitude for 0-5% central 200 GeV Au-Au collisions inferred from model fits to 2D angular correlations is essentially zero (upper limit consistent with data uncertainties) <cit.>. The ZYAM |2 v_2^2| ≈ 0.43 value can then be compared with the quadrupole component of the SS jet peak 2 v_2^2{2} ≈ 0.25 × 3.36 = 0.84 <cit.> and an assumption that the “true” background value is v^2_2 = (v^2_2{2} + v^2_2{4})/2 ≈ v^2_2{2}/2 [Ref. <cit.>, Eqs. (9) and (10)], so the jet-related value for central Au-Au is 2v_2^2 ≈ 0.42. It is then likely that the 2v_2^2 value adopted for the ZYAM subtraction is dominated by the jet-related SS-peak Fourier decomposition <cit.>. In essence, the quadrupole component of the SS jet peak is subtracted from the raw data to produce a distorted result.
Figure <ref> (right) shows the same data with the ZYAM background subtraction in effect reversed. The Au-Au data are described within uncertainties (solid) by an SS Gaussian (dash-dotted) plus AS dipole (dashed). The p-p data (open points) have been treated similarly, except with no quadrupole term, and are also described within uncertainties by the same basic model (bold dotted). Aside from the amplitude differences, the Au-Au SS peak is broader (σ_ϕ ≈ 0.7) than the p-p SS peak (σ_ϕ ≈ 0.5) (consistent with 2D correlation trends reported in Ref. <cit.>). The p-p and Au-Au AS peaks are described by the same dipole model, implying equivalent large widths. The peak amplitudes indicate that the number of jet fragments per trigger is about 70% larger in central Au-Au than in p-p collisions, consistent with the lower-p_t enhancements evident in Fig. <ref>. The SS and AS jet peaks strongly overlap in all collision systems (except for very high p_t cuts), consistent with 2D model fits from Ref. <cit.> and as illustrated in Figs. <ref> and <ref>. Jet-related correlation analysis on 1D azimuth based on ZYAM background subtraction is thus strongly inconsistent with data properties derived from other contexts.
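The 1D fit model used here is straightforward to implement. A minimal sketch follows, with a periodic SS Gaussian, an AS dipole written simply as A_D cos(ϕ − π) (normalization conventions vary) and a quadrupole term, applied to synthetic data:

```python
import numpy as np
from scipy.optimize import curve_fit

def azimuth_model(phi, A0, A1, sig, AD, AQ):
    """Constant offset + periodic SS Gaussian + AS dipole + quadrupole."""
    ss = sum(np.exp(-0.5 * ((phi - 2.0*np.pi*k) / sig)**2) for k in (-1, 0, 1))
    return A0 + A1*ss + AD*np.cos(phi - np.pi) + AQ*np.cos(2.0*phi)

phi  = np.linspace(-np.pi/2, 3*np.pi/2, 25)   # typical azimuth-difference binning
rng  = np.random.default_rng(1)
data = azimuth_model(phi, 0.0, 1.0, 0.7, 0.5, 0.0) + 0.02*rng.normal(size=phi.size)
popt, _ = curve_fit(azimuth_model, phi, data, p0=[0.0, 1.0, 0.5, 0.3, 0.0])
print(np.round(popt, 3))   # recovers SS amplitude/width, AS dipole, quadrupole
```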
§.§ Bayesian analysis of azimuth correlations
It could be argued that the results in the previous subsection depend on a chosen fit model and are therefore arbitrary. Bayesian analysis provides neutral criteria for comparison of data models based on competition between goodness of fit (e.g. the χ^2 measure) and cost of model complexity (measured by information parameter I), combined in the evidence E with -2ln(E) = χ^2 + 2I <cit.>.
Figure <ref> shows 2D angular-correlation data from 0-5% central 200 GeV Au-Au collisions as reported in Ref. <cit.>, projected onto 1D azimuth difference ϕ_Δ. A calculated distribution integral has been subtracted from the data. The dashed curve is a fit to the data with the same 1D model used in Figure <ref>. A fit with a Fourier-series (FS) model would achieve the same apparent result given a sufficient number of terms. Which model should be preferred?
Figure <ref> shows negative log-evidence -2LE ≡ -2ln(E) values for several competing models, where Bayesian evidence E measures the competition between goodness of fit (a better fit increases the evidence) and cost of model complexity (more complexity decreases the evidence). With increasing model complexity (e.g. number of parameters) improved fit quality may reduce χ^2, but at the cost of increasing information I. The entry labeled “Model” (solid square), consisting of SS Gaussian and AS dipole as applied to the data in Fig. <ref>, achieves the minimum log-evidence value (largest evidence). Adding an m = 2 quadrupole term (solid diamond) increases -2LE (decreases evidence E) because the additional model parameter does not achieve a compensating improvement in fit quality (consistent with results from Refs. <cit.>). The same is true for other additions (m = 3 sextupole and m = 4 octupole terms).
The -2LE trend for a Fourier cosine-series (FS) model (solid dots) achieves a minimum at four elements (including an m = 3 sextupole term) and then increases monotonically. In all cases the FS model (commonly interpreted to represent flows) is strongly rejected by Bayesian analysis compared to the simple two-peaked model (consistent with MB dijet production as in Fig. <ref>). In Ref. <cit.> the large difference is traced to the predictivity of a model. The term in the negative log evidence that increases with model complexity is the information I gained by a model upon acquisition of new data.[Information can be described as a logarithmic measure of volume reduction. In this case “volume” refers to some part of a fit-model parameter space. Information I compares the volume allowed before data (prior) to that allowed after data (posterior) <cit.>.] A fixed model or one with few parameters may gain little or no information from the addition of new data; it is thus highly predictive and might be falsified by new data. Predictivity and falsifiability are equivalent concepts. In contrast, a Fourier series on periodic azimuth has no predictivity: it can describe any data distribution. Introduction of new data thus adds substantial information to the FS model in the form of parameter adjustments that lead to a corresponding increase in the negative log evidence. In effect, the two-peak model predicts that any azimuth distribution from high-energy nuclear collisions should include peaks at 0 (SS) and π (AS), with the SS peak substantially narrower than the AS peak (as expected for MB dijets). New data should modify only the SS peak width and the peak amplitudes. Any data with different features would falsify that model but would remain well described by a FS model.
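The evidence competition can be illustrated schematically. In the sketch below the information term is approximated by a crude BIC-like penalty (k/2)ln N; this is a stand-in for the full computation in the cited reference, not a reproduction of it:

```python
import numpy as np
from scipy.optimize import curve_fit

rng   = np.random.default_rng(2)
phi   = np.linspace(-np.pi/2, 3*np.pi/2, 25)
sig_d = 0.05
truth = (sum(np.exp(-0.5*((phi - 2*np.pi*k)/0.7)**2) for k in (-1, 0, 1))
         + 0.5*np.cos(phi - np.pi))
data  = truth + sig_d*rng.normal(size=phi.size)

def two_peak(phi, A1, w, AD):        # SS Gaussian (periodic) + AS dipole
    ss = sum(np.exp(-0.5*((phi - 2*np.pi*k)/w)**2) for k in (-1, 0, 1))
    return A1*ss + AD*np.cos(phi - np.pi)

def fourier(phi, a0, *a):            # cosine series with constant offset
    return a0 + sum(a[m]*np.cos((m + 1)*phi) for m in range(len(a)))

def neg2_log_evidence(model, p0):
    popt, _ = curve_fit(model, phi, data, p0=p0)
    chi2 = np.sum(((data - model(phi, *popt)) / sig_d)**2)
    info = 0.5*len(popt)*np.log(phi.size)   # crude BIC-like stand-in for I
    return chi2 + 2.0*info

print(neg2_log_evidence(two_peak, [1.0, 0.5, 0.3]))   # two-peak (jet) model
print(neg2_log_evidence(fourier,  [0.2]*7))           # six-term Fourier series
```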
§.§ Predicting trigger-associated correlations
Trigger-associated (TA) analysis of jet structure in 1D azimuth distributions, such as summarized in Sec. <ref>, follows precedents established at lower energies during a period when the jet concept was not well established and detector technology was quite limited <cit.>. With much higher collision energies, established jet phenomenology, higher-statistics p-p and A-A data and much-improved particle detectors, analysis of the full pair-momentum space is both possible and necessary to access all information carried by particle data. Correlation data with and without imposed conditions should be combined quantitatively with SP-spectrum and yield data to provide the strongest possible challenge to competing theories and better inductive understanding of underlying mechanisms.
This subsection summarizes a study in Ref. <cit.> where the pair density on asymmetric TA rapidity space (y_t,trig,y_t,assoc) for 200 GeV p-p collisions is predicted based on jet data and the TCM for SP spectra. When symmetrized, the distribution on (y_t,trig,y_t,assoc) can be compared with the distribution on y_t × y_t in Fig. <ref> (left).
In a MB TA analysis all collision events and all hadrons within a collision are accepted for analysis. A “trigger particle” is the single hadron in each collision with the highest p_t; all other hadrons are “associated.” Some fraction of all trigger hadrons may be related to jets (the leading detected hadron in a jet serving as proxy for the leading parton), and some fraction of all associated hadrons may be fragments from a triggered jet or its back-to-back partner jet. The TA pair distribution on (y_t,trig,y_t,assoc) resulting from such conditions includes soft-soft (SS), soft-hard (SH) and hard-hard (HH) pair combinations. It is assumed that the HH contribution is amenable to prediction based on measured jet properties. A full analysis involves two parts: (a) predict the HH contribution to MB TA correlations <cit.> and (b) extract the HH contribution from measured MB TA correlations for direct comparison based on the SP spectrum TCM <cit.>. The full derivations are summarized schematically below.
(a) The HH component of TA correlations can be predicted from measured conditional FFs D_u(y|y_max) and jet (parton) spectrum Ŝ_p(y_max) = (1/σ_j) d^2σ_j/dy_maxdη introduced in Sec. <ref> <cit.>. FF distribution D_u(y|y_max) is first decomposed into a trigger component Ŝ_t(y_trig|y_max) and an associated component D_a(y_assoc|y_max) based on void probability G_t(y|y_max). The void probability is defined as the Poisson probability that no fragment appears for y > y_trig, based on the FF integral over that interval; it is just the probability that a fragment at y = y_trig is the trigger particle for given y_max. Trigger spectrum Ŝ_t(y_trig) is then obtained by convoluting Ŝ_t(y_trig|y_max) with the measured jet spectrum Ŝ_p(y_max). An intermediate conditional spectrum is obtained from Bayes' theorem as Ŝ_p(y_max|y_trig) = Ŝ_t(y_trig|y_max)Ŝ_p(y_max) / Ŝ_t(y_trig). The associated spectrum D_a(y_assoc|y_max) is the complement of Ŝ_t(y_trig|y_max) in D_u(y|y_max). The associated fragment distribution D_a(y_assoc|y_trig) is obtained by convoluting D_a(y_assoc|y_max) with Ŝ_p(y_max|y_trig), thus eliminating jet rapidity y_max. The TA HH distribution per jet derived from eventwise-reconstructed jet data is then
F_at(y_assoc,y_trig) = Ŝ_t(y_trig) D_a(y_assoc|y_trig).
Figure <ref> (left) shows HH distribution F_at(y_assoc,y_trig) for 200 GeV p-p collisions based on the measured FFs and 200 GeV jet spectrum summarized in Sec. <ref>. The mode on y_trig corresponds to p_t ≈ 1.2 GeV/c, and the mode on y_assoc corresponds to p_t ≈ 0.6 GeV/c. Data from eventwise-reconstructed jets thus predict that the great majority of jet-related TA pairs appear near 1 GeV/c.
Figure <ref> (right) shows 1D projections D_a(y_assoc) (solid) and Ŝ_t(y_trig) (dash-dotted). The dashed curve shows D_a(y_assoc) with F_at(y_assoc,y_trig) extrapolated to large y_trig (beyond the 2D plot boundaries at left).
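Before turning to part (b), the void-probability step in part (a) can be sketched numerically. The FF model and its parameters are the illustrative ones used earlier, and Poisson fragment emission within a jet is the simplifying assumption named above:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.stats import beta as beta_dist

Y_MIN = 0.35

def ff(y, y_max, n_ch_j=4.0, p=2.0, q=3.0):   # illustrative FF model (earlier sketch)
    u = (y - Y_MIN) / (y_max - Y_MIN)
    return np.where((u > 0) & (u < 1),
                    2.0*n_ch_j*beta_dist.pdf(u, p, q)/(y_max - Y_MIN), 0.0)

y    = np.linspace(0.4, 7.0, 500)
D    = ff(y, y_max=5.0)
cum  = cumulative_trapezoid(D, y, initial=0.0)
tail = cum[-1] - cum          # expected fragment number above y
G    = np.exp(-tail)          # Poisson void probability: no fragment above y
S_t  = D * G                  # schematic leading-fragment (trigger) density on y_trig
print(y[np.argmax(S_t)])      # mode of the trigger-fragment density
```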
(b) Isolation of hard component HH from correlation data requires a TA TCM based on the SP spectrum TCM in Sec. <ref> <cit.>. The TA TCM assumes that hadrons from soft and hard components of the SP spectrum are uncorrelated in pairs. The TA TCM is factorized in the form F_at(y_ta,y_tt) = T̂(y_tt) A(y_ta|y_tt), the product of a unit-integral trigger spectrum and a conditional associated-particle distribution. The derivation distinguishes soft events (no jet within acceptance) from hard events (at least one jet within acceptance) with probabilities P_s and P_h depending on event multiplicity based on measured dijet systematics <cit.>. Because dijets ∝ n_s^2, multiple dijets may appear within acceptance Δη for higher multiplicities. The trigger spectrum is then
T̂(y_tt;n_ch) = P_s(n_ch) T̂_s(y_tt) + P_h(n_ch) T̂_h(y_tt),
where each T̂_x is defined by a corresponding void probability based on the appropriate TCM spectrum model – with or without a spectrum hard component. The TCM for the full TA distribution is then
F_at(y_ta,y_tt;n_ch) = P_s T̂_s A_ss + P_h T̂_h (A_hs + A_hh)
that can be compared with a measured F_at TA distribution. Each column of A_xy(y_ta|y_tt) on y_ta is a SP-spectrum TCM soft (plus hard where applicable) component, set to zero above trigger rapidity y_tt (comprising the void) as the condition, and each is normalized to the associated-particle number n̂_ch - 1 for that event class. The object of analysis, for comparison with jet correlations, is the per-jet hard component of hard events A^*_hh. To obtain that quantity from TA data, the appropriate nonjet elements of the TA TCM are subtracted from the TA data. A = F_at / T̂ is first obtained from TA data using the TCM trigger spectrum. The associated hard component HH is then isolated by subtracting the SS and HS TCM elements
P_h R_h A_hh = A - P_s R_s A_ss - P_h R_h A_hs,
where R_x ≡T̂_x / T̂. Two more steps are required to obtain A^*_hh (correlations from trigger jet only) from A_hh (correlations from soft and hard triggers and from both trigger jet and partner jet if present) as described in Ref. <cit.>.
Figure <ref> (left) shows the per-jet hard component of TA correlations F_at in the form T̂_h y_ta A^*_hh for multiplicity class n = 5 from 200 GeV p-p collisions. That result can be compared directly with the prediction in Fig. <ref> (left) derived from eventwise-reconstructed jet data. The z-axis limits are the same for the two plots. There is quantitative correspondence except near the low-p_t boundary.
Figure <ref> (right) shows a projection of the left panel onto y_ta (solid points) that can be compared directly with projection D_a(y_assoc) (mean associated fragment distribution) from Fig. <ref> (right) (solid and dashed curves). Also plotted is the hard component from 200 GeV NSD p-p collisions (open points) in the per-jet form y_t,assoc H / f [also plotted in Fig. <ref> (right)], where f is the dijet η density per collision. For both SP spectra as in Fig. <ref> and the more-complex TA correlations as in Figs. <ref> and <ref> there is quantitative correspondence between eventwise-reconstructed jets and MB dijet manifestations in spectra and correlations. The correspondence extends down to at least p_t = 0.3 GeV/c, and most jet fragments (> 90%) appear below 2 GeV/c (y_t ≈ 3.3).
It is notable that trigger-associated cuts applied to data as in Fig. <ref> from Ref. <cit.> or Fig. <ref> from Ref. <cit.> correspond respectively to rectangles on (y_t,assoc,y_t,trig) in Fig. <ref> (left) bounded by ([3.3,y_t,trig],[4,4.5]) and ([1,y_t,trig],[4,4.5]). Such cuts include only a tiny fraction of correlated pairs and exclude the dominant TA peak near y_t = 2.7 (p_t ≈ 1 GeV/c). It has been argued that, since pQCD calculations are valid only for high-p_t fragments and energetic partons, high-p_t cuts are required for valid jet analysis. But the validity of pQCD calculations is not relevant to comparisons between MB jet phenomena and measured jet properties (FFs and jet spectra) as above.
§ JETS AND P_T FLUCTUATIONS
When the √(s_NN) = 17 GeV Pb-Pb program commenced at the CERN super proton synchrotron, it was expected that certain eventwise fluctuation measures might reflect thermodynamic properties of a quark-gluon plasma. In particular, eventwise mean-p_t fluctuations might serve as a proxy for temperature fluctuations within a locally-thermalized plasma <cit.>. Exceptional fluctuations (i.e. deviations from independent particle emission) might signal proximity to a QCD phase boundary and reveal its nature. That concept continues to provide the principal context for recent fluctuation studies <cit.> but does not include possible contributions to fluctuations from MB jets, which are considered below.
§.§ Fluctuation measures
Fluctuation measure definitions described below follow those presented in Ref. <cit.>. n̅_ch and p̅_t are event-ensemble means averaged over a detector acceptance. To reduce notation complexity n_ch↔ n where there is no ambiguity. Symbols Δσ^2_x denote per-particle variance differences that are negligible in case of central-limit conditions (CLT) wherein data are comprised of (a) independent samples from (b) a fixed parent distribution <cit.>. Deviations from the CLT may represent significant two-particle correlations. Extensive RVs n_ch and P_t represent eventwise integrals over some angular domain: a full detector acceptance or smaller bins within an acceptance as described in Ref. <cit.>. Overlines or bars represent ensemble means whereas angle brackets represent eventwise means. The charge-multiplicity variance is
σ^2_n = \overline{(n_ch - n̄_ch)^2} ≡ n̄_ch(1 + Δσ^2_n), where Δσ^2_n represents a non-Poisson contribution from correlations.
Most measurements of “mean-p_t fluctuations” are based on one of two statistical measures with different interpretations. If the emphasis is on eventwise mean p_t represented by the intensive ratio ⟨ p_t ⟩ = P_t / n_ch as a proxy for local temperature, the conventional fluctuation measure is
σ^2_⟨ p_t ⟩ = \overline{(⟨ p_t ⟩ - p̄_t)^2} ≈ (σ^2_p_t + Δσ^2_⟨ p_t ⟩) / n̄_ch. Alternatively, a system based on extensive quantities is the conditional measure σ^2_P_t|n = \overline{(P_t - n_ch p̄_t)^2} = n̄_ch (σ^2_p_t + Δσ^2_P_t|n), where in each case Δσ^2_x represents a non-CLT contribution from two-particle correlations and/or dynamical fluctuations that may shed light on collision mechanisms.
In Ref. <cit.> several statistical measures are compared in reference to fluctuation measurements reported in Ref. <cit.>. An extensive differential measure is denoted by B̄ = σ^2_P_t|n - n̄_ch σ^2_p_t with associated per-particle measure Δσ^2_P_t|n = B̄ / n̄_ch, both negligible for CLT conditions. In Ref. <cit.> the preferred intensive measure is C ≡ B̄ / \overline{n_ch(n_ch-1)} ≈ B̄ / n̄_ch^2. p_t fluctuations are then reported in terms of the r.m.s. quantity √(C) / p̄_t ≈ √(B̄ / P̄_t^2). One motivation for that construction may be that C ≈ σ^2_⟨ p_t ⟩ - σ^2_p_t / n̄_ch, so √(C) / p̄_t may be interpreted as a fractional r.m.s. ⟨ p_t ⟩ fluctuation measure. However, that choice is based on several questionable assumptions, in particular that a local temperature is a relevant concept and that non-thermodynamic sources (e.g. MB dijets) do not dominate collision dynamics. The per-pair measure C tends to decrease strongly with increasing system size (i.e. A-A collision centrality), other things being equal, which might suggest that thermalization increases with A-A centrality. In contrast, fluctuations of extensive measures n_ch and P_t can (and do) exhibit informative TCM trends (strong increases with A-A centrality) that are concealed by the intensive ratio ⟨ p_t ⟩ = P_t / n_ch.
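The extensive measures are easy to exercise on a toy ensemble. The sketch below constructs events satisfying CLT conditions (independent p_t samples from a fixed parent), for which B̄ and Δσ^2_P_t|n should vanish within statistics; MB dijets drive the measured values positive in real data:

```python
import numpy as np

rng  = np.random.default_rng(3)
n_ch = rng.poisson(20.0, size=50_000)                # eventwise multiplicities
pts  = [rng.gamma(2.0, 0.2, size=n) for n in n_ch]   # independent p_t samples (CLT case)
P_t  = np.array([p.sum() for p in pts])

allpt   = np.concatenate(pts)
pt_bar  = allpt.mean()                               # inclusive ensemble-mean p_t
sig2_pt = allpt.var()                                # single-particle variance
sig2_Pn = np.mean((P_t - n_ch*pt_bar)**2)            # conditional variance sigma^2_{Pt|n}
B       = sig2_Pn - n_ch.mean()*sig2_pt              # extensive difference measure B-bar
print(B / n_ch.mean())                               # Delta-sigma^2 ~ 0 under CLT conditions
```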
§.§ A-A fluctuation data at 200 GeV and 2.76 TeV
Figure <ref> (left) shows fluctuation data for 2.76 TeV Pb-Pb collisions from Ref. <cit.> in the form √(C) / p̄_t (which decreases monotonically with increasing centrality) transformed to (2/N_part) B̄ rather than B̄ / n̄_ch (i.e. per-particle in terms of initial-state participant nucleons rather than final-state charged hadrons), plotted vs mean participant pathlength ν. The trend for more-peripheral collisions is consistent with the TCM GLS trend expected for dijet production in transparent A-A collisions (dashed line). In more-central collisions the data significantly exceed the GLS trend, consistent with increased fragment yields from jet modification as noted for 200 GeV Au-Au collisions in Fig. <ref> (a) and (c).
Figure <ref> (right) shows comparable TCM results for 200 GeV Au-Au collisions reported in Ref. <cit.>. The general trend is similar but with reduced overall amplitude, as expected given the log(s/s_0) collision-energy dependence of dijet production <cit.>. The dash-dotted curves are GLS expectations extrapolated from isolated N-N collisions. The dashed lines relate to production from “wounded” projectile nucleons after a first collision.
§.§ Inversion of fluctuation scaling at 200 GeV
Fluctuation data for a given collision system depend strongly (and possibly nonmonotonically) on the detector acceptance or bin size of a particular analysis as demonstrated in Ref. <cit.>. Results from different experiments are therefore not simply comparable. However, measurements of the scale (bin size) dependence of fluctuations are “portable” and may be inverted to infer underlying angular correlations as first demonstrated in Ref. <cit.>.
Figure <ref> (a) and (b) show angular correlations from peripheral and central 200 GeV Au-Au collisions inferred by inversion of fluctuation scale dependence <cit.>. Model elements representing an AS 1D peak and the nonjet quadrupole have been subtracted to isolate the SS 2D peak structure. The structure is equivalent to that in Figs. <ref> and <ref>, and those results appear to confirm that p_t fluctuations are dominated by MB dijets. However, the complex inversion procedure could be questioned.
Figure <ref> (c) and (d) show angular correlations from the same collision systems determined directly by pair counting, not by inversion of fluctuation scaling. The results are in quantitative agreement with (a) and (b), confirming that MB dijets are the dominant source of p_t fluctuations in high-energy nuclear collisions. There are significant differences due to the reduced angular resolution of the inversion process, but the equivalence is clearly apparent. p_t fluctuation data compel the conclusion that a local temperature is not a relevant concept for fluctuation measurements and their interpretation in high-energy nuclear collisions, and direct correlation measurements by pair counting supply equivalent information more accurately.
§ DISCUSSION
As noted in the introduction, interpretation of high-energy collision data from more-central A-A collisions near midrapidity tends to follow one of two themes: (a) a flowing bulk medium identified as a QGP with unique properties or (b) a combination of two or three hadron production mechanisms including dijet production, albeit with jet modification increasing with centrality. Theme (a) assumes the existence of flows and local thermalization a priori and prefers methods and data interpretations that favor a flow narrative. Theme (b) assumes dijet production as the principal manifestation of QCD in high-energy nuclear collisions and uses measurements of eventwise-reconstructed (isolated) jets in elementary collisions to form expectations for other jet manifestations in p-p and A-A collisions. It is not unreasonable to pursue both, but theme (b) should be engaged at least as thoroughly as (a) to ensure a balanced scientific outcome.
To pursue theme (b) properly requires direct and quantitative comparisons of all available information on isolated jets in elementary collisions with all available information on MB dijets in nuclear collisions over the largest possible range of collision systems. Comparisons should be based on accurate isolation of jet-related hard components from other contributions to yields, spectra, correlations and fluctuations combined with extensive measures that preserve TCM trends, as demonstrated by various examples and cited references in the present study.
§.§ Universality of the TCM
The TCM represents a conceptually simple idea – two complementary mechanisms dominate hadron production near midrapidity, consisting of projectile-nucleon dissociation (soft) and MB dijet production (hard). The TCM concept was first related to RHIC data fifteen years ago <cit.>. Elaboration of the TCM has proceeded in a number of subsequent publications (e.g. Refs. <cit.>). The TCM hard component in hadron yields, spectra, fluctuations and correlations from p-p, p-A and A-A collisions corresponds quantitatively to eventwise-reconstructed jet properties over a large range of charge multiplicity, A-A centrality, collision energies and hadron momenta. The general TCM pattern for extensive quantities X (e.g. n_ch or P_t) is
(1/ρ̅_s) X = X_s + α ρ̅_s X_h   (p-p)
or
(2/N_part) X = X_s + ν X_h   (A-A),
where ρ̅_s is by hypothesis a proxy for the number of participant low-x partons (gluons) in p-p collisions and N_part is the number of participant nucleons in A-A collisions according to the Glauber model of that collision system. When left-hand-side quantities are plotted vs ρ̅_s (p-p) or ν (A-A) the TCM signature is a constant (soft component) plus an approximately linear increase (jet-related hard component). Those trends are illustrated for X = integrated charge in Fig. <ref> (left) (p-p) or Fig. <ref> (A-A), for integrated P_t in Fig. <ref> (right) (p-p), for P_t variance difference B̅ in Fig. <ref> (A-A) and for number of correlated hadron pairs Δρ in Fig. <ref> (a) and (c).
Intensive ratios tend to conceal jet-related TCM trends by partial cancellations.
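The partial cancellation can be illustrated with a short numerical sketch; the soft- and hard-component amplitudes below are toy values chosen only to show the qualitative behavior, not fitted TCM parameters.

```python
import numpy as np

# Toy TCM amplitudes (illustrative only, not fitted values).
n_s, n_h = 2.5, 0.4            # soft/hard components of (2/N_part) n_ch
p_s, p_h = 1.0, 0.3            # soft/hard components of (2/N_part) P_t

nu = np.linspace(1.0, 6.0, 11)  # mean participant pathlength
nch_per2 = n_s + nu * n_h       # extensive TCM trend: soft + linear hard rise
Pt_per2 = p_s + nu * p_h
mean_pt = Pt_per2 / nch_per2    # intensive ratio <p_t>

print(nch_per2[-1] / nch_per2[0])  # ~1.7: strong jet-related increase
print(mean_pt[-1] / mean_pt[0])    # ~1.3: much of the trend cancels in the ratio
```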
§.§ Phenomenology of MB dijets
For MB analysis as described in this study all collision events and all hadrons within each collision are accepted for analysis. No conditions are imposed (except those that define a detector acceptance). All hadron fragments from dijets that enter the detector acceptance are therefore retained within the primary particle data. If properties of the MB dijet population for a given collision system are known (i.e. measured) the detected fragment system should correspond quantitatively to predictions derived from jet data as a critical test for any jet interpretations of data. An example is provided in Sec. <ref>.
Section <ref> summarizes a comprehensive description of properties of isolated jets in terms of FFs and jet (scattered-parton) energy spectra represented by simple and accurate parametrizations for an array of collision systems <cit.>. Those parametrizations permit quantitative comparisons with MB fragment data from nuclear collisions. For some analysis methods (e.g. those associated with the TCM) the comparisons demonstrate accurate correspondence (e.g. Refs. <cit.>). For other methods (some of those associated with the flow narrative) there is no clear pattern (e.g. Refs. <cit.>).
Descriptions of dijet contributions in terms of pQCD theory cannot represent MB dijets. Due to its inevitable limitations pQCD can represent less than 10% of jet fragments <cit.>. Next-to-leading-order (NLO) predictions of spectrum structure are usually constrained to p_t > 2 GeV/c with large systematic uncertainties near the lower bound (see the next subsection), whereas MB fragment distributions peak well below that point (Figs. <ref> and <ref>).
Biased representations of fragment data distributions that rely on model-dependent background subtraction, ad hoc cuts or intensive (spectrum or statistics) ratios represent only a distorted fraction of the MB fragment population. Comparisons of such biased data to eventwise-reconstructed jet properties are likely misleading and cannot test jet-related interpretations of data components.
§.§ Comparisons with NLO p_t spectrum predictions
A substantial effort has been expended to compare NLO pQCD theory predictions of inclusive hadron production with measured hadron spectra <cit.>. The form of such predictions is represented schematically by
d^2σ/dp_t dη = ∫dz/z∬ dx_1 dx_2 f_p_1^h_1(x_1) f_p_2^h_2(x_2)
× d^2σ̂(x_1,x_2,p̂_t)/dp̂_t dη D_p_3^h_3(z),
a convolution of projectile-hadron parton distribution functions (PDFs) f_p^h(x), QCD parton scattering cross section d^2σ̂(x_1,x_2,p̂_t)/dp̂_t dη and scattered-parton FF D_p^h(z). It is argued that comparisons of NLO predictions to inclusive spectra may test the role of jets in nuclear collisions, but there are problems with that argument.
Such NLO predictions (e.g. Ref. <cit.>) assume that almost all hadrons from p-p collisions are the result of binary parton-parton collisions. But that assumption conflicts with the TCM inferred from 200 GeV p-p collisions (Sec. <ref>) wherein most hadrons emerge from a (soft) process that does not scale with a number of binary collisions and is interpreted to represent dissociation of single projectile nucleons as the result of a soft interaction. Instead, Eq. (<ref>) corresponds to Eq. (<ref>) describing (within a constant factor) the p-p spectrum hard component.
[Note that p̂_t is here the scattered-parton (jet) momentum and p_t is a fragment momentum, with z = p_t / p̂_t, dz/z = dln(z) = −dy_max for given p_t, y_max ≡ ln(2p̂_t / m_π) and ln(1/z) = y_max − y = ξ (conventional FF parameter). The assumption that FFs D(z) depend only on z or y_max − y is contradicted by FF data as in Fig. <ref>, and FFs have a universal form on u ≈ y/y_max <cit.>.]
Comparisons between NLO predictions and isolated spectrum hard components might provide a useful test of MB dijet production, but there are further issues.
Because of limitations on the applicability of perturbative methods NLO predictions are typically restricted to hadron p_t > 2 GeV/c (approximating p_t ≫ 1 GeV/c). But examples above and cited references demonstrate that most jet fragments (> 90%) appear at p_t < 2 GeV/c, and that fraction plays a dominant role in yields, spectra, fluctuations and correlations from p-p and A-A collisions.
An alternate (perhaps the main) purpose for comparison of NLO predictions to hadron spectra is tests of QCD factorization and FF universality, and optimization of FF parametrizations via global fits to data from multiple collision systems. A recent review stipulates that NLO predictions for jet spectra combining PDFs with pQCD parton-parton cross sections describe the measured jet spectra well <cit.>. The major remaining uncertainty lies with current FFs, especially gluon-to-unidentified-hadron FFs, whereby hadron spectra are typically overpredicted by a factor of 2. It is proposed to reduce such errors by refinement of FFs through improved global fits to data.
However, one can question such a strategy, especially the assumption of FF universality as it is conventionally invoked. Section <ref> illustrates the large discrepancies between light-quark FFs derived from e-e collisions and those inferred from p-p̄ collisions. The differences lie well outside systematic uncertainties. Light-quark FFs from e-e collisions used to describe the p-p spectrum hard component already produce a large overprediction <cit.>. But the low-p_t hadrons that dominate the spectrum hard component near midrapidity arise mainly from low-x gluons. If gluon FFs are applied instead the overprediction is worse due to the increased (and softer) hadron yield from gluon jets. The approximate agreement between jet systematics and spectrum hard components as in Fig. <ref> is achieved only by using p-p̄ light-quark FFs <cit.>. The result suggests that there are substantial differences in FFs depending on the jet environment in different collision systems. Strong jet modification in A-A collisions is a notable result of the RHIC experimental program confirmed at the LHC. But jets appear to be modified significantly already in p-p̄ collisions relative to e-e collisions. As a result, attempts to optimize “universal” FFs via global fits to multiple systems may produce a strongly-biased result.
Another issue for NLO comparisons is the structure of spectrum hard components below 5 GeV/c (e.g. Fig. <ref>). The maximum near 1 GeV/c for pions and protons <cit.> (and therefore possibly kaons and other hadrons) is probably outside the scope of pQCD. The evolution of jet-related structure with A-A centrality as in Sec. <ref> is certainly so, consistent with the conclusion that “...only the region above p_T ≈ 10 GeV/c of these charged-hadron data, with theoretical scale uncertainties below ±20%, should be included in forthcoming global fits of parton-to-hadron fragmentation functions” <cit.>. However, the effectiveness of such global fits may still be questioned as noted above.
Three areas may then be distinguished: (a) comparison of measured isolated-jet characteristics with various MB jet manifestations in high-energy nuclear-collision data as described in the present study, (b) comparisons of PDF and pQCD parton-spectrum combinations with measured jet spectra as one test of QCD factorization and (c) improved measurement of scattered-parton FFs in elementary collisions with the understanding that FFs may depend on collision context – e.g. e-e vs p-p̄ – as an extension of jet modification in A-A collisions.
§.§ Methods that misidentify or distort MB dijets
Section <ref> provides an example of analysis methods that tend to minimize and/or distort MB dijet manifestations in nuclear-collision data. Other methods tend to reveal contributions from MB dijets consistent with isolated-jet measurements as described in this study. A principal difference is the amount of information carried by primary particle data that is retained by a method and may be utilized for direct comparison with isolated jets.
Examples are provided below for integrated charge n_ch or P_t vs n_ch (p-p) or centrality (A-A), SP p_t spectra vs n_ch or centrality, y_t × y_t correlations, 2D angular correlations, p_t fluctuations and choice of plot formats.
Integrated charge from A-A collisions may be plotted as (2/N_part) n_ch vs ν over the complete A-A centrality range to reveal a TCM trend with strong MB dijet contribution as in Fig. <ref>, varying from a GLS trend extrapolated from p-p collisions for more-peripheral A-A collisions to a trend reflecting strong jet modification for more-central collisions. Alternatively, the same per-participant quantity may be plotted vs N_part over a limited centrality interval (e.g. top 40% as in Ref. <cit.>), and the GLS extrapolation is then effectively concealed. Alternative nonjet hypotheses may seem to describe the more-central data but could be falsified by more-peripheral data (e.g. the CGC dashed trend in Fig. <ref>, left).
A full analysis of p_t spectra vs n_ch or A-A centrality over a large acceptance may reveal all available information in spectrum data through inductive study as in Refs. <cit.> leading to or consistent with the TCM. Alternatively, spectra for single collision systems fitted over limited p_t intervals with a priori model functions may appear to support nonjet interpretations (e.g. radial flow as in Ref. <cit.>). Studies that seem to emphasize jets by applying “high-p_t” cuts may actually discard almost all evidence of MB jets (more than 90% of MB jet fragments).
Possible two-particle correlation studies include y_t × y_t correlations as in Fig. <ref> (left) for various collision systems and several combinations of pair angular acceptance, and 2D angular correlations for various conditions including full pair acceptance <cit.>. The result in Fig. <ref> (left) provides compelling evidence for the TCM, is directly comparable to the SP spectrum TCM, and the hard-component peak centered near (2.7,2.7) corresponds directly to jet-related angular correlations as in Fig. <ref> (right). It is notable that y_t × y_t correlations remain largely unexplored despite the essential information they convey.
2D angular correlations for a number of collision systems have been studied and characterized by a universal 2D fit model <cit.>. The results are quantitatively compatible with TCM results from other analysis including MB jet spectrum manifestations <cit.>. An important feature of 2D angular correlations is resolution of the jet-related SS 2D peak from other structure that may or may not be jet related. In contrast, projection of 2D angular correlations onto 1D azimuth discards information relating to η dependence and reduces the ability to distinguish jet-related structures from others.
In particular, the NJ quadrupole cannot be uniquely distinguished from the quadrupole component of the jet-related SS 2D peak. Some fraction of the latter may then be attributed to “elliptic flow” v_2, leading to distortion of nominal jet structure following ZYAM background subtraction <cit.> or attribution of all MB jet structure to flows including “triangular” <cit.> and “higher harmonic” <cit.> flows <cit.>.
[While some data features attributed to flows may represent MB dijet manifestations the NJ quadrupole is most likely a distinct nonjet phenomenon with a unique physical mechanism <cit.>.]
While extensive quantities reveal clear TCM trends (as in this study), intensive ratios of such quantities discard essential information in primary data by canceling data trends required to interpret collision data, especially MB jet manifestations. Examples include the spectrum ratio R_AA that obscures contributions from MB jets below p_t ≈ 3 GeV/c including more than 99% of MB jet fragments, the eventwise mean ⟨ p_t ⟩ = P_t / n_ch and corresponding ensemble mean p̅_t that partially obscure TCM trends in the extensive numerators and denominators, and the ratio v_2(p_t) including the NJ quadrupole spectrum (numerator) and SP hadron spectrum (denominator) that may represent different hadron populations confused by the v_2 ratio <cit.>. Extensive fluctuation measures (e.g. variance differences such as Δσ^2_P_t|n) convey almost all information on underlying correlations including MB jet manifestations, whereas intensive ratios of statistical quantities (or ratios of ratios such as √(C) / p̅_t) tend to suppress jet manifestations and may favor interpretations within a thermodynamic context (see Sec. <ref> for notation).
The choice of plot format, including independent-variable choice, may further suppress essential information. The choice of participant number N_part as A-A centrality measure over mean participant pathlength ν visually suppresses the more-peripheral part of the centrality dependence, especially GLS trends extrapolated from p-p collisions that provide a valuable reference for A-A collisions. The choice of linear p_t over transverse rapidity y_t [or at least log(p_t)] visually suppresses the low-p_t region where most MB jet fragments appear. Semilog plots showing model curves passing through spectrum data points are often misleading; highly significant deviations may appear much smaller than the plotted points (as are the statistical uncertainties). Plots of spectrum data/model ratios are similarly misleading because deviations at lower p_t may be strongly suppressed. Fit quality is tested only by comparing [data − model] differences in ratio to statistical uncertainties, as demonstrated in Fig. <ref> (left).
§.§ Consequences for physical interpretation
Section <ref> notes that a specific analysis may rely on sequential selection from several alternative methods. A given combination of methods may then contribute to support of a preferred narrative whereas a different combination might falsify that narrative. A possible criterion for optimum selection is maximized use of the information carried by primary particle data. Another is insistence on consistent application and interpretation across a range of collision systems and measured quantities.
The present study demonstrates that one can assemble a combination of methods providing clear evidence for manifestations of MB dijets in all collision systems. Measure choices are guided by the principle that almost all available information from nuclear collision data is compared with almost all available information from isolated jets. No information is intentionally discarded. Measured MB jet manifestations then correspond quantitatively and consistently to properties of isolated jets established by separate experiments. Recognition of MB jet manifestations as such then precludes much of the evidence advanced to support flow interpretations.
In contrast, alternative combinations of methods can be assembled that minimize MB jet manifestations and appear to support a flow narrative to describe high-energy nuclear collisions (possibly even in small systems). However, as argued and demonstrated with examples in the present study, such preferred methods tend to discard major fractions of the information in primary particle data – in the form of projections to lower dimensions, cuts based on a preferred narrative, intensive ratios and nonoptimal plotting formats – and discard information in isolated jet data by invoking pQCD as an imposed intermediary context (a sort of filter). Established jet physics is thereby largely excluded from descriptions of high-energy nuclear collisions in favor of a flow narrative.
§ SUMMARY
Primary particle data from high-energy nuclear collisions may be processed with alternative classes of analysis methods that seem to support one of two narratives: (a) collisions are dominated by flows carried by a dense medium (quark-gluon plasma or QGP) or (b) collisions are dominated near midrapidity by two hadron production mechanisms consisting of projectile-nucleon dissociation and minimum-bias (MB) dijet production.
Because a data analysis system leading to physical interpretations typically relies on multiple selections from among several possible methods, there is no unique method combination. Results of the present study suggest that flow-QGP interpretations tend to arise from a certain class A of analysis methods imposed a priori within a narrative context and discarding substantial information. Interpretations based on MB dijets are favored by an alternative class B inferred inductively from primary particle data that retain most of the information.
This article presents a detailed study of measured manifestations of MB dijets in high-energy nuclear collisions and quantitative comparisons of those manifestations with measured properties of eventwise-reconstructed (isolated) jets. The study considers jet manifestations in yields, spectra, correlations and fluctuations from a range of collision systems. The MB dijet system is then used to evaluate properties of alternative analysis methods and the quality of support for the two competing narratives.
The two-component (soft + hard) model (TCM) of hadron production near midrapidity provides a context for MB dijets and narrative (b). The TCM as applied in this work was derived inductively from the n_ch dependence of p-p p_t spectra. The soft and hard components of yields, spectra, correlations and fluctuations are observed to evolve independently with charge multiplicity n_ch or A-A centrality. For each form of primary data the TCM hard component is quantitatively related to measured isolated-jet properties. For instance, evolution of the p-p spectrum hard component with collision energy tracks quantitatively with measured isolated-jet spectra. Evolution of hadron yields with A-A centrality and collision energy follows simple TCM trends on centrality and a QCD trend log(s / s_0) on collision energy with √(s_0) ≈ 10 GeV. The observed TCM simplicity applies to methods based on extensive variables such as n_ch and P_t integrated over some acceptance and supports narrative (b).
Alternative methods that appear to support narrative (a) rely on intensive ratios (or ratios of ratios), a priori imposed fit models, projections from higher- to lower-dimensional spaces, imposed cuts and subtraction of ad hoc backgrounds that tend to discard essential information carried by primary particle data and may then lead to biased and distorted results. Those conclusions are based on manifestations of MB dijets using such methods in quantitative comparison to the measured properties of isolated jets. A number of examples are provided in this study. The overall results suggest that at least some data features interpreted to arise from flows actually represent MB dijet manifestations.
I conclude that the TCM for high-energy nuclear collisions emerges naturally from inductive analysis of yield, spectrum and correlation data and is not imposed a priori. The TCM hard component agrees quantitatively with the measured properties of isolated jets. Statistical measures based on extensive variables retain substantially more of the information carried by primary particle data. When various data features are reexamined in the context of isolated-jet measurements little substantial evidence remains to support a flow narrative. And any description of high-energy nuclear collisions that omits a clear, quantitative description of MB dijets consistent across all measures and collision systems may be questioned.
keystone D. Teaney, J. Lauret and E. V. Shuryak,
Phys. Rev. Lett. 86, 4783 (2001).
multipoles T. A. Trainor,
J. Phys. G 40, 055104 (2013).
hydro P. Huovinen and P. V. Ruuskanen,
Ann. Rev. Nucl. Part. Sci. 56, 163 (2006).
perfect M. Gyulassy and L. McLerran,
Nucl. Phys. A 750, 30 (2005).
qgp1 T. Hirano and M. Gyulassy,
Nucl. Phys. A 769, 71 (2006).
qgp2 L. P. Csernai, J. I. Kapusta and L. D. McLerran,
Phys. Rev. Lett. 97, 152303 (2006).
nonflow A. Bilandzic, N. van der Kolk, J. Y. Ollitrault and R. Snellings,
Phys. Rev. C 83, 014909 (2011).
freezeout F. Becattini, M. Bleicher, T. Kollegger, M. Mitrovski, T. Schuster and R. Stock,
Phys. Rev. C 85, 044921 (2012).
starpidspec B. I. Abelev et al. (STAR Collaboration),
Phys. Rev. C 79, 034909 (2009).
starprl J. Adams et al. (STAR Collaboration),
Phys. Rev. Lett. 95, 152301 (2005)
recombo1 R. C. Hwa et al.,
Phys. Rev. C 70, 024905 (2004).
recombo2 R. J. Fries et al.,
Phys. Rev. C 68, 044902 (2003).
luzum M. Luzum,
Phys. Lett. B 696, 499-504 (2011).
alicempt B. B. Abelev et al. (ALICE Collaboration),
Phys. Lett. B 727, 371 (2013).
aliceptfluct B. B. Abelev et al. (ALICE Collaboration),
Eur. Phys. J. C 74, 3077 (2014).
raav21 J. Adams et al. (STAR Collaboration),
Phys. Rev. Lett. 91, 072304 (2003).
eeprd T. A. Trainor and D. T. Kettler,
Phys. Rev. D 74, 034012 (2006).
tasso W. Braunschweig et al. (TASSO Collaboration),
Z. Phys. C 47, 187 (1990).
opal M. Z. Akrawy et al. (OPAL Collaboration)
Phys. Lett. B, 247, 617 (1990).
ua1jets C. Albajar et al. (UA1 Collaboration),
Nucl. Phys. B 309, 405 (1988).
ua2jets J. Alitti et al. (UA2 Collaboration),
Phys. Lett. B 257, 232 (1991).
isrfirstjets T. Akesson et al. (Axial Field Spectrometer Collaboration),
Phys. Lett. B 123, 133 (1983).
fragevo T. A. Trainor,
Phys. Rev. C 80, 044901 (2009).
inverse T. A. Trainor, R. J. Porter and D. J. Prindle,
J. Phys. G 31, 809 (2005).
ptscale J. Adams et al. (STAR Collaboration),
J. Phys. G 32, L37 (2006).
starraa J. Adams et al. (STAR Collaboration),
Phys. Rev. Lett. 91, 172302 (2003).
quadspec2 T. A. Trainor,
arXiv:1610.06256.
ollitrault J. Y. Ollitrault,
Phys. Rev. D 46, 229 (1992).
snellings R. Snellings,
New J. Phys. 13, 055008 (2011).
ppprd J. Adams et al. (STAR Collaboration),
Phys. Rev. D 74, 032006 (2006).
wilk G. Wilk and Z. Wlodarczyk,
Phys. Rev. Lett. 84, 2770 (2000).
alicespec T. A. Trainor,
arXiv:1603.01337.
ppquad T. A. Trainor and D. J. Prindle,
Phys. Rev. D 93, 014031 (2016).
cywong C. Y. Wong and G. Wilk,
Phys. Rev. D 87, no. 11, 114007 (2013).
ua1spec C. Albajar et al. (UA1 Collaboration),
Nucl. Phys. B 335, 261 (1990).
tsallisblast Z. Tang, Y. Xu, L. Ruan, G. van Buren, F. Wang and Z. Xu,
Phys. Rev. C 79, 051901 (2009).
jetspec2 T. A. Trainor,
Phys. Rev. D 89, 094011 (2014).
cdfff D. Acosta et al. (CDF Collaboration),
Phys. Rev. D 68, 012003 (2003).
ua5 G. J. Alner et al. (UA5 Collaboration),
Z. Phys. C 32, 153 (1986).
cdfjets D. Acosta et al. (CDF Collaboration), Phys. Rev. D 68,
012003 (2003).
hardspec T. A. Trainor,
Int. J. Mod. Phys. E 17, 1499 (2008).
alicetommpt T. A. Trainor,
Phys. Rev. C 90, no. 2, 024909 (2014).
phenix T. A. Trainor,
Phys. Rev. C 91, 044905 (2015).
anomalous G. Agakishiev, et al. (STAR Collaboration),
Phys. Rev. C 86, 064902 (2012).
barmeson B. I. Abelev et al. (STAR Collaboration),
Phys. Rev. Lett. 97, 152301 (2006).
jetspec T. A. Trainor and D. T. Kettler,
Phys. Rev. C 83, 034903 (2011).
kn D. Kharzeev and M. Nardi,
Phys. Lett. B 507, 121 (2001).
glasmatom T. A. Trainor,
J. Phys. G 39, 095102 (2012).
alicemult K. Aamodt et al. (ALICE Collaboration),
Phys. Rev. Lett. 106, 032301 (2011).
uliss E. Schnedermann, J. Sollfrank and U. W. Heinz,
Phys. Rev. C 48, 2462 (1993).
starblast B. I. Abelev et al. (STAR Collaboration),
Phys. Rev. C 79, 034909 (2009).
nohydro T. A. Trainor,
J. Phys. G 37, 085004 (2010).
davidhq D. T. Kettler (STAR collaboration),
Eur. Phys. J. C 62, 175 (2009).
davidhq2 D. Kettler (STAR Collaboration),
J. Phys. Conf. Ser. 270, 012058 (2011).
porter3 R. J. Porter and T. A. Trainor (STAR Collaboration),
PoS C FRNC2006, 004 (2006).
porter2 R. J. Porter and T. A. Trainor (STAR Collaboration),
J. Phys. Conf. Ser. 27, 98 (2005).
axialci J. Adams et al. (STAR Collaboration),
Phys. Rev. C 73, 064907 (2006).
v2ptb D. T. Kettler, D. J. Prindle and T. A. Trainor,
Phys. Rev. C 91, 064910 (2015).
tzyam T. A. Trainor,
Phys. Rev. C 81, 014905 (2010).
bayes M. B. De Kock, H. C. Eggers and T. A. Trainor,
Phys. Rev. C 92, no. 3, 034908 (2015).
jetcorr T. A. Trainor,
J. Phys. G 42, no. 8, 085105 (2015).
tacorrexp T. A. Trainor and D. J. Prindle,
Phys. Rev. D 88, no. 9, 094018 (2013).
na49ptfluct H. Appelshauser et al. (NA49 Collaboration),
Phys. Lett. B 459, 679 (1999).
aliceptflucttom T. A. Trainor,
Phys. Rev. C 92, 024915 (2015).
clt T. A. Trainor,
hep-ph/0001148.
alicespec2 B. B. Abelev et al. (ALICE Collaboration),
Phys. Lett. B 736, 196 (2014).
noelliptic T. A. Trainor, D. T. Kettler, D. J. Prindle and R. L. Ray,
J. Phys. G 42, 025102 (2015).
nlorefs R. Sassot, P. Zurita and M. Stratmann,
Phys. Rev. D 82, 074011 (2010).
nlospectra D. d'Enterria, K. J. Eskola, I. Helenius and H. Paukkunen,
Nucl. Phys. B 883, 615 (2014).
global3 B. B. Back et al. (PHOBOS Collaboration),
Phys. Rev. C 65, 061901 (2002).
triangular B. Alver and G. Roland,
Phys. Rev. C 81, 054905 (2010)
Erratum: [Phys. Rev. C 82, 039903 (2010)].
higherharm T. A. Trainor, D. J. Prindle and R. L. Ray,
Phys. Rev. C 86, 064905 (2012).
quadspec T. A. Trainor,
Phys. Rev. C 78, 064908 (2008).
gluequad T. A. Trainor,
Mod. Phys. Lett. A 23, 569 (2008).
Deconvolution of the energy loss function of the KATRIN experiment

Volker Hannen, Irina Heese, Anna Sejersen Riis, Kathrin Valerius and Christian Weinheimer

Corresponding author: hannen@uni-muenster.de
Institut für Kernphysik, Westfälische Wilhelms-Universität Münster, Wilhelm-Klemm-Str. 9, 48149 Münster, Germany
Department of Physics and Astronomy, University of Aarhus, Ny Munkegade, DK-8000 Aarhus C, Denmark
Institut für Kernphysik, Karlsruher Institut für Technologie, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen, Germany
The KATRIN experiment aims at a direct and model independent determination of the neutrino mass with 0.2 eV/c^2 sensitivity (at 90% C.L.) via a measurement of the endpoint region of the tritium beta-decay spectrum. The main components of the experiment are a windowless gaseous tritium source (WGTS), differential and cryogenic pumping sections and a tandem of a pre- and a main-spectrometer, applying the concept of magnetic adiabatic collimation with an electrostatic retardation potential to analyze the energy of beta-decay electrons and to guide electrons passing the filter onto a segmented silicon PIN detector.
One of the important systematic uncertainties of such an experiment is due to energy losses of β-decay electrons by elastic and inelastic scattering off tritium molecules within the source volume, which alter the shape of the measured spectrum. To correct for these effects an independent measurement of the corresponding energy loss function is required. In this work we describe a deconvolution method to extract the energy loss function from measurements of the response function of the experiment at different column densities of the WGTS using a monoenergetic electron source.
Keywords: Neutrino mass, electron scattering, deconvolution
§ INTRODUCTION
The KArlsruhe TRItium Neutrino (KATRIN) experiment aims at determining the neutrino mass in a model independent way from the kinematics of tritium β-decay. The observable in this case is an ”average electron anti-neutrino mass” given by the incoherent sum of neutrino mass eigenstates weighted by the squared elements of the mixing matrix.
The experiment combines a Windowless Gaseous Tritium Source (WGTS) and a high resolution electrostatic retarding spectrometer (MAC-E filter) to measure the spectral shape of β-decay electrons close to the endpoint energy at 18.6 keV with an unprecedented precision. KATRIN's sensitivity to the neutrino mass will be 0.2 eV/c^2 (at 90% C.L.) after three years' worth of data taking <cit.>.
An observed mass signal of 0.35 eV/c^2 will have a 5σ significance at the expected level of statistic and systematic uncertainties.
In order to reach the desired sensitivity, all systematic effects of the measurement must be well under control with the major systematic uncertainties being allowed to contribute no more than Δ m^2 = 0.0075 eV^2/ c^4 to the systematic error budget.
An overview of the KATRIN experiment is shown in figure <ref>. The experiment starts with the WGTS where
molecular tritium gas is injected at the center of the source and removed at both ends by turbo-molecular pumps. The gas is kept at a constant temperature of 30 K within the source that is operated at a column density of 5· 10^17 cm^-2. The operational parameters of the source cryostat are monitored by a complex sensor network and a dedicated calibration and monitoring section at the rear of the source system <cit.>.
About 10^10 β-decay electrons are emitted per second into the accepted forward solid angle with pitch angles less than θ_max = 51^∘ and are guided magnetically through the transport section to the spectrometer tandem consisting of pre- and main-spectrometers. The task of the transport section, made up of a differential pumping section and a cryo-pumping section, is to suppress the flow of tritium molecules towards the spectrometers by at least a factor of 10^14 in order to reduce experimental background from tritium decays within the spectrometers. A first energy discrimination is performed by the pre-spectrometer which rejects the low-energy part of the spectrum (up to 300 eV below the endpoint) and thereby reduces the rate of electrons going into the main spectrometer to approximately 10^3 s^-1. Like the pre-spectrometer the main spectrometer operates as a so-called MAC-E filter <cit.> and has the task to perform a precise energy analysis of the decay electrons.
In a MAC-E filter electrons are guided magnetically against an electrostatic retardation potential that can only be surpassed by electrons with sufficiently high longitudinal energy with respect to the electric field. Here the longitudinal energy is given by E_∥ = E_ kin·cos^2θ with E_ kin being the kinetic energy of the electron and θ being the angle between electron momentum and magnetic field direction. The transverse energy is accordingly given by E_⊥ = E_ kin·sin^2θ.
The spectrometer acts as a high pass filter with a transmission function describing the observed electron rate as a function of the electron surplus energy (see section <ref>). To reduce the amount of transversal energy of the electrons that is not analyzed by the spectrometer, the technique of magnetic adiabatic collimation is used. The idea is that the magnetic guiding field drops by several orders of magnitude from the entrance of the spectrometer to the analyzing plane, where the electric potential reaches its maximum. If the gradient of the magnetic field is small enough, such that the field is approximately constant along one cyclotron loop of the electron movement, the magnetic moment of the cyclotron motion μ = E_⊥ / B (non-relativistic) is constant, and as B drops the transversal energy of the electrons is converted into longitudinal energy E_∥ that can be analyzed by the spectrometer.
By varying the electric potential of the spectrometer it is then possible to scan the relevant region around the endpoint energy of tritium β-decay and accumulate a spectrum.
Electrons with sufficient energy to pass the spectrometer are finally detected by a 148 pixel silicon PIN detector <cit.> at the end of the setup.
Among the main systematical uncertainties of the experiment are energy losses from inelastic scattering of electrons in the source, fluctuations of the source density, fluctuations of the spectrometer analyzing potential, uncertainties in the transmission function and uncertainties in the final state distribution of the daughter molecules left after the decay reaction. A sophisticated calibration and monitoring system is being set up to keep the aforementioned systematic effects under control.
While there is some information on the energy loss of 18.6 keV electrons in gaseous tritium or quench condensed deuterium from the former neutrino mass experiments in Troitsk and Mainz <cit.>, precise experimental information on energy losses of electrons with energies near the endpoint of the tritium spectrum is only available for molecular hydrogen as target gas <cit.>.
A measurement of the energy differential scattering cross section of 18.6 keV electrons off molecular tritium is therefore highly desirable. Such a measurement can be performed using a monoenergetic source of electrons mounted upstream of the WGTS to determine the response function of the overall experiment at different column densities of the source. A deconvolution method suitable to extract the energy loss function from the data will be presented in the following sections.
Once the energy loss function for tritium is known with sufficient accuracy, the same measurement setup can be used for an independent check of the column density of the WGTS during intervals between the regular measurement cycles of the KATRIN experiment <cit.>.
§ ENERGY LOSS FUNCTION
The processes contributing to the energy loss of electrons traversing the molecular tritium gas within the WGTS are excitation of rotational and vibrational states of the T_2 molecules, excitation of electronic molecular states, dissociation and ionization of the molecules.
Aseev et al. <cit.> report on measurements of energy losses of electrons in gaseous tritium and in quench condensed deuterium films. Because of the limited energy resolution of a few eV the shape of the energy loss spectrum was not directly extracted from the data in their analysis, but approximated by a Gaussian representing electronic excitations and dissociation and a one-sided Lorentzian curve representing the continuum caused
by ionization of the molecules. The parameters of the two functions were then adapted to fit the observed integral energy spectra obtained with an 18.6 keV mono-energetic electron source for gaseous tritium or from 17.8 keV mono-energetic conversion electrons from a ^83 mKr film covered by various thicknesses of D_2 absorbers. In both cases, energy losses caused by rotational and vibrational excitations of the molecules without electronic excitation could not be resolved and were neglected.
More detailed information is available for the scattering of 25 keV electrons from molecular hydrogen gas <cit.> where direct measurements of the energy loss function with resolutions down to 40 meV have been performed.
This information about the scattering of electrons from molecular hydrogen has been implemented into a computer code by F. Glück <cit.> that can be used in simulations to generate energy losses Δ E and scattering angles Δφ in individual scattering events. The spectral shape produced with this routine is shown in figure <ref>.
It is used in a toy Monte Carlo simulation of the WGTS to evaluate the deconvolution methods described in the following sections.
The probability for an electron of kinetic energy E to lose a specific amount of energy Δ E in a single scattering event is described by the differential energy loss function dσ/ dΔ E. For our purpose, we normalize the function by the total inelastic scattering cross section σ_ tot, obtaining[The integral over the energy losses runs up to E/2 since the incoming electron and the secondary electron in an ionisation process (assuming E is larger than twice the ionisation energy) are identical quantum particles.]
f(Δ E) = 1/σ_ tot· dσ/ dΔ E with ∫^E/2_0 f(Δ E) dΔ E = 1 .
The total inelastic scattering cross section for 18.6 keV electrons off gaseous tritium is given by σ_ tot( T_2) = (3.40 ± 0.07)· 10^-18 cm^2 <cit.>.
The above mentioned code by F. Glück <cit.> for scattering of 18.6 keV electrons off hydrogen gives a total inelastic cross section of σ_ tot( H_2) = 3.7 · 10^-18 cm^2.
§ DECONVOLUTION METHOD
In the following sections we describe suitable mathematical methods to extract the energy loss function of 18.6 keV electrons in gaseous tritium from a series of measurements of the overall response function of the experiment at different column densities of the WGTS.
§.§ Response function
Figure <ref> displays simulated response functions of the KATRIN experiment at different column densities ρ d = 0 and
ρ_1 d < ρ_2 d < ρ_3 d assuming a mono-energetic electron source with narrow angular emission characteristics, i.e. pitch angles w.r.t. the magnetic field lines θ≤ O(1^∘).
Shown is the transmission probability as a function of the nominal surplus energy E_s = E - qU of the electrons, i.e., the difference between the setpoint energy E of the electron source and the retardation potential qU of the main spectrometer.
The response function at non-zero column density of the tritium source is given by a summation over contributions corresponding to n-fold (i.e. no, single, double, etc.) scattering of the electrons within the tritium source, weighted by the probabilities P_n for n-fold scattering
R(E_s) = P_0 · T_e(E_s) + P_1 · T_e(E_s) ⊗ f(Δ E) +
P_2 · T_e(E_s) ⊗ f(Δ E) ⊗ f(Δ E) + … .
Here T_e(E_s) is the experimental transmission function of the main spectrometer for the given electron source and f(Δ E) the sought after energy loss function neglecting small scattering angles.
Given that we only consider a small energy interval of up to 50 eV below the endpoint energy of 18.6 keV of the spectrum, the dependence of the energy loss function f(Δ E) on the kinetic energy of the electrons can be neglected.
The scattering probabilities are normalized such that ∑_n=0^∞ P_n = 1.
The experimental transmission function T_e(E_s) is determined from a measurement of the response function R(E_s) with an empty tritium source and hence without scattering. It is then equal to the analytical transmission function of the spectrometer T(E_s) convolved with a function S_e describing the energy spread and angular distribution of the electron source
T_e(E_s) = . R(E_s)|_ρ d=0 = T(E_s) ⊗ S_e .
In the simulations, the energy spread of the source is described by a Gaussian smearing of the energy setpoint with a width of σ_e = 0.2 eV. It is assumed that the electron source has a small angular divergence with starting angles θ_e ≤ 0.5^∘ := θ_e,max relative to the magnetic field direction at its location at the rear end of the WGTS. Within this narrow cone, the emission angles of the electrons are assumed to be isotropically distributed.
Suitable electron sources that emit single electrons at adjustable total energy and adjustable emission angle have been developed and tested within the KATRIN collaboration <cit.>.
The above mentioned numerical values for the energy and angular spread are compatible with the characteristics of the photo-electron source used during the commissioning of the KATRIN main spectrometer <cit.>.
The analytical transmission function of the spectrometer T(E_s) for such a source is given by the following relation <cit.>
T(E_s) =
  0   for E < qU,
  [1 − √(1 − (E−qU)/E · B_e/B_A)] / [1 − √(1 − E_⊥,A,max/E · B_e/B_A)]   for qU ≤ E ≤ qU + E_⊥,A,max,
  1   for qU + E_⊥,A,max < E,
where B_e is the magnetic field at the electron source, B_A the magnetic field at the analyzing plane of the spectrometer and E_⊥,A, max = Esin^2(θ_e, max)B_A/B_e is the maximum remaining transversal energy component in the analyzing plane.
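As a concrete illustration, the analytical transmission function above can be coded directly; this is a minimal sketch in which the field values and the maximum source angle are stand-in numbers (assumptions for illustration, not KATRIN design parameters).

```python
import numpy as np

def transmission(E_s, E=18600.0, B_e=0.25, B_A=3e-4, theta_max_deg=0.5):
    """Analytical MAC-E filter transmission T(E_s), with E_s = E - qU in eV.
    B_e and B_A (tesla) and theta_max_deg are illustrative stand-ins."""
    E_s = np.asarray(E_s, dtype=float)
    sin2 = np.sin(np.radians(theta_max_deg)) ** 2
    E_perp_max = E * sin2 * (B_A / B_e)              # max E_perp in analyzing plane
    arg = np.clip(1.0 - (E_s / E) * (B_e / B_A), 0.0, 1.0)
    T = (1.0 - np.sqrt(arg)) / (1.0 - np.sqrt(1.0 - sin2))
    return np.where(E_s < 0.0, 0.0, np.where(E_s > E_perp_max, 1.0, T))
```

The function is continuous at E_s = E_⊥,A,max, where the piecewise expression reaches 1 by construction.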
Defining
ϵ_0(E_s) = T_e(E_s)
ϵ_1(E_s) = T_e(E_s) ⊗ f(Δ E)
ϵ_2(E_s) = T_e(E_s) ⊗ f(Δ E) ⊗ f(Δ E)
…
as the n-fold scattering functions we can rewrite equation <ref> to obtain
R(E_s)= P_0 ·ϵ_0(E_s) + P_1 ·ϵ_1(E_s) + P_2 ·ϵ_2(E_s) + … .
If we manage to determine the single scattering function ϵ_1(E_s) from measured response functions we can, with the knowledge of T_e(E_s), extract the energy loss function f(Δ E) using suitable deconvolution methods.
§.§ Scattering probabilities
The mean free path of the electrons within the tritium gas inside the WGTS can be expressed in terms of a mean free column density (ρ d)_ free which the electrons pass before an interaction and which is calculated taking the inverse of the total scattering cross section (ρ d)_ free = 1/σ_ tot.
The actual column density seen by an electron traversing the WGTS at an angle θ relative to the symmetry axis, i.e. the magnetic field axis, is given by ρ d / cosθ. Neglecting possible scattering angles Δφ in the scattering processes for the moment, the mean number of expected scatterings is
μ(θ) = ρ d / [(ρ d)_free cosθ] = ρ d σ_tot / cosθ = μ_0 / cosθ .
The probability for an n-fold scattering is given by a Poissonian distribution:
P_n(μ(θ)) = μ^n(θ)/n!exp(-μ(θ)) with n = 0, 1, 2, … .
We have to take into account that electrons generated by the electron source follow an angular distribution which is, for our purpose, assumed to be isotropic within a narrow interval between 0^∘≤θ_e ≤θ_e, max.
If we further take into account that the magnetic field at the location of the electron source will be lower than the field within the WGTS, the starting angles have to be transformed according to
θ = arcsin( sinθ_e ·√(B_ WGTS/B_e)) ≈θ_e ·√(B_ WGTS/B_e) ,
resulting in angles θ of electron momenta relative to the magnetic field direction within the WGTS.
To obtain average scattering probabilities we weigh the values from equation <ref> with g(θ) = sinθ, corresponding to the probability function of an isotropic distribution, integrate over the given range of angles and normalize
P_n(μ_0) = ∫_0^θ_max g(θ) P_n(μ(θ)) dθ / ∫_0^θ_max g(θ) dθ
= [ ∫_0^θ_max sinθ · (μ_0/cosθ)^n/n! · exp(−μ_0/cosθ) dθ ] / (1 − cosθ_max) .
Equation <ref> delivers approximate values for the scattering probabilities, as it does not take into account changes in the direction of the electrons during scatterings. Secondly, this equation also assumes a homogeneous distribution of tritium molecules in the transverse direction within the WGTS.
Compared to scattering probabilities extracted from the simulations accounting for scattering angles in elastic and inelastic scattering described in section <ref>, the deviations from the results calculated with <ref> were found to be on the <10^-3 level, however. The small difference is due to the fact that the scattering angles for inelastic scattering in our energy loss range of interest (Δ E < 50 eV) are strongly forward peaked with a mean of Δφ=0.5^∘, and elastic scattering is a subdominant process (see figure <ref>, right).
To obtain more precise values for the scattering probabilities, a detailed simulation using a 3-dimensional description of the column density within the WGTS is required, which is beyond the scope of this paper.
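For orientation, the angle-averaged scattering probabilities of the equation above can be evaluated numerically; σ_tot and the design column density are taken from the text, while the maximum pitch angle inside the WGTS used below is an illustrative assumption.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad

def P_n(n, mu0, theta_max):
    """Angle-averaged n-fold scattering probability P_n(mu_0)."""
    def integrand(theta):
        mu = mu0 / np.cos(theta)
        return np.sin(theta) * mu**n / factorial(n) * np.exp(-mu)
    return quad(integrand, 0.0, theta_max)[0] / (1.0 - np.cos(theta_max))

sigma_tot = 3.40e-18         # cm^2, inelastic cross section for T2 (from text)
rho_d = 5.0e17               # cm^-2, design column density (from text)
mu0 = rho_d * sigma_tot      # ~1.7 mean scatterings on axis
theta_max = np.radians(2.0)  # assumed maximum pitch angle inside the WGTS
probs = [P_n(n, mu0, theta_max) for n in range(6)]
print(probs, sum(probs))     # sum approaches 1 as higher orders are included
```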
§.§ Extraction of the single scattering function
In order to determine the single scattering function ϵ_1(E_s) we have to perform measurements of the response function R(E_s) at different column densities. Neglecting multiple scattering events with more than three interactions of the electrons with the tritium gas inside the WGTS[The probability of a 4-fold scattering process with the maximum energy loss under consideration Δ E < 50 eV is below 8% at the maximum column density of 5· 10^17 cm^-2 used in the simulations.], we can set up a system of linear equations for measurements at three column densities labeled a, b and c:
R^a(E_s) - P_0^a · T_e(E_s) = P_1^a ·ϵ_1(E_s) + P_2^a ·ϵ_2(E_s) + P_3^a ·ϵ_3(E_s) ,
R^b(E_s) - P_0^b · T_e(E_s) = P_1^b ·ϵ_1(E_s) + P_2^b ·ϵ_2(E_s) + P_3^b ·ϵ_3(E_s) ,
R^c(E_s) - P_0^c · T_e(E_s) = P_1^c ·ϵ_1(E_s) + P_2^c ·ϵ_2(E_s) + P_3^c ·ϵ_3(E_s) ,
which we can write as a matrix equation:
R⃗ - P⃗_0 · T_e(E_s) = 𝐏·ϵ⃗ with 𝐏 = ([ P_1^a P_2^a P_3^a; P_1^b P_2^b P_3^b; P_1^c P_2^c P_3^c ]) .
Taking higher scattering orders into account would require additional measurements at further non-zero column densities and would increase the dimension of the system of linear equations to be solved. Whether the inclusion of only three scattering orders provides sufficiently accurate results will be evaluated in section <ref>.
Multiplying with the inverse of 𝐏, which is calculated using the Gauss-Jordan algorithm from the ROOT software package <cit.>, we obtain
ϵ⃗ = 𝐏^-1·(R⃗ - P⃗_0 · T_e(E_s))
from which we can calculate the single scattering function ϵ_1(E_s).
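The unfolding amounts to one dense 3×3 linear solve per retardation set point; the sketch below demonstrates it with placeholder scattering probabilities and synthetic response functions (the toy ε_n are simply shifted copies of a toy transmission curve), then recovers the inputs exactly.

```python
import numpy as np

N = 400
E_s = np.linspace(0.1, 60.0, N)              # surplus-energy grid (eV)
T_e = np.clip(E_s, 0.0, 1.0)                 # toy transmission function

def shifted(v, k):                           # toy n-fold scattering function:
    out = np.zeros_like(v)                   # transmission shifted by ~12.6 eV
    out[k:] = v[:-k]                         # per scattering order
    return out

eps_true = np.array([shifted(T_e, 84 * k) for k in (1, 2, 3)])

P = np.array([[0.30, 0.10, 0.02],            # placeholder P_n values for three
              [0.35, 0.20, 0.07],            # column densities a, b, c
              [0.33, 0.27, 0.14]])
P0 = np.array([0.57, 0.37, 0.24])            # zero-scattering probabilities

R = np.outer(P0, T_e) + P @ eps_true         # simulated response functions
eps = np.linalg.solve(P, R - np.outer(P0, T_e))
print(np.allclose(eps, eps_true))            # True: eps[0] is eps_1(E_s)
```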
§.§ Deconvolution of the energy loss function
As described in section <ref>, the single scattering function is the result of the convolution of the experimental transmission function of the spectrometer with the energy loss function. This convolution is calculated taking the integral
ϵ_1 (E_s) = T_e(E_s) ⊗ f(Δ E) = ∫^E/2_0 T_e(E_s-Δ E)f(Δ E) dΔ E .
In our case, where the values of the functions in question are only known at N equally distributed discrete measurement points defined by the applied retardation voltage U_i, the integral is replaced by a sum
ϵ_1 (E-qU_i) = ∑^N-1_j=0 T_e(E-qU_i-Δ E_j)f(Δ E_j) .
The latter equation can be rewritten in N × N matrix form
ϵ⃗_⃗1⃗ = 𝐓_𝐞·f⃗ ,
where the 𝐓_𝐞 matrix is constructed from the discrete transmission function T_e(E_s,i = E-qU_i) as[The zeroes in the right upper corner of 𝐓_𝐞 are caused by the transmission condition E-qU_i-Δ E_j ≥ 0.]
𝐓_𝐞 = ( [ T_e(E_s,0) 0 ⋯ 0; T_e(E_s,1) T_e(E_s,0) 0 ⋯ 0; T_e(E_s,2) T_e(E_s,1) T_e(E_s,0) 0 ⋯ 0; ⋮ ⋮ ⋮ ⋮ ⋮; T_e(E_s,N-1) T_e(E_s,N-2) ⋯ T_e(E_s,0) ]) .
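Numerically, 𝐓_e is a lower-triangular Toeplitz matrix and can be assembled in one call; the grid below is a toy stand-in for the measured retardation-voltage set points.

```python
import numpy as np
from scipy.linalg import toeplitz

N = 400
T_e = np.clip(np.linspace(0.1, 60.0, N), 0.0, 1.0)       # sampled T_e(E_s,i)
first_row = np.concatenate(([T_e[0]], np.zeros(N - 1)))  # zeros above diagonal
Te_mat = toeplitz(T_e, first_row)                        # lower-triangular matrix
# eps1 = Te_mat @ f reproduces the discrete convolution written above
```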
One could now try to solve equation <ref> by multiplying with the inverse of the 𝐓_𝐞 matrix. The latter, however, is close to being singular and cannot easily be inverted numerically. We therefore have to apply more sophisticated methods to deconvolve the energy loss function from the matrix equation. In the following two methods are applied to the problem: the so-called Singular Value Decomposition (SVD) <cit.> and the iterative Stabilized Biconjugate Gradient method <cit.>.
§.§.§ Singular Value Decomposition
The Singular Value Decomposition (SVD) is a method to deal with systems of linear equations given by a matrix equation 𝐀·x⃗ = b⃗ that are either singular or numerically very close to singular and is able to provide useful, although not necessarily unambiguous, solutions to the given problem.
It is based on the theorem that any M× N matrix 𝐀 with M≥ N can be written as a product of an M× N column-orthogonal matrix 𝐔, an N× N diagonal matrix 𝐖 whose elements are the so-called singular values w_i ≥ 0, and the transpose of an N× N orthogonal matrix 𝐕 <cit.>:
𝐀 = 𝐔·𝐖·𝐕^T = 𝐔 · diag(w_1, w_2, …, w_N) · 𝐕^T .
As 𝐔 and 𝐕 are orthogonal the inverse of equation <ref> can be written as
𝐀^-1 = 𝐕·𝐖^-1·𝐔^𝐓 = 𝐕· [diag(1/w_i)] ·𝐔^𝐓 .
Problems arise when some of the singular values w_i are either zero or so small that their values are dominated by numerical rounding errors. Using the SVD method it is still possible to construct an approximate solution vector x⃗ that will minimize the residual r given by
r ≡ |𝐀·x⃗ - b⃗| .
For that purpose all diagonal elements of 𝐖^-1 where the singular values w_i are below a chosen threshold value w_ thr are set to zero, thereby removing infinite or problematically large matrix elements. The matrix constructed using the modified
diagonal matrix 𝐖̃^-1 is the so-called pseudoinverse matrix Ã^-1, and the solution vector x⃗ is then given by
x⃗≈Ã^-1·b⃗ = 𝐕·𝐖̃^-1·𝐔^𝐓·b⃗ ,
which, translated to our original problem of deconvoluting the energy loss function from the measured single scattering function (see equation <ref>), becomes
f⃗≈𝐓̃_𝐞^-1·ϵ⃗_⃗1⃗ = 𝐕·𝐖̃^-1·𝐔^𝐓·ϵ⃗_⃗1⃗ .
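A minimal implementation of the truncated pseudoinverse, with the singular-value threshold expressed as a fraction of the largest singular value as in the text, might look as follows.

```python
import numpy as np

def svd_deconvolve(Te_mat, eps1, w_thr_frac=0.003):
    """Truncated-SVD solution f ~ V.W~^-1.U^T.eps1; singular values below
    w_thr_frac * max(w) are discarded (0.3% is the optimum found in Sec. 4.1)."""
    U, w, Vt = np.linalg.svd(Te_mat)
    w_inv = np.where(w > w_thr_frac * w.max(), 1.0 / w, 0.0)
    return Vt.T @ (w_inv * (U.T @ eps1))
```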
What remains to be settled is the optimal threshold value w_thr for suppression of the problematic singular values.
This can only be determined by investigating the influence of the deconvolved energy loss function on the extracted neutrino mass values in simulated neutrino mass runs of the KATRIN experiment. Such a study, applying a toy Monte Carlo simulation of the experiment, is presented in section <ref>.
§.§.§ Stabilized Biconjugate Gradient method
As an alternative to the SVD method we tested the so-called Stabilized Biconjugate Gradient (Bi-CGSTAB) method described by Sleijpen and Fokkema <cit.>.
The bi-conjugate gradient method iteratively solves linear sets of equations 𝐀·x⃗ = b⃗ where 𝐀 is an N× N matrix. In each iteration step, labeled k, the approximate solution x⃗_k is modified by a search correction that depends on the true residual r⃗_k = b⃗ − 𝐀·x⃗_k and a “shadow residual” r̃⃗_k
calculated using the transpose 𝐀^T.
The residuals are forced to converge by making r⃗_k orthogonal to the shadow residuals r̃⃗_j for j < k.
For an in-depth description of the algorithm we refer to reference <cit.>. In our simulations we use the implementation of the Bi-CGSTAB algorithm provided as part of the Meep software package for finite-difference time-domain simulations developed at the Massachusetts Institute of Technology (MIT) <cit.>. As for the SVD method, the energy loss function resulting from a deconvolution using the Bi-CGSTAB algorithm is evaluated in a toy Monte Carlo simulation of the experiment, as presented in section <ref>.
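In place of the Meep routine used here, a generic Bi-CGSTAB implementation such as the one in SciPy can be used to solve the same triangular system; the sketch below builds a toy system (all grid and shape parameters are illustrative assumptions) and reports the convergence flag and residual.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import bicgstab

N = 400
T_e = np.clip(np.linspace(0.1, 60.0, N), 0.0, 1.0)
Te_mat = toeplitz(T_e, np.concatenate(([T_e[0]], np.zeros(N - 1))))

x = np.linspace(0.0, 60.0, N)
f_true = np.exp(-0.5 * ((x - 13.0) / 1.5) ** 2)   # toy energy-loss peak
eps1 = Te_mat @ f_true

f_rec, info = bicgstab(Te_mat, eps1, maxiter=10_000)
print(info, np.abs(Te_mat @ f_rec - eps1).max())  # info == 0 on convergence
```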
§ EVALUATION USING TOY MONTE CARLO SIMULATION
In order to test the deconvolution methods described in section <ref>, we performed Monte Carlo simulations of response function measurements at different column densities. In these simulations we assumed a perfectly homogeneous gas distribution within the WGTS and used the model of F. Glück <cit.> to generate energy losses in scattering events of 18.6 keV electrons with molecular hydrogen. Table <ref> provides an overview of the input parameters of the simulation.
The simulated response functions are displayed in figure <ref> (left). From these response functions we can
extract the n-fold scattering functions as described in section <ref> with the results shown in figure <ref> (right). The single scattering function ϵ_1(E_s) is then the input for either one of the two deconvolution methods described in section <ref>.
§.§ SVD results
To optimize the result obtained for the energy loss function by the SVD method, we have to determine the optimum value of the threshold w_ thr below which the corresponding matrix elements of the inverse diagonal matrix 𝐖^-1 are discarded.
For that purpose we scan over a range of possible values for w_ thr and with each value calculate the deconvolved energy loss function f⃗_ SVD and the difference |f⃗_ mod - f⃗_ SVD| from the input energy loss model f⃗_ mod.
Figure <ref> displays on the left the result of this scan.
The threshold values are given in percent of the maximum singular value present in the diagonal matrix 𝐖.
Deconvolved energy loss functions obtained at threshold values of 0.2%, 0.3% and 0.6% are shown together with the input model. The suppression of low singular values in the SVD acts by damping numerical fluctuations in the deconvolved energy loss function. While the result obtained with w_ thr = 0.2% exhibits a significant amount of high-frequency noise, a value of w_ thr = 0.6% produces a smoother result at the expense of washing out structures observed in the input model.
Besides this smoothening of the deconvolved function we also note an oscillatory response to large spikes in the input model, that is best visible in the lower right plot of figure <ref>.
An optimum resemblence of the deconvolved function to the input model is found at a threshold value of w_ thr = 0.3%.
It should be noted that the inability of the SVD method to reproduce the fine structure of the energy loss function (see figure <ref>) in the interval 11 eV≤Δ E ≤ 16 eV at that threshold value does not matter, since KATRIN's transmission function for an isotropic electron source such as the tritium source will have a width of 0.93 eV. We will investigate the influence on the neutrino mass measurement in section 4.3.
§.§ Bi-CGSTAB results
Figure <ref> displays the result obtained for the energy loss function using the iterative Bi-CGSTAB algorithm described in section <ref>.
While the method does seem to reproduce some of the fine-grained structure related to inelastic electron excitations starting at Δ E ≥ 12 eV, it is obviously much noisier than the results obtained with the SVD method. To counteract the noise, we tried to filter the deconvolved function with a second-order low-pass Butterworth filter with a cut-off frequency of 1 eV^-1, with the result shown in green in figure <ref>. The filtered function does exhibit lower noise and still follows the general features of the input model.
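A possible realization of that filter step is sketched below; the grid spacing is an assumed value, and since the text does not state whether the filter was applied causally or forward-backward, the zero-phase variant is used here.

```python
import numpy as np
from scipy.signal import butter, filtfilt

dE = 0.15                                   # assumed grid spacing in eV
x = np.arange(0.0, 60.0, dE)
rng = np.random.default_rng(0)
f_noisy = np.exp(-0.5 * ((x - 13.0) / 1.5) ** 2) + 0.05 * rng.normal(size=x.size)

nyquist = 0.5 / dE                          # in eV^-1
b, a = butter(2, 1.0 / nyquist)             # 2nd-order low pass, 1 eV^-1 cut-off
f_smooth = filtfilt(b, a, f_noisy)          # zero-phase (forward-backward) pass
```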
§.§ Influence on the measured neutrino mass
To assess the influence of the deconvolution process on the determination of the neutrino mass from the experimental beta spectrum measured by the KATRIN experiment, a large number of such measurements have been simulated and analyzed using the different energy loss functions displayed in figures <ref> and <ref>. From the mean of the distribution of the extracted m_ν^2 values we can then estimate the systematic uncertainty caused by the deconvolution process.
The energy spectrum of nuclear β-decay can be calculated starting from Fermi's golden rule and has the following form <cit.> (with units ħ = c = 1):
dΓ/ dE = (G_F^2 cos^2θ_C / 2π^3) |M|^2 F(E,Z+1) p (E + m_e)
·∑_j 𝒫_j E_ν,j √(E_ν,j^2 − m_ν^2) Θ(E_ν,j − m_ν) ,
where G_F is the Fermi coupling constant, θ_C the Cabibbo angle and M the nuclear matrix element of the transition. The Fermi function F(E, Z+1) takes into account the final-state interaction of the emitted electron with the daughter nucleus of charge Z+1, and p(E + m_e) is the phase space factor of the outgoing electron. The product of the neutrino momentum and the neutrino energy E_ν,j = E_0 − E^*_j − E is the phase space factor of the emitted anti-neutrino, which shapes the spectrum near its endpoint. The neutrino phase space factor has to be summed over all final states E^*_j of the daughter molecule, which are populated with probabilities 𝒫_j. The inclusion of the Θ function in eq. <ref> ensures that E_ν,j − m_ν > 0.
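To see how the neutrino mass enters near the endpoint, the following sketch evaluates the toy phase-space factor for a single final state, with the Fermi function, matrix element and electron phase space (all slowly varying here) absorbed into an overall constant; the endpoint value is an assumption for illustration. The difference between the massless and massive curves is the mass signal probed by KATRIN.

```python
import numpy as np

def phase_space(E, E0=18575.0, m_nu=0.0):
    """Neutrino phase-space factor for a single final state (E*_j = 0);
    E0 is an assumed endpoint in eV, overall constants are dropped."""
    E_nu = E0 - E
    ps = E_nu * np.sqrt(np.clip(E_nu**2 - m_nu**2, 0.0, None))
    return np.where(E_nu >= m_nu, ps, 0.0)

E = np.linspace(18570.0, 18575.0, 6)                # last 5 eV below endpoint
print(phase_space(E) - phase_space(E, m_nu=0.35))   # shape of the mass signal
```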
The observable m_ν^2 that can be extracted from the spectral shape near the endpoint is defined by an incoherent sum over the neutrino mass eigenstates m_i weighted by the squared matrix elements of the U_PMNS mixing matrix <cit.> known from oscillation experiments:
m_ν^2 = ∑^3_i=1 | U_ei |^2 m_i^2 .
The simulation of the observed integral spectrum measured by KATRIN uses the electron spectrum described by equation <ref> and takes into account a number of experimental effects:
* The final state distribution of the ^3HeT^+ daughter molecules calculated by Saenz et al. <cit.> as shown in figure <ref> (left).
* Up to four-fold scattering of the electrons within the WGTS according to the same energy loss model as used in the simulations of the deconvolution process.
* The nominal transmission function of the main spectrometer.
* An experimental background rate of 10^-2 counts per second in the energy region of interest at the FPD.
* Statistical fluctuations according to an effective three years' worth of data, taken at the source's design column density of 5× 10^17/ cm^2 and with a measurement time distribution as described in the KATRIN Design Report <cit.>.
For the purpose of this study, a vanishing neutrino mass m_ν = 0 is assumed.
With these inputs, 1000 hypothetical KATRIN measurements were simulated and subsequently fitted, taking into account the same physical effects as in the simulations but using one of the deconvolved functions to describe the energy losses. The free parameters of the fit are (a schematic sketch of such a fit follows the list below):
* the squared neutrino mass ^2,
* the spectral endpoint energy E_0,
* the experimental background rate, and
* the overall amplitude of the spectrum.
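The following is a hedged sketch of such a four-parameter χ² fit; it is not the actual KATRIN analysis chain, and model_rate is a purely illustrative placeholder for the full integral spectrum model (final states, scattering, transmission) described above.

import numpy as np
from scipy.optimize import minimize

def model_rate(U_i, m2, E0):
    # Placeholder spectral shape near the endpoint -- illustrative only.
    eps = np.clip(E0 - U_i, 0.0, None)
    return eps * np.sqrt(np.clip(eps**2 - m2, 0.0, None))

def chi2(theta, U_i, counts, errors):
    m2, E0, bkg, amp = theta                      # the four free parameters
    pred = amp * model_rate(U_i, m2, E0) + bkg    # expected rate per voltage step
    return np.sum(((counts - pred) / errors) ** 2)

def fit_spectrum(U_i, counts, errors, theta0):
    res = minimize(chi2, theta0, args=(U_i, counts, errors), method="Nelder-Mead")
    return res.x   # best-fit (m_nu^2, E0, background, amplitude)

Repeating such a fit over many simulated spectra yields the m_ν^2 distributions discussed next.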
Figure <ref> (left) shows the resulting distribution of extracted m_ν^2 values obtained with the
energy loss function from the SVD deconvolution at a threshold value of w_thr = 0.3 %. The mean of the distribution is at μ = (0.0053 ± 0.0005) eV^2, yielding the systematic uncertainty of the deconvolution method, which is compatible with the KATRIN error budget for this systematic effect of 0.0075 eV^2 <cit.>. The systematic uncertainties corresponding to the use of the energy loss functions obtained with the different threshold values of the SVD algorithm and with the iterative Bi-CGSTAB algorithm (with and without filter), respectively, are compared in figure <ref> (right).
The energy loss functions deconvolved using the SVD algorithm result in significantly lower systematic errors than with the iterative Bi-CGSTAB algorithm and are, for all three threshold values tested, within the error budget defined for this uncertainty in the KATRIN design report <cit.>.
As a cross-check, the fits were also run using the input model itself for the energy loss function; the resulting m_ν^2 distribution was indeed centered around zero, with μ = (0.0003 ± 0.0005) eV^2.
§ SUMMARY AND OUTLOOK
A method to deconvolve the energy loss function of β-decay electrons in the tritium source of the KATRIN experiment from measurements of the response function at different column densities with a mono-energetic electron source has been developed and tested in simplified Monte Carlo simulations of the experiment. Two different algorithms to deconvolve the energy loss function from the single scattering function extracted from the measurements have been tested: firstly, the so-called Singular Value Decomposition (SVD), and secondly, the Stabilized Biconjugate Gradient (Bi-CGSTAB) method as an example of an iterative algorithm.
Both methods make it possible to obtain an approximation of the real energy loss function, with varying levels of detail and of numerical fluctuations in the deconvolved result. The numerical noise can be dampened in the case of the SVD algorithm by setting an appropriate threshold value for the suppression of small singular values, and in the case of the Bi-CGSTAB algorithm by filtering the deconvolved function. When applying the deconvolved energy loss function to the analysis of simulated integral spectra, we find that the SVD method delivers usable results that induce a systematic error below the error budget of 0.0075 eV^2 allocated for this contribution in the KATRIN design report. The Bi-CGSTAB method, however, significantly exceeds this limit, both for the direct and for the filtered result of the deconvolution.
An option to improve the result of the SVD deconvolution that was explored in simulations is to split the energy loss function into several energy intervals and perform a section-wise deconvolution of the data. In this way it is possible to choose lower threshold values in the regions of rotational and vibrational states and of inelastic electron excitations, where more structure is expected, and to dampen numerical fluctuations with higher threshold values between these regions and in the ionization tail, where we know that the curve is smooth.
With such an approach it was indeed possible to obtain a closer approximation of the fine structure observed in the input model of the energy loss function. However, the energy loss function deconvolved in this manner led to larger systematic errors in the fits of the neutrino mass spectra and was therefore discarded.
Further improvements of the deconvolved results are possible by increasing the statistics of the response function measurements.
In the simulations we have assumed that 10 million events are collected at each voltage step of the measurements. Assuming an event rate of 25 kHz, this would result in about 10 days of measurement time plus overhead, e.g., for switching between voltage settings during the measurements and for changing the column density of the WGTS between runs.
Moreover, one might consider including four-fold scattering in the extraction of the single scattering function (see section <ref>) to increase the precision of the subsequently deconvolved energy loss function. However, this too comes at the cost of increased measurement time, as it would require measuring at yet another, higher column density setting of the source.
Finally, one could investigate whether going beyond the present precision might be possible by applying Bayesian methods such as those used to reconstruct spectral functions in lattice QCD <cit.>.
§ ACKNOWLEDGMENTS
This work was supported by the German Federal Ministry of Education and Research (BMBF) under grant no. 05A11PM2 and 05A14PMA.
KV wishes to acknowledge support by the Helmholtz Association (Young Investigators Group VH-NG-1055).
§ REFERENCES
KAT04 J. Angrik et al., (KATRIN Collaboration),
KATRIN Design Report 2004,
http://bibliothek.fzk.de/zb/berichte/FZKA7090.pdfWissenschaftliche Berichte, FZ Karlsruhe 7090
WGTS M. Babutzka et al.,
“Monitoring of the properties of the KATRIN Windowless Gaseous Tritium Source”,
http://dx.doi.org/doi:10.1088/1367-2630/14/10/103046New J. Phys. 14 (2012) 103046
Pic92 A. Picard et al.,
A solenoid retarding spectrometer with high resolution and transmission for keV electrons,
http://dx.doi.org/10.1016/0168-583X(92)95119-CNucl. Inst. and Meth. B 63 (1992) 345-358
Ams15 J.F. Amsbaugh et al.,
Focal-plane detector system for the KATRIN experiment,
http://dx.doi.org/10.1016/j.nima.2014.12.116Nucl. Inst. and Meth. A 778 (2015) 40-60
Ase00 V.N. Aseev et al.,
Energy loss of 18 keV electrons in gaseous T_2 and quench condensed D_2 films,
http://dx.doi.org/10.1007/s100530050525Eur. Phys. J. D 10 (2000) 39-52
Gei64 J. Geiger,
Streuung von 25 keV-Elektronen an Gasen,
http://dx.doi.org/10.1007/BF01380873Zeitschrift für Physik 181 (1964) 413-425
Uls72 R.C. Ulsh et al.,
Bethe surface, elastic and inelastic differential cross sections, Compton profile, and binding effects for H_2 obtained by electron scattering with 25 keV incident electrons,
http://dx.doi.org/10.1063/1.1680755J. Chem. Phys. 60 (1974) 103
Glueck F. Glück, computer code scatter, private communication, 2008
Val11 K. Valerius et al.,
Prototype of an angular-selective photoelectron calibration source for the KATRIN experiment,
http://dx.doi.org/10.1088/1748-0221/6/01/P01002JINST 6 (2011) P01002
Bec14 M. Beck et al.,
An angular-selective electron source for the KATRIN experiment,
http://dx.doi.org/10.1088/1748-0221/9/11/P11020JINST 9 (2014) P11020
Beh16 J. Behrens et al.,
A pulsed, mono-energetic and angular-selective UV photo-electron source for the commissioning of the KATRIN experiment,
to be submitted to Eur. Phys. J. C
root R. Brun and F. Rademakers,
ROOT - An object oriented data analysis framework,
http://dx.doi.org/10.1016/S0168-9002(97)00048-XNucl. Inst. and Meth. A 389 (1997) 81-86
numrec W.H. Press, S.A. Teukolsky, W.T. Vetterling and B.P. Flannery,
Numerical Recipes. The Art of Scientific Computing,
http://numerical.recipes/3rd ed., New York: Cambridge University Press (2007)
Sle93 G.L.G. Sleijpen and D.R. Fokkema,
BiCGstab() for linear equations involving unsymmetric matrices with complex spectrum,
https://eudml.org/doc/118652Electronic Transactions on Numerical Analysis 1 (1993) 11-32
meep A.F. Oskooi et al.,
Meep: A flexible free-software package for electromagnetic simulations by the FDTD method,
http://dx.doi.org/10.1016/j.cpc.2009.11.008Computer Physics Communications 181 (2010) 687-702
Ott08 E.W. Otten and C. Weinheimer,
Neutrino mass limit from tritium decay,
http://dx.doi.org/10.1088/0034-4885/71/8/086201Rep. Prog. Phys. 71 (2008) 086201
PDG14 K.A. Olive et al. (Particle Data Group),
Review of Particle Physics,
http://dx.doi.org/10.1088/1674-1137/38/9/090001Chin. Phys. C 38 (2014) 090001
Sae00 A. Saenz, S. Jonsell and P. Froelich,
Improved Molecular Final-State Distribution of HeT^+ for the -Decay Process of T_2,
http://dx.doi.org/10.1103/PhysRevLett.84.242Phys. Rev. Lett. 84 (2000) 242
burnier13 Y. Burnier and A. Rothkopf,
Bayesian Approach to Spectral Function Reconstruction for Euclidian Quantum Field Theories,
http://dx.doi.org/10.1103/PhysRevLett.111.182003Phys. Rev. Lett. 111 (2013) 182003
|
http://arxiv.org/abs/1701.07702v2 | 20170126135305 | Stability and interacting $f(T,\mathcal{T})$ gravity with modified Chaplygin gas | ["T. Mirzaei Rezaei", "Alireza Amani"] | gr-qc | ["gr-qc", "hep-th"] |
st.t.mirzaei@iauamol.ac.ir
a.r.amani@iauamol.ac.ir
Faculty of Sciences, Department of Physics, Ayatollah Amoli Branch, Islamic Azad University,
Amol, Mazandaran, Iran
98.80.-k; 95.36.+x; 95.35.+d
In this paper, we study a model of interaction between f(T,𝒯) gravity and a modified Chaplygin gas in a flat FRW metric. We obtain the Friedmann equations in the framework of teleparallel gravity using the vierbein field. We consider a Universe dominated by components of cold matter, dark energy and modified Chaplygin gas, and write the corresponding continuity equation for each component separately. The dark energy EoS and the effective EoS are obtained as functions of redshift, the corresponding cosmological parameters are plotted against redshift, and the accelerated expansion of the Universe is investigated. Finally, the stability of the model is discussed by phase plane analysis.
Stability and interacting f(T,𝒯) gravity with modified Chaplygin gas
Alireza Amani
December 30, 2023
====================================================================
§ INTRODUCTION
The accelerated expansion of the Universe was first discovered in type Ia supernovae (SNe Ia) by Riess et al. <cit.>, and was subsequently confirmed by Perlmutter et al. <cit.> with 42 supernovae. The accelerated expansion was further confirmed by the cosmic microwave background (CMB) <cit.> and large scale structure (LSS) <cit.>. These results point to the existence of a mysterious energy component called dark energy. Understanding the nature of dark energy is one of the biggest challenges of modern theoretical physics. Dark energy is a hypothetical form of energy that constitutes about three-quarters of the total energy of the Universe, and it requires a strong negative pressure to explain the observed accelerated expansion. Using the Einstein field equations, the accelerated expansion can be described by a small positive cosmological constant. Observations also show that the geometry of the Universe is very close to flat space-time <cit.>. Numerous dark energy models have been studied in isotropic space-time, such as scalar fields (quintessence, phantom, tachyon, etc.) <cit.>, modified gravity models <cit.>, holographic models <cit.>, interacting models <cit.>, bouncing models <cit.> and braneworld models <cit.>.
We now aim to describe dark energy through modified gravity theory. This has been studied in the literature under the headings of f(R) gravity <cit.>, f(T) gravity <cit.>, f(𝒯) gravity <cit.>, f(R,T) gravity <cit.>, f(R,𝒯) gravity <cit.> and f(G) gravity <cit.>, in which R, T, 𝒯 and G are the Ricci scalar curvature, the torsion scalar, the trace of the matter energy-momentum tensor and the Gauss-Bonnet term, respectively. The cosmological solutions of these models can provide an alternative explanation for the accelerated expansion of the Universe.
In general relativity, the Einstein-Hilbert action contains the curvature term that describes gravity; in teleparallel gravity, the corresponding action instead contains a torsion term, first proposed by Einstein <cit.>. He introduced the mathematical structure of distant parallelism through a tetrad or vierbein field in an attempt to unify electromagnetism and gravity. This amounts to replacing the Levi-Civita connection of general relativity with the Weitzenböck connection of teleparallelism <cit.>; that is, the R of general relativity is replaced by T in teleparallelism. The theory is modified by replacing T with an arbitrary function f(T) in the teleparallel action, yielding the so-called f(T) gravity theory.
A theory of f(T,𝒯) gravity has recently been proposed, which has several benefits in comparison with other models of modified gravity <cit.>. One of its benefits is much simpler numerical computation, and it is also compatible with recent observational data describing the accelerated expansion of the Universe. Since f(T,𝒯) gravity is a good candidate for the source of dark energy, it is a viable alternative to the standard gravity model. These points motivate the present work.
Another interesting model for the description of dark energy is the Chaplygin gas, a fluid with negative pressure. The Chaplygin gas model has been extended to the generalized Chaplygin gas and the modified Chaplygin gas, the latter being a combination of a barotropic fluid and a Chaplygin gas. The advantages of the modified Chaplygin gas model are that it is consistent with observational data and that it unifies dark matter and dark energy <cit.>. Other Chaplygin gas models, such as the viscous, cosmic and extended Chaplygin gas, have been studied in Refs. <cit.>; in this paper, however, we use the modified Chaplygin gas model, which is less complicated.
We would now like to construct a new model based on the interaction of f(T,𝒯) gravity with the modified Chaplygin gas. This model can be viewed as a development of several previous works and is also consistent with observational data.
This paper is organized as follows:
In Sec. <ref>, we review the general form of f(T,𝒯) gravity in the FRW metric. In Sec. <ref>, we consider the interaction between f(T,𝒯) gravity model and the modified Chaplygin gas, and also obtain the Friedmann equations and equation of state (EoS). In Sec. <ref>, we investigate the stability of our model by phase plane analysis. Finally, in Sec. <ref> we will give result and conclusion for our model.
§ F(T,𝒯) GRAVITY MODEL
We start with a modified theory of gravity whose action contains an arbitrary function of the torsion scalar T and the trace of the matter energy-momentum tensor 𝒯. The corresponding action is
S=∫ e [(T+f(T,𝒯))/2 k^2+ℒ_m] d^4x,
where e=det( e^i_ μ) and ℒ_m is the matter Lagrangian density. It should be mentioned that the vierbein field e^i_ μ is related to the metric tensor by g_μν=η_ij e^i_ μ e^j_ ν, in which η_ij=diag(-1,+1,+1,+1).
In this paper, the Universe is taken to be homogeneous and isotropic, with the flat FRW metric
ds^2=-dt^2+a^2(t)(dx^2+dy^2+dz^2),
where a(t) is the scale factor. In accordance with metric (<ref>), the vierbein field can be written as e^i_μ=e^μ_i=diag(1,a,a,a). We note that the Levi-Civita connection is given by
Γ _μν^ρ=1/2g_^ρσ( ∂_νg_σμ+∂_μ g_σν - ∂_σg_μν).
The Ricci tensor R_μν, the superpotential tensor S_ρ^μν and the torsion tensor T^λ_μν are written as follows, respectively:
R_μν = ∂_λΓ _μν^λ-∂_μΓ _λν^λ+Γ _μν^λΓ _ρλ^ρ-Γ _νρ^λΓ _μλ^ρ,
S_ρ^μν =1/2( K^μν_ρ+δ _ρ^μT^αν_α-δ _ρ^νT^αμ_α),
T^λ_μν=Γ^λ_νμ-Γ^λ_μν=e_A^λ( ∂_μe_ν^A-∂_νe_μ^A),
where
K^μν_ρ=-1/2( T^μν_ρ-T^νμ_ρ-T_ρ^μν) is the contorsion tensor.
By varying the action (<ref>) with respect to the vierbein field, we obtain the field equations as follows:
S_μ^νρ f_TT ∂_ρT+[ e^-1e^i_μ∂_ρ( ee_i^μS_α^νλ)+T^α_λμS_α^νλ]( 1+f_T)+1/4δ _μ^ν T = S_μ^νρ f_T𝒯 ∂_ρ𝒯 +f_𝒯( Θ _μ^ν+δ _μ^νp/2)-1/4δ _μ^ν f(T,𝒯 )+k^2/2Θ _μ^ν,
where subscripts on f denote derivatives with respect to the corresponding arguments, and Θ _μ^ν is the energy-momentum tensor of the matter.
The torsion scalar and the trace of the energy-momentum tensor are given by, respectively:
T=T^λ_μνS_λ^μν=-6H^2,
Θ =𝒯 =(ρ_m-3p_m),
where H, ρ_m and p_m are the Hubble parameter, the matter energy density and the matter pressure, respectively.
§ INTERACTION BETWEEN F(T,𝒯) GRAVITY AND MODIFIED CHAPLYGIN GAS
Recently, cosmological models have been designed using a kind of perfect fluid, the Chaplygin gas, with which the Universe is assumed to be filled. In this paper, the modified Chaplygin gas model is used; its equation of state can be expressed as follows:
p_c=Aρ_c-B/ρ_c^α,
where A and B are positive constants, ρ_c and p_c are the energy density and pressure of the modified Chaplygin gas, respectively, and 0 < α < 1.
Here we consider a Universe dominated by components of matter, dark energy and Chaplygin gas. In that case, the effective energy density and the effective pressure of the Universe are written in terms of these three components as
ρ_eff=ρ_m+ρ_DE+ρ_c,
p_eff=p_m+p_DE+p_c,
where the indices m, DE and c denote the components of cold matter, dark energy and modified Chaplygin gas, respectively.
Since the Universe is taken to contain a perfect fluid, the corresponding continuity equation can be written in the following form:
ρ̇_eff+3H(ρ_eff+p_eff)=0.
Since there is an energy flow between the contents of the Universe, we write the continuity equations of the components of the Universe separately as
ρ̇_m+3H(ρ_m+p_m)=Q'-Q,
ρ̇_c+3H(ρ_c+p_c)=Q,
ρ̇_DE+3H(ρ_DE+p_DE)=-Q',
where Q and Q' are interaction terms between the Universe components. We consider an energy flow only between the components of dark energy and modified Chaplygin gas, so that Q=Q'. The interaction term is taken as Q=Q'=3b^2Hρ_c, in which b^2 is the coupling parameter or transfer strength. In this case, the solutions of Eqs. (<ref>) and (<ref>) are as follows:
ρ_m=ρ_m_0a^-3(1+ω_m),
ρ_c=[ B/η+c_0 a^-3η (α +1)]^1/α +1,
where
η =1+b^2+A.
In this paper, in order to solve the Friedmann equations while avoiding undue complexity, we consider a simple particular f(T,𝒯) model that is linear in T and 𝒯, namely f(T,𝒯)=T+γ 𝒯. Therefore, Friedmann equation (<ref>) reduces to
3H^2=-γ/2𝒯 +2ρ_m+p_m,
-3H^2-2Ḣ =p_m+γ/2𝒯.
The above equations can now be written in standard Friedmann form:
3H^2=ρ_eff,
-3H^2-2Ḣ =p_eff.
From Eqs. (<ref>) and (<ref>), we can find the energy density and pressure of dark energy in the following form:
ρ_DE=( 1+ω_m-γ/2(1-3ω_m) )ρ_m-ρ_c,
p_DE=γ/2( 1-3ω_m)ρ_m-p_c,
where ω_m is the EoS parameter of matter. Also, we define the effective EoS of the perfect fluid and the EoS of dark energy as
ω_eff=p_eff/ρ_eff=-1-2Ḣ /3H^2,
ω_DE=p_DE/ρ_DE.
The obtained Friedmann equations are now written in terms of the redshift parameter (z=a_0/a-1); in that case, Eqs. (<ref>) and (<ref>) become
ρ_DE=ρ_m_0( 1+ω_m-γ/2(1-3ω_m) )( a_0/1+z)^-3(1+ω_m)- [ B/η+c_0 ( a_0/1+z)^-3η (α +1)]^1/α +1,
p_DE=γ/2( 1-3ω_m)ρ_m_0( a_0/1+z)^-3(1+ω_m)- A[ B/η+c_0 ( a_0/1+z)^-3η (α +1)]^1/α +1
+B/[ B/η+c_0 ( a_0/1+z)^-3η (α +1)]^α/α +1.
Substituting Eqs. (<ref>) into Eq. (<ref>) and plotting ω_DE versus z, we see that ω_DE at the present time (z=0) equals -1.046, which indicates that the Universe is expanding at an accelerating rate; this is consistent with <cit.>.
Drawing the energy density ρ_DE and pressure p_DE of dark energy with respect to redshift (Fig. <ref>), we see that ρ_DE is positive and p_DE is negative over the range of z shown; these features confirm the accelerated expansion of the Universe. Figs. <ref> and <ref> also show that the free parameters of the modified Chaplygin gas (A and B) and the interaction term (b) play a very important role in confirming the accelerated expansion of the Universe (the obtained results are consistent with <cit.>).
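The quantities plotted in these figures follow directly from the expressions above. The sketch below evaluates ω_DE(z) for the linear model f(T,𝒯)=T+γ𝒯; the parameter values are illustrative placeholders only, not the fitted values used for the figures.

import numpy as np

def omega_DE(z, rho_m0=1.0, w_m=0.0, gamma=0.1, A=0.3, B=1.0,
             b2=0.04, alpha=0.5, c0=1.0, a0=1.0):
    eta = 1.0 + b2 + A
    a = a0 / (1.0 + z)
    rho_m = rho_m0 * a**(-3.0 * (1.0 + w_m))
    rho_c = (B / eta + c0 * a**(-3.0 * eta * (alpha + 1.0)))**(1.0 / (alpha + 1.0))
    rho_DE = (1.0 + w_m - 0.5 * gamma * (1.0 - 3.0 * w_m)) * rho_m - rho_c
    p_c = A * rho_c - B / rho_c**alpha          # modified Chaplygin gas EoS
    p_DE = 0.5 * gamma * (1.0 - 3.0 * w_m) * rho_m - p_c
    return p_DE / rho_DE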
§ STABILITY ANALYSIS
In this section, we investigate the stability conditions of the f(T,𝒯) gravity model with modified Chaplygin gas by
the method of phase plane analysis. To this end, we introduce the following new variables:
x= ρ_c/3H^2, y=ρ_m/3H^2 , Z=ρ_DE/3H^2, Δ =B/3H^2ρ _c^α,
Differentiating the parameters x, y and Z with respect to ln a and using the continuity equations (<ref>), we obtain
x'=3b^2x-3x-3Ax+3Δ -2xḢ/H^2,
y'=-3(1+ω_m)y-2Ḣ/H^2y,
Z'=-3b^2x-3Z-3/2γ (1-3ω_m)y+3Ax-3Δ -2ZḢ/H^2,
where the prime denotes the derivative with respect to ln a. We can find the term Ḣ/H^2 from Eq. (<ref>):
Ḣ/H^2=-3/2-3/2ω_my-3/4γ y+9/4γω_my.
Substituting (<ref>) into (<ref>), the following relationships are deduced:
x'=3b^2x-3Ax+3Δ +3ω_mxy+3/2γ xy-9/2γω_mxy,
y'=-3ω_my+3ω_my^2+3/2γy^2-9/2γω_my^2,
Z'=-3b^2x-3/2γ y+9/2γω_my+3Ax-3Δ +3ω_myZ+3/2γ yZ-9/2γω_myZ.
We note that the first Friedmann equation is rewritten in terms of the aforesaid new variables as
x+y+Z=1.
Now we describe the stability conditions of the model by obtaining the critical points, the so-called fixed points. These fixed points describe the asymptotic behavior and depend on the parameters of the f(T,𝒯) model and the modified Chaplygin gas. We obtain the fixed points in terms of the corresponding parameters by setting x'=0 and y'=0; the results are presented in Table <ref>. As can be seen, two fixed points are obtained, named fp1 and fp2. The significance of the fixed points lies in studying the stability of the theory and the phase transition.
In order to describe the properties of the fixed points, we consider linear perturbations x' → x' + δ x' and y' → y' + δ y' of Eqs. (<ref>) and (<ref>). These perturbations yield two eigenvalues, λ_1 and λ_2, for each fixed point. We note that stability requires both eigenvalues to be negative. The eigenvalues of the two critical points are detailed in Table <ref>.
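As a numerical sketch of this procedure, the (x, y) system above can be solved for its fixed points and the eigenvalues of its Jacobian computed by finite differences. Treating Δ as a constant parameter at the fixed point, and the parameter values themselves, are simplifying assumptions made here for illustration.

import numpy as np
from scipy.optimize import fsolve

def flow(v, w_m=-0.5, gamma=0.1, A=0.3, b2=0.04, Delta=0.0):
    x, y = v
    xp = (3*b2*x - 3*A*x + 3*Delta + 3*w_m*x*y
          + 1.5*gamma*x*y - 4.5*gamma*w_m*x*y)
    yp = -3*w_m*y + 3*w_m*y**2 + 1.5*gamma*y**2 - 4.5*gamma*w_m*y**2
    return np.array([xp, yp])

def eigenvalues_at(v0, h=1e-6, **pars):
    J = np.zeros((2, 2))
    for j in range(2):                 # numerical Jacobian, central differences
        e = np.zeros(2); e[j] = h
        J[:, j] = (flow(v0 + e, **pars) - flow(v0 - e, **pars)) / (2*h)
    return np.linalg.eigvals(J)

fp = fsolve(flow, [0.5, 0.5])          # fixed point near an initial guess
print(fp, eigenvalues_at(fp))          # stability: both eigenvalues negative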
To characterize the fixed points further, the EoS parameter and the deceleration parameter are found as
ω_eff=1/2(2 ω_m +γ-3 γω_m ) y,
q=-1-Ḣ/H^2=1/2+3/4(2 ω_m+γ -3 γω_m) y.
The corresponding results are given in Table <ref>. We note that an accelerated Universe requires q < 0. Tables <ref> and <ref> show that the condition ω_m < -1/3 yields both an accelerated Universe and stability.
§ CONCLUSION
In this paper, we studied the f(T,𝒯) model in an FRW background, in which T and 𝒯 are the torsion scalar and the trace of the matter energy-momentum tensor, respectively. We then coupled the model to a modified Chaplygin gas; that is, the effective energy density and the effective pressure of the Universe were written in terms of the three components of matter, Chaplygin gas and dark energy. Since the Universe is treated as a perfect fluid, the effective continuity equation was separated into three continuity equations for the Universe components. We supposed an energy flow to exist only between the components of dark energy and modified Chaplygin gas, with interaction term Q = 3b^2 H ρ_c. The arbitrary function f(T,𝒯) was then taken to be linear in T and 𝒯, and the Friedmann equations were written in terms of redshift. The dark energy cosmological parameters, such as the energy density, pressure and EoS parameter, were drawn with respect to redshift. The corresponding figures showed that the Universe is undergoing accelerated expansion, and that the EoS parameter is consistent with observational data.
Next, we investigated the stability conditions of the model using phase plane analysis. We obtained the fixed points, which depend on the parameters of the model and of the Chaplygin gas. We considered linear perturbations around the fixed points and obtained the condition for accelerated expansion by calculating the corresponding eigenvalues.
Riess_1998
A. G. Riess, A. V. Filippenko, P. Challis, A. Clocchiatti, A. Diercks, P. M. Garnavich, R. L. Gilliland, and et al,
The Astronomical Journal 116, no. 3 (1998): 1009.
Perlmutter_1999
S. Perlmutter, G. Aldering, G. Goldhaber, R. A. Knop, P. Nugent, P. G. Castro, S. Deustua, and et al,
The Astrophysical Journal 517, no. 2 (1999): 565.
Bennett_2003
C. L. Bennett, M. Halpern, G. Hinshaw, N. Jarosik, A. Kogut, M. Limon, S. S. Meyer et al,
The Astrophysical Journal Supplement Series 148, no. 1 (2003): 1.
Tegmark_2004
M. Tegmark, M. A. Strauss, M. R. Blanton, K. Abazajian, S. Dodelson, H. Sandvik, X. Wang et al,
Physical Review D 69, no. 10 (2004): 103501.
Weinberg-1989
S. Weinberg,
Reviews of modern physics 61, no. 1 (1989): 1.
Caldwell-2002
R. R. Caldwell,
Physics Letters B, 545(1):23, 2002.
Amani-2011
A. R. Amani,
International Journal of Theoretical Physics, 50(10):3078, 2011.
Sadeghi1-2009
J. Sadeghi, and A. R. Amani,
International Journal of Theoretical Physics, 48(1):14, 2009.
Setare-2009
M. R. Setare, J. Sadeghi, and A. R. Amani,
Physics Letters B, 673(4):241, 2009.
Setare1-2009
M. R. Setare, J. Sadeghi, and A. R. Amani,
International Journal of Modern Physics D, 18, no. 08 (2009): 1291-1301.
Battye-2016
R. A. Battye and F. Pace,
Physical Review D 94, no. 6 (2016): 063513.
Li-2012
M. Li, T. Qiu, Y. Cai and X. Zhang,
Journal of Cosmology and Astroparticle Physics 2012, no. 04 (2012): 003.
Khurshudyan-2014
M. Khurshudyan, E. Chubaryan and B. Pourhassan,
International Journal of Theoretical Physics 53, no. 7 (2014): 2370-2378.
Khurshudyan-2015
M. Khurshudyan, B. Pourhassan, R. Myrzakulov and S. Chattopadhyay,
Astrophysics and Space Science 356, no. 2 (2015): 383-391.
Sadeghi-2013
J. Sadeghi, M. Khurshudyan, A. Movsisyan and H. Farahani,
Journal of Cosmology and Astroparticle Physics 2013, no. 12 (2013): 031.
Sadeghi-2014
J. Sadeghi, M. Khurshudyan and H. Farahani,
International Journal of Modern Physics D (2014): 1650108.
Pourhassan-2014
B. Pourhassan and J. Naji,
International Journal of Modern Physics D (2014): 1750012.
MKhurshudyan11-2015
M. Khurshudyan and B. Pourhassan,
International Journal of Theoretical Physics 54, no. 9 (2015): 3251-3267.
Banijamali1-2016
A. Banijamali and E. Ghasemi,
International Journal of Theoretical Physics 55, no. 8 (2016): 3752-3760.
Banijamali2-2012
A. Banijamali and B. Fazlpour,
General Relativity and Gravitation 44, no. 8 (2012): 2051-2061.
JSadeghi-2015
J. Sadeghi and H. Farahani,
MKhurshudyan1-2015
M. Khurshudyan, A. Pasqua, J. Sadeghi and H. Farahani,
Chinese Physics Letters 32, no. 10 (2015): 109501.
Behrouz-2017
N. Behrouz, K. Nozari and N. Rashidi,
Physics of the Dark Universe 15 (2017): 72-81.
Amani1-2015
A. R. Amani and S. L. Dehneshin,
Canadian Journal of Physics 93.12 (2015): 1453-1459.
Iorio-2016
L. Iorio, M. L. Ruggiero, N. Radicella and E. N. Saridakis,
Physics of the Dark Universe 13 (2016): 111-120.
Faraoni-2016
V. Faraoni,
Physics of the Dark Universe 11 (2016): 11-15.
Khurshudyan1-2014
M. Khurshudyan, B. Pourhassan and A. Pasqua,
Canadian Journal of Physics 93, no. 4 (2014): 449-455.
Banijamali-2016
A. Banijamali, B. Fazlpour, R. Rouhollahi and M. Gholamzadeh,
Chinese Journal of Physics 54, no. 1 (2016): 97-103.
Sadeghi1-2016
J. Sadeghi, B. Pourhassan, A. S. Kubeka and M. Rostami,
International Journal of Modern Physics D 25, no. 07 (2016): 1650077.
Sadeghi2-2016
J. Sadeghi and H. Farahani,
Physics Letters B 751 (2015): 89-95.
Banijamali3-2012
A. Banijamali and B. Fazlpour,
Astrophysics and Space Science 340, no. 2 (2012): 399-406.
Wei-2009
H. Wei,
Communications in Theoretical Physics, 52(4):743, 2009.
Amani1-2011
A. R. Amani, J. Sadeghi, H. Farajollahi, and M. Pourali,
Canadian Journal of Physics, 90(1):61, 2011.
Amani-2015
A. R. Amani, and A. Samiee-Nouri,
Communications in Theoretical Physics, 64(4):485, 2015; arXiv preprint arXiv:1410.4172 (2014).
Li-2004
M. Li,
Physics Letters B 603, no. 1 (2004): 1-5.
Campo-2011
S. Del Campo, J. C. Fabris, R. Herrera and W. Zimdahl,
Physical Review D 83, no. 12 (2011): 123006.
Hu-2015
Y. Hu, M. Li, N. Li and Z. Zhang,
Journal of Cosmology and Astroparticle Physics 2015, no. 08 (2015): 012.
Fayaz-2015
V. Fayaz, H. Hossienkhani, A. Pasqua, M. Amirabadi, and M. Ganji,
The European Physical Journal Plus 130, no. 2 (2015): 1-12.
Saadat1-2013
H. Saadat,
International Journal of Theoretical Physics 52, no. 3 (2013): 1027-1032.
Banijamali3-2011
A. Banijamali, M. R. Setare and B. Fazlpour,
International Journal of Theoretical Physics 50, no. 10 (2011): 3275-3283.
Amani-2013
A. R. Amani, C.Escamilla-Rivera, and H. R. Faghani,
Phys. Rev. D 88:124008, 2013.
Amani-2014
A. R. Amani, and B. Pourhassan,
International Journal of Geometric Methods in Modern Physics, 11(08):1450065, 2014.
Naji-2014
J. Naji, B. Pourhassan, and A. R. Amani,
International Journal of Modern Physics D, 23, no. 02 (2014): 1450020.
Morais-2017
J. Morais, M. Bouhmadi-Lopez, K. Sravan Kumar, J. Marto and Y. Tavakoli,
Physics of the Dark Universe 15 (2017): 7-30.
KhurshudyanB-2014
M. Khurshudyan, B. Pourhassan and E. O. Kahya,
International Journal of Geometric Methods in Modern Physics 11, no. 06 (2014): 1450061.
KhurshudyanJ-2014
M. Khurshudyan, J. Sadeghi, M. Hakobyan, H. Farahani and R. Myrzakulov,
The European Physical Journal Plus 129, no. 6 (2014): 119.
Zhang1-2017
Y. Zhang,
Physics of the Dark Universe 15 (2017) 82.
Bouhmadi1-2016
M. Bouhmadi-Lopez, J. Morais and A. Zhuk,
Physics of the Dark Universe 14 (2016): 11-20.
KhurshudyanJ1-2015
M. Khurshudyan, J. Sadeghi, A. Pasqua, S. Chattopadhyay, R. Myrzakulov and H. Farahani,
International Journal of Theoretical Physics 54, no. 3 (2015): 749-760.
KhurshudyanJ2-2014
M. Khurshudyan, J. Sadeghi, M. Hakobyan, H. Farahani and R. Myrzakulov,
The European Physical Journal Plus 129, no. 6 (2014): 119.
SadeghiKhurshudyanJ-2014
J. Sadeghi, M. Khurshudyan, M. Hakobyan and H. Farahani,
International Journal of Theoretical Physics 53, no. 7 (2014): 2246-2260.
SadeghiKhurshudyanJ1-2014
J. Sadeghi, B. Pourhassan and Z. Abbaspour Moghaddam,
International Journal of Theoretical Physics 53, no. 1 (2014): 125-135.
Sadeghi-2009
J. Sadeghi, F. Milani, and A. R. Amani,
Modern Physics Letters A 24, no. 29 (2009): 2363-2376.
Sadeghi-2010
J. Sadeghi, M. R. Setare, A. R. Amani, and S. M. Noorbakhsh,
Physics Letters B 685, no. 4 (2010): 229-234.
Amani-2016
A. R. Amani,
International Journal of Modern Physics D 25, no. 06 (2016): 1650071.
Singh-2016
T. Singh, R. Chaubey and A. Singh,
Canadian Journal of Physics 94, no. 7 (2016): 623-627.
Sahni-2003
V. Sahni, and Y. Shtanov,
Journal of Cosmology and Astroparticle Physics,
2003(11):014, 2003.
Setare-2008
M. R. Setare, J. Sadeghi, and A. R. Amani,
Physics Letters B, 660(4):299, 2008.
Brito-2015
G. P. de Brito, J. M. Hoff da Silva, P. Michel LT da Silva, and A. de Souza Dutra,
International Journal of Modern Physics D, 24, no. 11 (2015): 1550089.
Nojiri_2006
S. Nojiri and S. D. Odintsov,
Physical Review D 74.8 (2006): 086005.
Nojiri_2007
S. Nojiri and S. D. Odintsov,
Physics Letters B 657.4 (2007): 238-245.
Linder_2010
E. V. Linder,
Physical Review D 81.12 (2010): 127301.
Myrzakulov_2011
R. Myrzakulov,
The European Physical Journal C 71.9 (2011): 1-8.
Li_2011
B. Li, T. P. Sotiriou and J. D. Barrow,
Physical Review D 83.6 (2011): 064035.
Myrzakulov_2012
R. Myrzakulov,
arXiv preprint arXiv:1205.5266 (2012).
Harko_2011
T. Harko, F. S. N. Lobo, S. Nojiri and S. D. Odintsov,
Physical Review D 84.2 (2011): 024020.
Nojiri_2005
S. Nojiri and S. D. Odintsov,
Physics Letters B 631.1 (2005): 1-6.
Einstein_1928
A. Einstein,
Sitzungsberichte der Preussischen Akademie der Wissenschaften, Physikalisch-mathematische Klasse,
17, (1928) 224227.
Weitzenbock_1923
R. Weitzenbock,
P. Noordhoff, 1923.
Bengochea_2009
G. R. Bengochea and R. Ferraro,
Physical Review D 79, no. 12 (2009): 124019.
Harko_2014
T. Harko, S. L. Francisco, G. Otalora and E. N. Saridakis,
Journal of Cosmology and Astroparticle Physics 2014, no. 12 (2014): 021.
Kamenshchik_2001
A. Kamenshchik, U. Moschella and V. Pasquier,
Physics Letters B 511, no. 2 (2001): 265-268.
Marttens-2017
R. F. vom Marttens, L. Casarini, W. Zimdahl, W. S. Hipolito-Ricaldi and D. F. Mota,
Physics of the Dark Universe 15 (2017): 114-124.
Jawad-2017
A. Jawad, A. Ilyas and S. Rani,
The European Physical Journal C 77, no. 2 (2017): 131.
Kahya-2015
E. O. Kahya, B. Pourhassan and S. Uraz,
Physical Review D 92, no. 10 (2015): 103511.
Avelino-2014
P. P. Avelino, K. Bolejko and G. F. Lewis,
Physical Review D 89, no. 10 (2014): 103004.
Pourhassan-2013
B. Pourhassan,
International Journal of Modern Physics D 22, no. 09 (2013): 1350061.
Kahya-2014
E. O. Kahya and B. Pourhassan,
Astrophysics and Space Science 353, no. 2 (2014): 677-682.
Pourhassan-2016
B. Pourhassan,
Canadian Journal of Physics 94, no. 7 (2016): 659-670.
KahyabB-2015
E. O. Kahya and B. Pourhassan,
Modern Physics Letters A 30, no. 13 (2015): 1550070.
SaadatA-2014
H. Saadat,
International Journal of Theoretical Physics 53, no. 12 (2014): 4160-4169.
NajiJ-2014
J. Naji and Hassan Saadat,
International Journal of Theoretical Physics 53, no. 5 (2014): 1547-1560.
SaadatB-2013
H. Saadat,
International Journal of Theoretical Physics 52, no. 11 (2013): 3902-3907.
SaadatC-2013
H. Saadat,
International Journal of Theoretical Physics 52, no. 5 (2013): 1696-1700.
SadeghiB-2016
J. Sadeghi, M. Khurshudyan and H. Farahani,
International Journal of Theoretical Physics 55, no. 1 (2016): 81-97.
SaadatyaB-2013
H. Saadat and B. Pourhassan,
International Journal of Theoretical Physics 52, no. 10 (2013): 3712-3720.
SadeghiaB-2014
J. Sadeghi, B. Pourhassan, M. Khurshudyan and H. Farahani,
International Journal of Theoretical Physics 53, no. 3 (2014): 911-920.
SaadataB-2014
H. Saadat and B. Pourhassan
International Journal of Theoretical Physics 53, no. 4 (2014): 1168-1173.
KhodamB-2016
A. Khodam-Mohammadi, E. Karimkhani and A. Alaei, Eur. Phys. J. Plus 131 (2016) 398.
SaadatmB-2013
H. Saadat and B. Pourhassan
Astrophysics and Space Science 343, no. 2 (2013): 783-786.
SaadatmBB-2013
H. Saadat and B. Pourhassan
Astrophysics and Space Science 344, no. 1 (2013): 237-241.
PourhassanBB-2014
B. Pourhassan and E. O. Kahya
Results in Physics 4 (2014): 101-102.
PourhassanmBB-2014
B. Pourhassan and E. O. Kahya,
Advances in High Energy Physics 2014 (2014).
KahyaB-2015
E. O. Kahya, E. O., M. Khurshudyan, B. Pourhassan, R. Myrzakulov and A. Pasqua,
The European Physical Journal C 75, no. 2 (2015): 43.
PourhassanatmBB-2016
B. Pourhassan,
Physics of the Dark Universe 13 (2016): 132-138.
Shi_2011
K. Shi, Y.-F. Huang, and T. Lu, Research in Astronomy and Astrophysics 11, 1403 (2011).
Amanullah_2010
R. Amanullah, C. Lidman, D. Rubin, G. Aldering, P. Astier, K. Barbary, M. S. Burns et al,
The Astrophysical Journal 716, no. 1 (2010): 712.
Kumar_2014
S. Kumar, Suresh and L. Xu,
Physics Letters B 737 (2014): 244-247.
|
http://arxiv.org/abs/1701.07500v1 | 20170125215128 | Scalable Architecture for Anomaly Detection and Visualization in Power Generating Assets | ["Paras Jain", "Chirag Tailor", "Sam Ford", "Liexiao Ding", "Michael Phillips", "Fang Liu", "Nagi Gebraeel", "Duen Horng Chau"] | cs.DC | ["cs.DC"] |
Scalable Architecture for Anomaly Detection
and Visualization in Power Generating Assets
Paras Jain1,
Chirag Tailor1,
Sam Ford1,
Liexiao (Richard) Ding3,
Michael Phillips3,
Fang (Cherry) Liu4,
Nagi Gebraeel3,
Duen Horng (Polo) Chau1
1College of Computing
3H. Milton Stewart School of Industrial & Systems Engineering
4Partnership for an Advanced Computing Environment
Georgia Institute of Technology
Atlanta, Georgia, USA.
Email: {paras, chirag.tailor, sford100, richard.ding, mphillips68, polo}@gatech.edu,
fang.liu@oit.gatech.edu, nagi.gebraeel@isye.gatech.edu
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Power-generating assets (e.g., jet engines, gas turbines) are often instrumented with tens to hundreds of sensors for monitoring physical and performance degradation. Anomaly detection algorithms highlight deviations from predetermined benchmarks with the goal of detecting incipient faults.
We are developing an integrated system to address three key challenges within analyzing sensor data from power-generating assets:
(1) difficulty in ingesting and analyzing data from large numbers of machines;
(2) prevalence of false alarms generated by anomaly detection algorithms resulting in unnecessary downtime and maintenance; and
(3) lack of an integrated visualization that helps users understand and explore the flagged anomalies and relevant sensor context in the energy domain.
We present preliminary results and our key findings in addressing these challenges.
Our system's scalable event ingestion framework, based on OpenTSDB, ingests nearly 400,000 sensor data samples per second using a 30-machine cluster.
To reduce false alarm rates, we leverage the False Discovery Rate (FDR) algorithm which significantly reduces the number of false alarms.
Our visualization tool presents the anomalies and associated content flagged by the FDR algorithm to inform users and practitioners in their decision making process.
We believe our integrated platform will help reduce maintenance costs significantly while increasing asset lifespan.
We are working to extend our system on multiple fronts, such as scaling to more data and more compute nodes (70 in total).
False discovery rate, visualization, OpenTSDB, power asset, energy sensor
§ INTRODUCTION
To improve public safety, modern power generating assets (e.g., jet engines, gas turbines) are instrumented with hundreds of sensors to monitor physical performance degradation. Such sensors, such as temperature or pressure, is installed with the goal of measuring potential signals of asset failure. Given the scale of data produced, it is impossible for humans to directly monitor every signal. Instead, this monitoring process can be automated by studying possible deviations from pre-specified benchmarks, with the goal of detecting incipient faults.
Anomaly detection involves defining a pattern in observations that represents normal behavior and declaring observations that do not conform to it as anomalies. A multitude of detection algorithms and techniques have been developed and commercialized over the years, many of which have been applied in the manufacturing domain <cit.> for what has become known as Statistical Process Control (SPC).
In large scale settings involving thousands of power generating assets where each asset is monitored by a large number of sensors, the problem of false alarms becomes a significant challenge.
False alarms can be very costly — for example, 50% of replaced parts in aircraft are classified as “no fault found” <cit.>. In energy and aerospace domains, reducing rates of false alarms can measurably reduce lifetime maintenance costs for power generating assets.
Our goal is to develop an integrated system
that reduces false alarms in multi-stream condition monitoring of power generating assets
using a scalable analytics architecture that ingests, stores and analyzes large amounts of sensor data, and interactively visualize the computation results to enhance user understanding and advance decision making capabilities.
Our ongoing work's contributions are:
* We present a scalable event ingestion and storage architecture that can handle 399,000 sensor samples per second with a 30 node storage cluster.
* We adapt the FDR algorithm to the energy domain in order to reduce the rate of falsely identified anomalies.
* We demonstrate a visualization tool that enables interactive exploration of power-generating asset sensor data with associated anomalies.
§ SYSTEM OVERVIEW
Our proposed architecture consists of three key components:
the anomaly detection algorithm, streaming sensor data ingestion,
and interactive visualization. <ref> summarizes our envisioned system architecture.
Data storage is non-trivial with large scale deployments of assets generating huge volumes of data. Our system will need to eventually scale to process hundreds of thousands of sensor samples per second.
After evaluating candidate storage solutions, we chose to build our platform over the Open Time Series Database (OpenTSDB) <cit.> to store both streaming sensor data as well as flagged anomalies (Section <ref>).
OpenTSDB is supported by HBase <cit.> as the underlying distributed file storage system.
The choice of anomaly detection algorithm is central to the project. A survey <cit.> of techniques for anomaly detection divides traditional algorithms into several categories: rule-based systems, statistical techniques, spectral techniques and neural networks. As the primary criteria for anomaly detection is to control the rate of false positives, we specifically chose the False Discovery Rate algorithm <cit.> (described in <ref>).
Offline evaluation of the anomaly detection algorithm currently executes in the Spark framework <cit.> in batch mode.
Given its rich distributed matrix computation libraries,
Spark is a natural choice for evaluating the FDR algorithm.
Both the online anomaly detection and streaming sensor data ingestion components run on the Data Science Platform (DSP) <cit.> built at Georgia Tech College of Computing. The online anomaly detection component runs on a 44-node HDFS/Hadoop/Spark cluster, while the streaming sensor data ingestion runs on a 32-node HDFS/HBase/OpenTSDB cluster.
Our visualization tool allows for operators to interactively explore large volumes of time-series data (<ref>). By enriching sensor data with potential anomalies and integrated real-time analytics, the platform can serve as a powerful control center to monitor faults across large sensor networks.
§.§ Evaluation Dataset
Real datasets from industry partners contain actual sensor readings from power generating assets like gas turbine and jet engines.
However, these datasets often include sensitive information, and currently are not available for off-site evaluation.
As our very first goals of developing the system are to investigate the algorithmic scalability, visualization capabilities, and hardware platform requirements, access to a real dataset is not critical to our current stage of investigation.
Therefore, we generated a dataset for training and evaluation of the algorithm.
This allows measuring the exact degree to which FDR reduces false alarm rates while allowing us to verify the algorithms ability to detect various classes of injected faults.
The training dataset contains 100 simulated units, each with 1000 sensors (on the order of the 3000 sensors in the Siemens SGT5-8000H gas turbine <cit.>). We modeled three primary categories of faults:
* Pure random noise for comparison
* Pure random noise plus gradual degradation signal
* Pure random noise plus sharp shift
Injected faults are correlated across sensors which allows measuring the algorithm's response to deviations across multiple signals.
§ SCALABLE DATA INGESTION & STORAGE
Processing and storing sensor samples is non-trivial at the scale of a production deployment.
We expect our system will need to ingest and analyze at least 100,000 sensor samples per second (based on the estimation in <ref>, assuming each sensor generates data at 1 Hz).
This project utilizes OpenTSDB <cit.>, an open-source, scalable, time series database which leverages HBase <cit.>, an Apache top level project inspired by Google's BigTable <cit.>, to manage data in a distributed manner and provide horizontal scalability.
We chose OpenTSDB because
it allows us to easily horizontally scale out our system to more storage nodes, while maintaining a stable, linear scaleup in streaming ingestion (as we shall describe in Section <ref>).
§.§ Streaming Event Ingestion and Storage using OpenTSDB
For the purposes of this preliminary work, a distributed system of 32 nodes was deployed running HDFS and HBase. HDFS was set up with one NameNode (co-located with the HBase master), one Secondary NameNode, one HBase backup master and 29 DataNodes. HBase is configured with one HMaster, one backup HMaster, and 29 Regionservers that communicate through the built-in Apache Zookeeper <cit.> coordination service. Each node also runs an instance of a TSD Daemon for time series data writing and querying.
OpenTSDB organizes time series data into metrics and allows for the assignment of multiple tags per metric. The tags provide unique identifiers for querying data and allow the data to be compartmentalized into sub series. The simulated data generated for this project is stored into a metric called “energy” with tags for “unit” and “sensor”.
For storing data, the TSD Daemon takes a metric, timestamp, data value, and tag identifiers as input and produces an entry to be written to an HBase table. First, the key is generated from a binary encoding of the metric, timestamp, and tag values. Then, the TSD Daemon submits an RPC call to HBase which distributes the write based upon the key value. The writes for similar keys are grouped onto the same Regionserver by HBase.
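To make the data model concrete, the sketch below shows how a single sensor sample maps onto the metric/tag scheme described above, written via OpenTSDB's standard HTTP /api/put endpoint. The host name, port and tag values are placeholders, not our actual deployment details.

import json, time, urllib.request

sample = {
    "metric": "energy",
    "timestamp": int(time.time()),
    "value": 73.4,                                # sensor reading
    "tags": {"unit": "42", "sensor": "817"},      # sub-series identifiers
}
req = urllib.request.Request(
    "http://tsd-host:4242/api/put",               # placeholder TSD endpoint
    data=json.dumps(sample).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)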
§.§ Preliminary Results & Key Findings
Linear Scale-up. We tested our event ingestion framework with the evaluation dataset (described in Section <ref>, consisting of 100 assets with 1000 sensors each). <ref>a shows the ingestion rate scales up linearly with the number of machines. Each machine runs one HBase RegionServer instance and one OpenTSDB daemon.
The ingestion rate reaches 300,000 samples per second with 30 machines, while maintaining a stable ingestion speed, shown in <ref>b.
OpenTSDB Key Design.
One obstacle encountered early in the ingestion process was an issue with these writes not being distributed across all the HBase Regionservers efficiently. Since sequential data values share the same metric and similar timestamp values, their binary encoded keys are also similar resulting in the RPC calls being sent to the same HBase Regionserver. To combat this, the binary key encodings were salted with an additional uniformly randomly generated byte at the beginning to create unique keys for chronological data values. Additionally, HBase regions were manually split to ensure each region handled an equal proportion of the writes. Salting the keys allowed for the full utilization of all the deployed HBase Regionservers and provided a dramatic increase to the ingestion rate.
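The salting idea described above can be sketched as follows; the field widths and encodings here are simplified placeholders rather than OpenTSDB's exact binary layout.

import os, struct

def salted_key(metric_id: int, timestamp: int, tag_bytes: bytes) -> bytes:
    """Prepend one uniformly random byte so chronologically adjacent
    writes hash to different key ranges (and hence Regionservers)."""
    salt = os.urandom(1)
    body = struct.pack(">IQ", metric_id, timestamp) + tag_bytes
    return salt + body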
Buffering Requests for Backpressure and Scalability.
Another obstacle encountered while working with OpenTSDB and HBase was frequent crashes of Regionservers due to overloaded RPC Queues.
Initially, the cause of these crashes was attributed to HBase having no means of providing back pressure to RPC calls made by OpenTSDB.
To remedy this, we built a reverse proxy to buffer requests to OpenTSDB in order to limit the number of concurrent requests. Compaction was also disabled on OpenTSDB to reduce RPC calls to HBase.
This proxy also serves to increase ingestion throughput by load-balancing traffic to multiple ingestion processes. Ingestion throughput scales horizontally by distributing requests to the OpenTSDB nodes in a round-robin fashion.
§ FLAGGING ANOMALIES WITH
LOW FALSE ALARM RATES
A key component in our system is the algorithm to flag potential anomalies. Given the large expense caused by erroneously flagged faults, we aim to use an anomaly detection algorithm that balances identifying the majority of true faults while also controlling the rate of false alarms.
From a statistical standpoint, anomaly detection amounts to performing a hypothesis test on sample observations to detect possible shifts in the mean of the sampling distribution. Rejection of the null hypothesis implies that there is significant evidence to conclude that the distribution has indeed changed. A common mistake committed in hypothesis testing is to reject the null hypothesis when it is actually true (type I error).
In our setting, a type I error amounts to a false alarm, i.e., a piece of equipment is classified as faulty when in reality nothing was wrong. One of the key aspects of type I errors is that they tend to increase as the number of hypothesis tests increases. In our context, this translates to higher false alarm rates as the number of sensors increases. For example, for a single sensor with an allowable α=0.05, the probability of making at least one false alarm is 5%. However, if we increase the number of sensors to 10, each with α=0.05, that probability jumps to 40%, i.e., 1-(1-α)^10 ≈ 0.4.
Traditionally, false alarms in multi-inference studies were controlled using family-wise error rate (FWER). FWER focuses on controlling the probability of committing any type I error when performing multiple hypothesis tests by applying a correction to a family of inferences. A popular example is the Bonferroni correction <cit.> where for m hypotheses each having a probability α of committing type I error, then the corrected probability for the family would be α/m. In other words, reject all hypotheses with p-values ≤α/m. One drawback of this approach was that it provided much less detection power and was overly conservative.
FDR was first introduced by Benjamini and Hochberg in 1995 <cit.>. The goal was to reduce false alarms in multiple inferences (hypothesis tests) in clinical trials. Compared to FWER, FDR was designed to control the expected proportion of type I errors.
The underlying premise of FDR is that when multiple tested hypotheses are rejected, it is more important to control the proportion of errors (wrong rejections) than a single erroneous rejection. As we wish to balance the rates of type I and type II errors, FDR is a promising choice of algorithm for preliminary testing.
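For reference, the Benjamini-Hochberg step-up procedure underlying FDR control is short enough to state in code. Given m p-values (here, one per sensor-level test), it rejects all hypotheses up to the largest k with p_(k) ≤ (k/m)·q; the target level q and the mapping of tests to sensors are illustrative choices.

import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Return a boolean mask: True = hypothesis rejected (flagged anomaly)."""
    p = np.asarray(p_values)
    m = p.size
    order = np.argsort(p)                      # indices that sort the p-values
    thresholds = (np.arange(1, m + 1) / m) * q
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest k with p_(k) <= k*q/m
        reject[order[:k + 1]] = True           # step-up: reject all up to k
    return reject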
§.§ Preliminary Results
Our implementation of the FDR algorithm is composed of two parts — an offline training component and an online evaluation component.
Offline training occurs in Spark, running in batch mode.
Spark's MLlib <cit.> provides an implementation of distributed matrix factorization, which allows our offline training system to scale to large numbers of sensors. Once training is complete, we can evaluate for anomalies at a rate of 939,000 sensor samples per second on average.
In offline training, model estimation for each sensor on each unit begins by calculating the covariance matrix of each data set. Singular Value Decomposition is then performed on each covariance matrix to obtain the mean and variance. Results from the decomposition are cached to HDFS. Evaluation is thereby relatively fast, requiring a single matrix multiplication per iteration. Results from online evaluation are reported back to OpenTSDB for use by the integrated visualization tool. The current system deals with one machine at a time; we plan to utilize the concurrency of Spark to scale up the workload.
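A hedged sketch of this offline/online split is given below. Offline, the covariance of a training window is decomposed by SVD and a projection is cached; online, scoring a new sample reduces to a single matrix multiplication. The Hotelling-style T^2 score is our illustrative choice, not necessarily the exact statistic used in our pipeline.

import numpy as np

def train(X):
    """X: (n_samples, n_sensors) training window for one unit."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    U, s, _ = np.linalg.svd(cov)       # principal directions and variances
    W = U / np.sqrt(s)                 # whitening projection, cached offline
    return mu, W

def score(x, mu, W):
    """One matrix multiply per sample; large scores flag anomalies."""
    z = (x - mu) @ W
    return float(z @ z)                # Mahalanobis / T^2-style statistic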
§ ANOMALY VISUALIZATION
Advanced anomaly detection without commensurate visualization presents limited value for operators. Our platform includes an interactive visualization tool that equips users to quickly respond to flagged anomalies. Our tool integrates A) live sensor data, B) highlighted anomalies, and C) real-time system analytics into a single control center for users to monitor and react to events in a network of power-generating assets.
§.§ Preliminary Results
Our tool provides an overview of the overall health of the network of power-generating assets. It helps users explore and understand the context surrounding the flagged anomalies. By using the FDR anomaly detection algorithm, we avoid unnecessarily notifying users of false alarms.
Analytics summarize global system status across a large deployment of power-generating assets. By selectively surfacing the most concerning anomalies, we allow users to focus only on what is important. Unit status is summarized neatly into a single status bar as seen at the top of <ref>.
Power-generating asset faults often result in correlated anomalies across multiple sensors (e.g., pressure and temperature). Our tool displays all sensor readings with relevant anomalies annotated directly on a compact sparkline chart as seen in the center of <ref>.
Drill-down capabilities enable users to quickly examine details about a fault with the necessary context. Operators can click on anomalies, which surfaces a detailed view of the sensor data, as shown at the bottom of <ref>.
The visualization tool is a web application that is available on both desktop and mobile devices. Mobile access allows technicians to explore pertinent sensor data while performing maintenance on a particular machine in the field.
§ CONCLUSION & ONGOING WORK
We present our preliminary work and key findings in developing
an integrated system to address three key challenges within analyzing sensor data from power-generating assets:
(1) difficulty in ingesting and analyzing data from large numbers of machines;
(2) prevalence of false alarms generated by anomaly detection algorithms resulting in unnecessary downtime and maintenance; and
(3) lack of an integrated visualization that helps users understand and explore the flagged anomalies and relevant sensor context in the energy domain.
Our system can currently ingest and analyze 399,000 sensor samples per second while running on a 30 node cluster. The system visualizes sensor data in an interactive web application which presents potential anomalies with the associated context surrounding the event.
Ongoing work for the project includes:
experimenting with increasing storage nodes to further scale up throughput,
migrating our anomaly detection implementation to Spark Streaming <cit.> for online training,
and evaluating our system with domain users through our collaboration with industry partners like General Electric (GE) to test the system on their datasets.
§ ACKNOWLEDGMENT
This work is supported in part by the Strategic Energy Institute (SEI) at Georgia Tech, and NSF grants IIS-1563816, TWC-1526254, IIS-1217559.
We also thank Yahoo! for their generous 200-machine donation.
We thank Will Powell on hardware support for our system.
IEEEtran
|
http://arxiv.org/abs/1701.07837v2 | 20170126190014 | Path to stable quantum spin liquids in spin-orbit coupled correlated materials | ["Andrei Catuneanu", "Youhei Yamaji", "Gideon Wachtel", "Yong Baek Kim", "Hae-Young Kee"] | cond-mat.str-el | ["cond-mat.str-el"] |
Department of Physics and Center for Quantum Materials, University of Toronto, 60 St. George St., Toronto, Ontario, M5S 1A7, Canada
Department of Applied Physics and Quantum-Phase Electronics Center (QPEC), The University of Tokyo, Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
JST, PRESTO, Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
Department of Physics and Center for Quantum Materials, University of Toronto, 60 St. George St., Toronto, Ontario, M5S 1A7, Canada
Department of Physics and Center for Quantum Materials, University of Toronto, 60 St. George St., Toronto, Ontario, M5S 1A7, Canada
Canadian Institute for Advanced Research, Toronto, Ontario, M5G 1Z8, Canada
Department of Physics and Center for Quantum Materials, University of Toronto, 60 St. George St., Toronto, Ontario, M5S 1A7, Canada
Canadian Institute for Advanced Research, Toronto, Ontario, M5G 1Z8, Canada
The spin liquid phase is one of the prominent strongly interacting topological phases of matter whose unambiguous confirmation is yet to be reached despite intensive experimental efforts on numerous candidate materials.
Recently, a new family of correlated honeycomb materials, in which strong spin-orbit coupling allows for various bond-dependent spin interactions, have been promising candidates to realize the Kitaev spin liquid.
Here we study a model with bond-dependent spin interactions and show numerical evidence for the existence of an extended quantum spin liquid region, which is possibly connected to the Kitaev spin liquid state.
These results are used to provide an explanation of the scattering continuum seen in neutron scattering on α-RuCl_3.
Path to stable quantum spin liquids in spin-orbit coupled correlated materials
Hae-Young Kee
===============================================================================
Introduction
The role of strong interaction between electrons in the emergence of topological phases of matter is currently a topic of intensive research.
The archetypal example of a topological phase with strong electron interaction is the quantum spin liquid<cit.>, in which the elementary excitations are charge-neutral fractionalized particles.
While a lot of progress has been made on the theoretical understanding of the quantum spin liquid phase, its direct experimental confirmation has remained elusive despite various studies on a number of candidate materials<cit.>.
Significant progress, however, has recently been made due to the availability of a new class of correlated materials, where strong spin-orbit coupling leads to various bond-dependent spin interactions<cit.>, thus resulting in magnetic frustration.
These materials are Mott insulators with 4d and 5d transition metal elements, which include iridates and ruthenates<cit.> and come in two- or three-dimensional honeycomb variants.
They have been particularly exciting because they intrinsically have a strong Kitaev interaction and therefore could potentially realize the Kitaev spin liquid (KSL) phase – an example of a ℤ_2 quantum spin liquid where the electron's spin-1/2 fractionalizes into two degrees of freedom: itinerant Majorana fermions and ℤ_2 fluxes.
While the Kitaev interaction (K) in these materials is large, it competes with symmetry allowed nearest-neighbor (n.n.) symmetric off-diagonal (Γ) and Heisenberg (J) spin interactions<cit.>.
For example, in α-RuCl_3 (RuCl_3), an actively studied KSL candidate, comprehensive ab initio computations and recent dynamical studies<cit.> suggest that ferromagnetic K and antiferromagnetic Γ interactions are dominant and comparable in magnitude, while J is negligible<cit.>.
The balance of these and additional small further neighbor interactions causes RuCl_3 and other KSL candidates to magnetically order at low temperature; however, it is still unclear whether or not the often-large Γ interaction prefers magnetic ordering.
Meanwhile, the community has attempted to revive the possibility of a KSL in RuCl_3 by applying a small magnetic field, with the effect of entering a potential spin liquid phase with no magnetization<cit.>.
Since a weak magnetic field takes RuCl_3 out of the ordered phase, it lends credence to the idea that the zig-zag phase is stabilized by small interactions at comparable energy scale to the magnetic field, such as a 3rd n.n. Heisenberg J_3<cit.> term or terms coming from slight trigonal distortion<cit.>.
This calls into question the role of the Γ interaction.
Interestingly, a recent analysis of the Γ model revealed a macroscopically degenerate classical ground state<cit.>.
In this work we will thus investigate if a model with K and Γ hosts an extended quantum spin liquid phase.
A previous exact diagonalization study on a 24-site honeycomb cluster hints that the ferromagnetic KSL is unstable upon perturbing with a small Γ, but the resulting phase is not ordered<cit.>.
On the other hand, it is known that the KSL is stable upon introducing bond anisotropy, which is present in real materials as depicted in Fig. <ref>a.
We indeed find that such anisotropy can extend the KSL phase between the -K and Γ limits, as shown in Fig. <ref>b.
We consider the following nearest-neighbor (n.n.) model on a two-dimensional honeycomb lattice:
H = ∑_γ∈ x,y,z H^γ,
where
H^z = ∑_⟨ ij⟩∈ z-bond [K_z S^z_iS^z_j + Γ_z(S_i^xS_j^y+S^y_iS^x_j)]
and H^x,y are defined similarly with corresponding K_x,y and Γ_x,y.
Each H^γ represents the n.n spin interactions along one of the three bond directions, γ = x, y, z. The model is parameterized by K_z = -(1 + 2a_K)cosϕ, K_x,y = -(1-a_K)cosϕ, Γ_x,y,z = sinϕ, with a_K characterizing bond anisotropy. When ϕ = 0,π (i.e. Γ_γ = 0), this model reduces to the exactly solvable Kitaev model with the KSL ground state.
We have studied this model using a combination of three different, corroborating, numerical methods: exact diagonalization (ED) on a 24-site honeycomb cluster, the method of thermal pure quantum states<cit.>, and infinite time-evolving block decimation (iTEBD).
We have first reproduced the earlier work in the isotropic a_K = 0 limit, showing a strong first-order transition between -K and Γ limits (see Supplementary Materials (SM)).
We present the following results using our numerical techniques:
1) When a_K ≥ 0.06, we find that the -K_γ (0 ≤ϕ≤π/2) and Γ_γ (ϕ/π = 0.5) limits are adiabatically connected as shown in Fig. <ref>b. Thus we find evidence for an extended quantum spin liquid phase in the presence of anisotropy, a_K.
2) An intervening magnetically ordered phase separates the spin liquid phase near the pure Γ_γ limit and the antiferromagnetic KSL at ϕ/π=1.
3) The specific heat C(T) and entropy S(T) at finite temperatures
suggest a smooth crossover from the ferromagnetic Kitaev limit to the pure Γ_γ limit, consistent with our ED results.
4) Zig-zag spin correlations become dominant upon perturbing the quantum spin liquid phase in 0 < ϕ < π/2 by J_3, indicating the enhancement of zig-zag order by J_3.
Results
Extended spin liquid state in global phase diagram —
The ground state energy per site E_0/N of Eq. (<ref>) was computed for ϕ/π∈ [0,1], and for different anisotropy parameters
by ED on a 24-site cluster using periodic boundary conditions (see SM).
Discontinuities in 1/N∂ E_0/∂ϕ were used to identify possible phase transitions.
Remarkably, when ϕ/π∈ [0,0.5] and 0 ≤ a_K < 0.06, there is a line of first-order phase transitions that terminates at a_K = 0.06. Above a_K = 0.06, the first derivative of the energy presents no sharp features, suggesting that the ground state of the Γ-limit (ϕ/π=0.5) is adiabatically connected to the ferromagnetic Kitaev spin liquid (ϕ/π=0), as depicted in Fig. <ref>b for a_K = 0.1.
In the antiferromagnetic region of phase space, there are two large discontinuities in 1/N∂ E_0/∂ϕ that encompass a large region of phase space separating the Γ-limit and the exactly solvable antiferromagnetic Kitaev limit at ϕ/π = 1.
These peaks coincide with kinks in E_0/N (solid yellow) shown in Fig. <ref>a.
Two smaller discontinuities can also be seen near ϕ/π = 0.75; however, these are not present when a_K=0, while the larger jumps near ϕ/π = 0.5 and 1 appear consistently for different a_K.
The small discontinuities can thus be considered spurious and a consequence of the finite cluster size.
Similar finite size effects were also found for ϕ/π∈ [0,0.5] when a_K=0, as discussed in the SM.
Magnetic order and perturbations — The ground state wavefunction of Eq. (<ref>) computed by ED is used to evaluate real-space spin-spin correlation functions ⟨ S_i· S_j⟩, where i and j are site indices on the honeycomb lattice.
By Fourier transform, we obtain the static structure factor (SSF) given by S_q = 1/N∑_i,j e^i(r_i-r_j)·q⟨ S_i· S_j⟩ where q is a vector in the reciprocal lattice.
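For orientation, the SSF is a direct Fourier sum over the measured real-space correlations; a minimal Python sketch (illustrative only, with the array layout being our assumption) reads:

import numpy as np

def static_structure_factor(q, positions, corr):
    """S_q = (1/N) sum_{i,j} exp(i q.(r_i - r_j)) <S_i . S_j>,
    with positions an (N, 2) array of site coordinates r_i and
    corr the (N, N) matrix of correlations <S_i . S_j>."""
    phases = np.exp(1j * positions @ q)      # e^{i q . r_i} for every site
    return np.real(phases @ corr @ phases.conj()) / len(positions)

Evaluating this function on a grid of q across the Brillouin zone reproduces intensity maps of the kind shown in Fig. <ref>b.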
The SSF at various points in the Brillouin zone (BZ) is plotted over the phase space in the bottom panel of Fig. <ref>a.
The discontinuities in the SSF can be directly matched with those in 1/N∂ E_0/∂ϕ.
Visualizations of the SSF over the BZ for representative ϕ in the phase diagram are presented in Fig. <ref>b.
The SSF in Fig. <ref>b is obtained by computing the average of ⟨ S_i· S_j⟩ over all n.n. bonds, 2nd n.n., etc.
This calculation reflects the presence of different domains in the crystal, in which either of the x, y, z bond interactions can be the stronger one; thus, over the whole crystal, these domains result in an isotropic S_q despite the inherent bond anisotropy in Eq. (<ref>).
The SSF varies adiabatically when a_K=0.1 for ϕ∈ [0,π/2] and the spin correlations at the Γ- and M-points are comparable in intensity when Γ≃ K_γ, leading to a “star”-shaped structure in the SSF as seen in Fig. <ref>b (e.g, ϕ/π = 0.2).
The extended phase separating ϕ/π=0.5 and 1 is characterized by dominating spin correlations at the K- and Γ^'-points in the reciprocal lattice (ϕ/π = 0.75 in Fig. <ref>b).
Contained within this phase is the exactly solvable point with hidden SU(2) symmetry at ϕ/π=0.75 which features K-point correlations<cit.> consistent with the results presented here.
Thus the extended spin liquid phase for ferromagnetic K is separated from the antiferromagnetic KSL at ϕ/π = 1 by a magnetically ordered phase.
These results can be connected to real materials, particularly RuCl_3 in which a zig-zag magnetic ordering has been observed<cit.>.
Previous studies have shown that in addition to the n.n. ferromagnetic Kitaev and antiferromagnetic Γ interactions, a 3rd n.n. antiferromagnetic Heisenberg interaction J_3∑_⟨⟨⟨ i,j⟩⟩⟩ S_i · S_j is non-vanishing and plays a role in determining the magnetic ordering in RuCl_3<cit.>.
Fig. <ref> shows that the effect of perturbing Eq. (<ref>) by J_3 is to enhance (suppress) the M-point (Γ-point) spin correlations, consistent with a zig-zag magnetically ordered state observed in experiments.
This result indicates that by tuning J_3 in the real material, an alternate path to achieve a spin liquid phase may be realized.
Specific heat and thermal entropy — Previous study on the finite temperature properties of the Kitaev model has shown that Kitaev spin liquids feature two peaks in the heat capacity C(T) and a 1/2-plateau in the entropy S(T), which is attributed to the thermal fractionalization of spin degrees of freedom<cit.>.
Here we go beyond the Kitaev limit and investigate the heat capacity and entropy at finite temperature in the presence of Γ, which is expected to compete with K_γ in RuCl_3, using the method of thermal pure quantum states (see SM).
The dependence of C(T) on ϕ when a_K=0.1 is plotted in the top panel of Fig. <ref>. The expected two peak structure in C(T) is observed when ϕ/π = 0, and is seen to be maintained continuously as ϕ/π approaches 0.5 so that the Γ-limit shows a qualitatively similar behaviour in C(T) to the Kitaev spin liquid.
Evidence for a phase transition can be seen when ϕ/π≳ 0.7 on account of the abrupt change in C(T), resembling that of the heat capacity in trivially ordered phases <cit.>. This finding is consistent with our ED results and the 120° order at ϕ/π=0.75 seen in Refs. Rau2014 and Chaloupka2015.
The dependence of S(T) on ϕ is plotted in the bottom panel of Fig. <ref> with a clear 1/2-plateau observed when ϕ/π = 0, consistent with the expected Kitaev spin liquid behaviour. In addition, a plateau of about 1/5 the total entropy is observed when ϕ/π = 0.5. Another plateau is observed in the magnetically ordered phase around ϕ/π = 0.75; however, this feature can be attributed to finite-size effects as follows. The (N+1)-fold ground state degeneracy at ϕ/π = 0.75 due to the hidden SU(2) symmetry<cit.> is only slightly broken away from this point, inducing a plateau in S(T) with height given by ln (N+1)/N ln 2 ≃ 0.1935 ∼ 1/5 when N=24. By contrast, the height of the plateau around ϕ/π = 0.5 is independent of N<cit.>.
The physical origin of the two peak structure in C(T) and the plateau in S(T) can be traced to the energy scales of the thermal fluctuations of the underlying quasiparticles in the spin liquid <cit.>.
In the Kitaev spin liquid at zero temperature, the low-lying quasiparticle excitations are characterized by itinerant Majorana fermions which disperse in a background of zero flux<cit.>.
It has been shown that as temperature increases, the flux degrees of freedom begin to fluctuate and lead to the low temperature peak seen in C(T), resulting in the plateau seen in S(T).
Furthermore, the high temperature peak in C(T) is attributed to the development of short range spin correlations<cit.>.
Our results show that the two-peak structure in C(T)
is qualitatively maintained, further suggesting that no phase transition has taken place.
Similarities on the infinite tree —
We further studied Eq. (<ref>) on an infinite Cayley tree with z=3 connectivity, using the infinite time-evolving block decimation algorithm<cit.> (iTEBD; see SM).
Classically, the ground state in the Γ-limit on the infinite tree is macroscopically degenerate because
a different state with the same energy can be constructed by flipping the sign of one spin component on an infinite string of neighboring spins.
The Γ-limit on the two-dimensional (2D) honeycomb and three-dimensional (3D) hyper-honeycomb<cit.> lattices also feature similar classical degeneracy.
The similarity at the classical level of the Γ-limit on the infinite tree to the 2D and 3D lattices prompts us to study the quantum model on the infinite tree for further insight.
Figure <ref> shows results of the eight-site iTEBD calculation with bond dimension χ=10, and anisotropy a_K=0.1.
In this calculation, we have also introduced an anisotropy to Γ_γ such that Γ_x = Γ_y = (1-a_Γ)sinϕ and Γ_z = (1+2a_Γ)sinϕ in order to apply the iTEBD method (see SM).
No transition is found when ϕ/π∈ [0,0.5] and
the obtained state is a highly entangled paramagnet, with S_E∼ 0.8 for strong (z) bonds, while for weak (x,y) bonds, S_E ∼ 0.4.
Deep in the gapped phase of the Kitaev model, with large anisotropy a_K, one finds S_E∼log 2∼ 0.693 for the strong bonds and much smaller values of S_E for the weak bonds.
Both, however, increase as the anisotropy is reduced, perhaps due to finite contributions from the Majorana fermions<cit.>.
An increase in the entanglement entropy is expected as one approaches a phase transition;
however, no such peaks are seen for 0<ϕ<π/2.
Similarly, there are no sharp features in the ground state energy E_0 as a function of ϕ, which indicates that this phase is adiabatically connected to the Kitaev spin liquid at ϕ=0.
There is an apparent first-order transition around ϕ=0.6π into a Néel state with spins ordered in the [111] direction, accompanied by a dramatic lowering of S_E on both strong and weak bonds in this region.
This Néel state becomes a simple product state when ϕ/π=0.75, as seen by the vanishing of S_E.
A final transition into a paramagnetic state is seen before the antiferromagnetic Kitaev limit.
Discussion
The highlight of our numerical results is that, in the presence of bond anisotropy a_K, there exists an extended quantum spin liquid region which is adiabatically connected to the ferromagnetic KSL.
The model we have studied is motivated by experiments on RuCl_3 and earlier ab initio computations<cit.>.
In a recent inelastic neutron scattering experiment on RuCl_3, it is found that the continuum of finite energy excitations exists both below and above the magnetic transition temperature, even though the low-temperature ground state is the zig-zag long-range ordered state<cit.>.
The inelastic neutron scattering data for the continuum show the “star”-shape intensity that extends from the zone center towards the M points of the Brillouin zone.
Recall that the static structure factor in our ED study shows enhanced (decreased) short-range spin correlations at the M point (zone center) of the Brillouin zone as one moves from the ferromagnetic Kitaev limit to the pure Γ_γ limit. When the strength of the ferromagnetic Kitaev interaction and the Γ_γ interaction become comparable, the short-range spin correlations at both the M point and the zone center show significant intensity, which leads to the “star”-shape structure in momentum space.
This behavior may be favorably compared to the finite-energy short-range spin correlations seen in RuCl_3.
Given that the ab initio computations suggest comparable magnitudes of the ferromagnetic Kitaev and Γ_γ interactions in RuCl_3<cit.>, it is conceivable that RuCl_3 may be very close to the quantum spin liquid phase found in our model and, as shown in our work, the introduction of small J_3 would favor the zig-zag magnetically ordered phase observed in RuCl_3.
Finally, more analytical understanding of the connection between the pure Kitaev limit and the quantum spin liquid phases identified in our numerical work would be extremely valuable for future applications on real materials.
Note that a possible incommensurate magnetic ordering cannot be ruled out due to finite cluster size.
However, based on the results of a recent iDMRG study<cit.>, no evidence of incommensurate ordering allowed by their momentum cuts was found.
It is also interesting to note that quantum fluctuations do not lift the infinite ground state degeneracy of the classical model for positive Γ, while they may lead to incommensurate ordering for negative Γ<cit.>.
Thus it is likely that the positive Γ regime studied here possesses a spin liquid ground state, and the precise nature of the spin liquid is an excellent topic for future study.
Methods
Our results were obtained using the combination of the three independent numerical techniques listed below.
Exact diagonalization —
ED was performed on a 24 site cluster with periodic boundary conditions.
This cluster allows all the symmetries present in the infinite honeycomb lattice and has been used reliably in previous related classical and quantum studies <cit.>.
The Hamiltonian given by Eq. (1) in the main text does not have the U(1) symmetry associated with S^z conservation, making it impossible to block diagonalize into magnetization sectors.
Therefore, the translational symmetry of the 24 site cluster was used to block diagonalize into different momentum sectors to gain more information about its energy spectrum.
The lowest energies and corresponding wavefunctions of each block were then numerically obtained using the Lanczos method. Further details and calculations can be found in the SM.
Thermal pure quantum states — We used the method of thermal pure quantum states<cit.> in our specific heat and thermal entropy calculations. Details about the construction of thermal pure quantum states and the subsequent calculation of specific heat and entropy can be found in the SM.
Infinite time-evolving block decimation algorithm — The Hamiltonian given by Eq. (<ref>) was studied on an infinite Cayley tree with z = 3 connectivity using the infinite time-evolving block decimation algorithm<cit.>. Details about the method and the construction of the ground state can be found in the SM.
Acknowledgements
This work was supported by the NSERC of Canada and the Center for Quantum Materials at the University of Toronto. Y. Y. was supported by JSPS KAKENHI (Grant Nos. 15K17702 and 16H06345) and was supported by PRESTO, JST. Y. Y. was also supported in part by MEXT as a social and scientific priority issue (Creation of new functional devices and high-performance materials to support next-generation industries) to be tackled by using post-K computer. Computations were mainly performed on the GPC supercomputer at the SciNet HPC Consortium. SciNet is funded by: the Canada Foundation for Innovation under the auspices of Compute Canada; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto. A part of the TPQ results were checked by a program package, HΦ<cit.>. We thank helpful discussions with Frank Pollmann, Matthias Gohlke, Shunsuke Furukawa, and Subhro Bhattacharjee. We particularly thank Natalia Perkins and Ioannis Rousochatzakis for informing us of their unpublished ED results on related models.
Contributions
A.C. and Y.Y. performed the exact diagonalization calculations. Y.Y. performed the thermal pure quantum states calculations. G.W. performed the iTEBD calculations. H.-Y.K. and Y.B.K. supervised the study. All authors contributed to the writing of the manuscript.
Competing Interests
The authors declare no competing financial or non-financial interests.
Data Availability
All relevant data is available from the corresponding author.
[Balents (2010)] L. Balents, Spin liquids in frustrated magnets, Nature 464, 199 (2010).
[Shimizu et al. (2003)] Y. Shimizu, K. Miyagawa, K. Kanoda, M. Maesato, and G. Saito, Spin Liquid State in an Organic Mott Insulator with a Triangular Lattice, Phys. Rev. Lett. 91, 107001 (2003).
[Helton et al. (2007)] J. S. Helton, K. Matan, M. P. Shores, E. A. Nytko, B. M. Bartlett, Y. Yoshida, Y. Takano, A. Suslov, Y. Qiu, J.-H. Chung, D. G. Nocera, and Y. S. Lee, Spin Dynamics of the Spin-1/2 Kagome Lattice Antiferromagnet ZnCu_3(OH)_6Cl_2, Phys. Rev. Lett. 98, 107204 (2007).
[Okamoto et al. (2007)] Y. Okamoto, M. Nohara, H. Aruga-Katori, and H. Takagi, Spin-Liquid State in the S = 1/2 Hyperkagome Antiferromagnet Na_4Ir_3O_8, Phys. Rev. Lett. 99, 137207 (2007).
[Yamashita et al. (2010)] M. Yamashita, N. Nakata, Y. Senshu, M. Nagata, H. M. Yamamoto, R. Kato, T. Shibauchi, and Y. Matsuda, Highly Mobile Gapless Excitations in a Two-Dimensional Candidate Quantum Spin Liquid, Science 328, 1246 (2010).
[Han et al. (2012)] T.-H. Han, J. S. Helton, S. Chu, D. G. Nocera, J. A. Rodriguez-Rivera, C. Broholm, and Y. S. Lee, Fractionalized excitations in the spin-liquid state of a kagome-lattice antiferromagnet, Nature 492, 406 (2012).
[Jackeli and Khaliullin (2009)] G. Jackeli and G. Khaliullin, Mott Insulators in the Strong Spin-Orbit Coupling Limit: From Heisenberg to a Quantum Compass and Kitaev Models, Phys. Rev. Lett. 102, 017205 (2009).
[Witczak-Krempa et al. (2014)] W. Witczak-Krempa, G. Chen, Y. B. Kim, and L. Balents, Correlated Quantum Phenomena in the Strong Spin-Orbit Regime, Annu. Rev. Condens. Matter Phys. 5, 57 (2014).
[Rau, Lee, and Kee (2016)] J. G. Rau, E. K.-H. Lee, and H.-Y. Kee, Spin-Orbit Physics Giving Rise to Novel Phases in Correlated Systems: Iridates and Related Materials, Annu. Rev. Condens. Matter Phys. 7, 195 (2016).
[Singh et al. (2012)] Y. Singh, S. Manni, J. Reuther, T. Berlijn, R. Thomale, W. Ku, S. Trebst, and P. Gegenwart, Relevance of the Heisenberg-Kitaev Model for the Honeycomb Lattice Iridates A_2IrO_3, Phys. Rev. Lett. 108, 127203 (2012).
[Plumb et al. (2014)] K. Plumb, J. Clancy, L. Sandilands, V. Vijay Shankar, Y. Hu, K. Burch, H.-Y. Kee, and Y.-J. Kim, α-RuCl_3: A spin-orbit assisted Mott insulator on a honeycomb lattice, Phys. Rev. B 90, 041112 (2014).
[Modic et al. (2014)] K. A. Modic, T. E. Smidt, I. Kimchi, N. P. Breznay, A. Biffin, S. Choi, R. D. Johnson, R. Coldea, P. Watkins-Curry, G. T. McCandless, J. Y. Chan, F. Gandara, Z. Islam, A. Vishwanath, A. Shekhter, R. D. McDonald, and J. G. Analytis, Realization of a three-dimensional spin-anisotropic harmonic honeycomb iridate, Nat. Commun. 5, 4203 (2014).
[Takayama et al. (2015)] T. Takayama, A. Kato, R. Dinnebier, J. Nuss, H. Kono, L. Veiga, G. Fabbris, D. Haskel, and H. Takagi, Hyperhoneycomb Iridate β-Li_2IrO_3 as a Platform for Kitaev Magnetism, Phys. Rev. Lett. 114, 077202 (2015).
[Rau, Lee, and Kee (2014)] J. G. Rau, E. K.-H. Lee, and H.-Y. Kee, Generic Spin Model for the Honeycomb Iridates beyond the Kitaev Limit, Phys. Rev. Lett. 112, 077204 (2014).
[Ran et al. (2017)] K. Ran, J. Wang, W. Wang, Z.-Y. Dong, X. Ren, S. Bao, S. Li, Z. Ma, Y. Gan, Y. Zhang, J. T. Park, G. Deng, S. Danilkin, S.-L. Yu, J.-X. Li, and J. Wen, Spin-Wave Excitations Evidencing the Kitaev Interaction in Single Crystalline α-RuCl_3, Phys. Rev. Lett. 118, 107203 (2017).
[Wang et al. (2017)] W. Wang, Z.-Y. Dong, S.-L. Yu, and J.-X. Li, Theoretical investigation of magnetic dynamics in α-RuCl_3, Phys. Rev. B 96, 115103 (2017).
[Kim and Kee (2016)] H.-S. Kim and H.-Y. Kee, Crystal structure and magnetism in α-RuCl_3: An ab initio study, Phys. Rev. B 93, 155143 (2016).
[Winter et al. (2016)] S. M. Winter, Y. Li, H. O. Jeschke, and R. Valenti, Challenges in design of Kitaev materials: Magnetic interactions from competing energy scales, Phys. Rev. B 93, 214431 (2016).
[Yadav et al. (2016)] R. Yadav, N. A. Bogdanov, V. M. Katukuri, S. Nishimoto, J. van den Brink, and L. Hozoi, Kitaev exchange and field-induced quantum spin-liquid states in honeycomb α-RuCl_3, Sci. Rep. 6, 37925 (2016).
[Rau and Kee (2014)] J. G. Rau and H.-Y. Kee, Trigonal distortion in the honeycomb iridates: Proximity of zigzag and spiral phases in Na_2IrO_3, arXiv:1408.4811 (2014).
[Rousochatzakis and Perkins (2017)] I. Rousochatzakis and N. B. Perkins, Classical Spin Liquid Instability Driven By Off-Diagonal Exchange in Strong Spin-Orbit Magnets, Phys. Rev. Lett. 118, 147204 (2017).
[Gohlke et al. (2018)] M. Gohlke, G. Wachtel, Y. Yamaji, F. Pollmann, and Y. B. Kim, Quantum spin liquid signatures in Kitaev-like frustrated magnets, Phys. Rev. B 97, 075126 (2018).
[Imada and Takahashi (1986)] M. Imada and M. Takahashi, Quantum Transfer Monte Carlo Method for Finite Temperature Properties and Quantum Molecular Dynamics Method for Dynamical Correlation Functions, J. Phys. Soc. Jpn. 55, 3354 (1986).
[Jaklic and Prelovsek (1994)] J. Jaklic and P. Prelovsek, Lanczos method for the calculation of finite-temperature quantities in correlated systems, Phys. Rev. B 49, 5065 (1994).
[Hams and De Raedt (2000)] A. Hams and H. De Raedt, Fast algorithm for finding the eigenvalue distribution of very large matrices, Phys. Rev. E 62, 4365 (2000).
[Sugiura and Shimizu (2012)] S. Sugiura and A. Shimizu, Thermal Pure Quantum States at Finite Temperature, Phys. Rev. Lett. 108, 240401 (2012).
[Sugiura and Shimizu (2013)] S. Sugiura and A. Shimizu, Canonical Thermal Pure Quantum State, Phys. Rev. Lett. 111, 010401 (2013).
[Chaloupka and Khaliullin (2015)] J. Chaloupka and G. Khaliullin, Hidden symmetries of the extended Kitaev-Heisenberg model: Implications for the honeycomb-lattice iridates A_2IrO_3, Phys. Rev. B 92, 024413 (2015).
[Sears et al. (2015)] J. A. Sears, M. Songvilay, K. W. Plumb, J. P. Clancy, Y. Qiu, Y. Zhao, D. Parshall, and Y.-J. Kim, Magnetic order in α-RuCl_3: A honeycomb-lattice quantum magnet with strong spin-orbit coupling, Phys. Rev. B 91, 144420 (2015).
[Johnson et al. (2015)] R. D. Johnson, S. C. Williams, A. A. Haghighirad, J. Singleton, V. Zapf, P. Manuel, I. I. Mazin, Y. Li, H. O. Jeschke, R. Valentí, and R. Coldea, Monoclinic crystal structure of α-RuCl_3 and the zigzag antiferromagnetic ground state, Phys. Rev. B 92, 235119 (2015).
[Cao et al. (2016)] H. Cao, A. Banerjee, J.-Q. Yan, C. Bridges, M. Lumsden, D. Mandrus, D. Tennant, B. Chakoumakos, and S. Nagler, Low-temperature crystal and magnetic structure of α-RuCl_3, Phys. Rev. B 93, 134423 (2016).
[Nasu et al. (2015)] J. Nasu, M. Udagawa, and Y. Motome, Thermal fractionalization of quantum spins in a Kitaev model: Temperature-linear specific heat and coherent transport of Majorana fermions, Phys. Rev. B 92, 115122 (2015).
[Yamaji et al. (2016)] Y. Yamaji, T. Suzuki, T. Yamada, S.-i. Suga, N. Kawashima, and M. Imada, Clues and criteria for designing a Kitaev spin liquid revealed by thermal and spin excitations of the honeycomb iridate Na_2IrO_3, Phys. Rev. B 93, 174425 (2016).
[Note 1] The height of the plateau in the temperature dependence of entropy is also examined by using a 32-site cluster.
[Kitaev (2006)] A. Kitaev, Anyons in an exactly solved model and beyond, Ann. Phys. 321, 2 (2006).
[Knolle et al. (2014)] J. Knolle, D. Kovrizhin, J. Chalker, and R. Moessner, Dynamics of a Two-Dimensional Quantum Spin Liquid: Signatures of Emergent Majorana Fermions and Fluxes, Phys. Rev. Lett. 112, 207203 (2014).
[Vidal (2007)] G. Vidal, Classical Simulation of Infinite-Size Quantum Lattice Systems in One Spatial Dimension, Phys. Rev. Lett. 98, 070201 (2007).
[Kimchi, Analytis, and Vishwanath (2014)] I. Kimchi, J. G. Analytis, and A. Vishwanath, Three-dimensional quantum spin liquids in models of harmonic-honeycomb iridates and phase diagram in an infinite-D approximation, Phys. Rev. B 90, 205126 (2014).
[Banerjee et al. (2017)] A. Banerjee, J. Yan, J. Knolle, C. A. Bridges, M. B. Stone, M. D. Lumsden, D. G. Mandrus, D. A. Tennant, R. Moessner, and S. E. Nagler, Neutron scattering in the proximate quantum spin liquid α-RuCl_3, Science 356, 1055 (2017).
[Chaloupka, Jackeli, and Khaliullin (2010)] J. Chaloupka, G. Jackeli, and G. Khaliullin, Kitaev-Heisenberg Model on a Honeycomb Lattice: Possible Exotic Phases in Iridium Oxides A_2IrO_3, Phys. Rev. Lett. 105, 027204 (2010).
[Kawamura et al. (2017)] M. Kawamura, K. Yoshimi, T. Misawa, Y. Yamaji, S. Todo, and N. Kawashima, Quantum lattice model solver HΦ, Comput. Phys. Commun. 217, 180 (2017).
§ SUPPLEMENTARY MATERIALS
§ EXACT DIAGONALIZATION
§.§ Ground state energy and S_q in the a_K=0 limit
When the Kitaev exchange is isotropic a_K=0, three discontinuities in 1/N∂ E_0/∂ϕ, which are absent with slight anisotropy a_K=0.1, can be seen when ϕ/π∈ [0,0.5].
These discontinuities are reflected in S_q as can be seen in Fig. <ref>.
The general qualitative picture, however, is the same as with slight anisotropy: comparable Γ- and M-point spin correlations when ϕ/π∈ [0,0.5] lead to a “star”-shaped pattern in S_q in the Brillouin zone (BZ).
As when a_K=0.1, two large discontinuities in 1/N∂ E_0/∂ϕ and a dramatic change in S_q when K^γ > 0 are evidence of an extended magnetically ordered phase with dominating K- and Γ^'-point correlations.
The inconsistent appearance of the other discontinuities in 1/N∂ E_0/∂ϕ (and associated small discontinuities in S_q) with changing anisotropy parameter is likely a manifestation of finite-size effects. Indeed, the location of these small discontinuities depends on system size (N = 18, 24).
§.§ Phase diagram for various a_K ≠ 0
We investigated the phase diagram between ϕ/π = 0 and ϕ/π = 0.5 for different a_K as shown in Fig. <ref> calculated by ED. The discontinuities seen in ∂ E_0/∂ϕ when a_K = 0 disappear for a_K ≥ 0.06. The smooth connection between ϕ/π = 0 and ϕ/π = 0.5 limits is found to be robust for a wide range of a_K.
§.§ Finite size spectrum
The Hamiltonian was block diagonalized in terms of eigenvalues of the translation operators T_1,2 acting on the spin configuration on the 24 site cluster. In this notation, any spin configuration in the S_z basis, |s⟩≡ |S_1^z… S_24^z⟩, on the 24 site cluster can be translated as T_1^nT_2^m|s⟩ where n = 0, ... 5 and m = 0,1. Periodic boundary conditions are imposed such that T_1^6 = 1 and T_2^2 = 1. Thus there are 12 unique eigenvalues of T_1^nT_2^m (there are two atoms in the unit cell) of the form e^2π i (k_x/N_1 + k_y/N_2), with N_1 = 6 and N_2 = 2 on the 24-site cluster. The momentum eigenvalues are labeled by the integer values k_x and k_y.
For all ϕ, the ground state is in the q ≡ (k_x,k_y) = 0 momentum sector.
The energy spectra corresponding to ϕ/π = 0.5 and 0 are shown in Fig. <ref>a.
In the Γ-only limit, the ground state is accompanied close in energy by 3 excited states in the q = 0 sector, the first excited state being doubly degenerate.
These states are separated by a gap from the rest of the spectrum.
The Kitaev limit hosts a doubly degenerate ground state within ED.
It is found that when varying ϕ and keeping the Kitaev term isotropic, the four states in the q=0 sector experience level crossing, shown in Fig. <ref>b, and account for the first-order phase transitions found when ϕ∈ [0,π/2] seen in Fig. <ref>.
The location of these level crossings when ϕ∈ [0,π/2] depends on system size (N = 18, 24); they are thus attributed to finite-size effects.
By contrast, there is no level crossing when introducing small anisotropy a_K=0.1 to the Kitaev exchange.
The energy spectrum surrounding the exactly solvable point ϕ/π = 0.75 tends to be degenerate and is qualitatively different from the spectrum in the rest of the phase space.
This result serves as a way to contrast the energy spectrum of the magnetically ordered phase with the rest of the phase diagram.
§.§ J_3 perturbation
Under the effect of a 3rd n.n. J_3 > 0, the intensity in S_q at the M-points in the BZ was shown in the main text to be enhanced.
This result suggests that a phase transition into the experimentally observed zig-zag magnetically ordered state could occur for sufficiently large J_3 > 0.
Fig. <ref> shows signatures of a possible phase transition when investigating the second derivative of the energy (dashed lines) and fidelity susceptibility<cit.> (solid lines) χ_F when ϕ/π = 0.5, 0.1 and 0.2.
The broad peaks in both χ_F and -1/N∂^2E_0/∂ϕ^2 could be a result of finite size effect, and makes it difficult to estimate the necessary J_3 that would induce a transition into the zig-zag ordered phase.
§ THERMAL PURE QUANTUM STATES
To capture thermal excitations of Eq. (1) in the main text, and examine how the Kitaev spin liquid is deformed in the presence of finite Γ, we systematically calculate heat capacity by using typical pure states<cit.>.
Several pioneering studies <cit.> have shown that averages over a few tens of pure states can replace a canonical ensemble.
Recently, a statistical mechanics proof for the replacement of a canonical ensemble by a single pure state was given in Refs.SSugiura2012,SSugiura2013, where such a pure state that replaces a canonical ensemble is called a thermal pure quantum (TPQ) state<cit.>.
We briefly summarize the construction of TPQ states following Ref. SSugiura2012.
A TPQ state of N quantum S=1/2 spins at infinite temperatures is simply given by a random vector,
|ϕ_+∞⟩=∑_i=0^2^N-1c_i|i⟩,
where |i⟩ is represented by the real-space S=1/2 basis and specified by a binary representation of a decimal number and {c_i} is a set of random complex numbers with the normalization condition ∑_i=0^2^N-1 |c_i|^2=1.
Then, by multiplying a wave vector by the hamiltonian Ĥ, we construct the TPQ states at lower temperatures as follows: Starting with an initial (or the 0-th step) vector |Φ_0⟩=|ϕ_+∞⟩, the k-th step vector |Φ_k⟩ (k≥ 1) is constructed as
|Φ_k⟩=(Λ-Ĥ)|Φ_k-1⟩/√(⟨Φ_k-1|(Λ-Ĥ)^2|Φ_k-1⟩),
where Λ is a constant larger than the largest eigenvalue of Ĥ.
The above k-th step vector is a TPQ state at a finite temperature T.
The corresponding inverse temperature β=(k_BT)^-1 is determined through the following formula <cit.>,
β=2k_B k/Λ-⟨Φ_k|Ĥ|Φ_k⟩+O(1/N),
where k_B is the Boltzmann constant.
In other words, a TPQ state at temperature T is given as
|ϕ_T⟩=|Φ_k⟩.
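The recursion above is compact enough to sketch directly; the following Python fragment (our illustration, with k_B = 1 and a dense Hamiltonian matrix, whereas in practice only sparse products H @ phi are needed) generates one TPQ flow together with the inverse-temperature estimate given above:

import numpy as np

def tpq_flow(H, Lam, k_max, seed=0):
    """Yield (beta, energy, state) along one TPQ sequence,
    |Phi_k> ~ (Lam - H) |Phi_{k-1}>, with beta = 2k / (Lam - <H>)
    up to O(1/N) corrections."""
    rng = np.random.default_rng(seed)
    dim = H.shape[0]
    phi = rng.normal(size=dim) + 1j * rng.normal(size=dim)   # |Phi_0> = |phi_{+inf}>
    phi /= np.linalg.norm(phi)
    for k in range(1, k_max + 1):
        phi = Lam * phi - H @ phi                  # apply (Lam - H)
        phi /= np.linalg.norm(phi)
        energy = np.real(phi.conj() @ (H @ phi))   # <Phi_k|H|Phi_k>
        yield 2.0 * k / (Lam - energy), energy, phi

Repeating the flow for several seeds provides the error estimate from atypical states discussed below.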
The heat capacity and entropy of Ĥ are then estimated by using a TPQ state |ϕ_T⟩.
In the thermodynamic limit, a single TPQ state |ϕ_T⟩ indeed replaces a canonical ensemble and yields the exact heat capacity and entropy.
However, for finite N, the contribution from atypical states is not negligible.
Therefore, in the present study, we estimate the errors that originate from the atypical states by
using the standard deviation of the results obtained from several initial random vectors.
Here, we calculate the heat capacity C by using fluctuations of the internal energy.
Ideally, the following equation holds:
C(T)=k_ B⟨ϕ_T|Ĥ^2|ϕ_T⟩-⟨ϕ_T|Ĥ|ϕ_T⟩^2/(k_ BT)^2.
The entropy S is estimated by integrating C/T from high temperatures as
S=Nk_Bln2-∫_T^+∞dT' C/T'.
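Both estimators translate into a few lines of code (a sketch with k_B = 1; the descending temperature grid and the trapezoidal rule are our choices, not specified in the text):

import numpy as np

def heat_capacity(phi, H, T):
    """C(T) = (<H^2> - <H>^2) / T^2 in a TPQ state phi at temperature T."""
    Hphi = H @ phi
    E = np.real(phi.conj() @ Hphi)
    E2 = np.real(Hphi.conj() @ Hphi)    # <H^2>, since H is Hermitian
    return (E2 - E * E) / T ** 2

def entropy_from_C(temps, C, N):
    """S(T) = N ln 2 - int_T^inf C/T' dT' on a descending grid temps,
    where temps[0] is the highest temperature (approximating T = inf)."""
    S = [N * np.log(2.0)]
    for i in range(1, len(temps)):
        dT = temps[i - 1] - temps[i]    # positive step downward in T
        S.append(S[-1] - 0.5 * (C[i - 1] / temps[i - 1] + C[i] / temps[i]) * dT)
    return np.array(S)

Here T = 1/β, with β taken from the TPQ flow sketched above.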
§ ITEBD METHOD
We also studied Eq. (1) in the main text on an infinite Cayley tree with z=3 connectivity, using the infinite time-evolving block decimation algorithm<cit.> (iTEBD).
Within this approach, the ground state is approximated by a tensor product state (TPS) defined on an infinite tree lattice.
A Cayley tree TPS |ψ⟩ may be defined by site-tensors T_i_rj_rk_r^(r),s_r, and diagonal bond-matrices λ_i_ri_r'^⟨ rr'⟩.
Here r,r' denote sites, s_r=↑,↓ is a site's spin index, and i_r,j_r,k_r are its bond indices, for the x,y,z bonds, respectively, each running up to the bond dimension χ.
The TPS is formally given by
|ψ⟩ = ∏_r∑_i_rj_rk_rs_r T_i_rj_rk_r^(r),s_r∏_⟨ rr'⟩∈ xλ_i_ri_r'^⟨ rr'⟩∏_⟨ rr'⟩∈ yλ_j_rj_r'^⟨ rr'⟩∏_⟨ rr'⟩∈ zλ_k_rk_r'^⟨ rr'⟩ |s_r⟩
In the TPS canonical form, the diagonal values of
λ_i_ri_r'^⟨ rr'⟩ determine the state's Schmidt
decomposition at the corresponding bond. Therefore, the entanglement
entropy for partitioning the system at that bond is simply given by
S_E=-∑_iλ_ii^2logλ_ii^2. Assuming the ground
state is periodic, the Hilbert space can be greatly reduced by
considering only a subset of TPSs defined by a periodic arrangement of
a small number of independent tensors. In our calculation we consider
an eight-site periodicity which is
compatible with the simplest magnetically ordered states expected in
this system. Under such an assumption, there are eight independent
T site-tensors and twelve independent λ bond-matrices.
The ground state is obtained by projection, iteratively evolving the
TPS in imaginary time until convergence is reached. Technically, the
time evolution operator U=e^-Δτ H, where H is the
Hamiltonian, and Δτ is a small time step, is applied using a
fourth-order Suzuki-Trotter
decomposition<cit.>, separately evolving
each (independent) bond at a time. Each time the evolution operator is
applied, the bond dimension (or Schmidt rank) χ generally
increases, making the algorithm unmanageable. To avoid this, a fixed
χ is maintained by applying a truncated singular value
decomposition (SVD) to the evolved TPS. Thus, larger values of χ
yield a more accurate ground state. Additional pure SVD steps,
applied between time evolution operations, are needed in order to
bring the TPS closer to the canonical form<cit.>.
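A schematic single-bond update might look as follows (a sketch under strong simplifications: the third tree index of the site tensors is suppressed, so this is closer to a one-dimensional iTEBD step than to the full Cayley-tree algorithm; the names and array shapes are our assumptions):

import numpy as np
from scipy.linalg import svd

def bond_update(theta, gate, chi):
    """One imaginary-time step on a single bond. theta carries the legs
    (left, s1, s2, right), with the neighboring lambda matrices already
    absorbed; gate = scipy.linalg.expm(-dtau * h_bond).reshape(2, 2, 2, 2),
    where h_bond is the 4x4 two-site term. Returns the truncated tensors,
    the new bond spectrum, and the entanglement entropy S_E."""
    chiL, d1, d2, chiR = theta.shape
    theta = np.einsum('abij,lijr->labr', gate, theta)   # theta' = e^{-dtau h} theta
    M = theta.reshape(chiL * d1, d2 * chiR)
    U, s, Vh = svd(M, full_matrices=False)
    keep = min(chi, len(s))
    s = s[:keep] / np.linalg.norm(s[:keep])             # truncate and renormalize
    A = U[:, :keep].reshape(chiL, d1, keep)
    B = Vh[:keep, :].reshape(keep, d2, chiR)
    S_E = -np.sum(s**2 * np.log(s**2))                  # S_E = -sum lambda^2 log lambda^2
    return A, s, B, S_E

Larger bond dimensions χ reduce the truncation error of the SVD step at the cost of more expensive contractions.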
We begin by iteratively evolving a random TPS with a time step of
Δτ=0.1 until the entanglement entropies S_E on all bonds
have converged. We then reduce Δτ by a half and continue
evolving, again until convergence is achieved. This process is
repeated until Δτ<10^-5. Since this method is defined
directly in the thermodynamic limit, there are no finite size effects,
and one may arrive at ground states which spontaneously break
underlying symmetries.
Generally, TPSs are well suited to describe gapped phases, but not
gapless ones. In the Kitaev limit, it is possible to have a gapped
phase by introducing bond anisotropy<cit.>. When
applying the iTEBD method to Eq. (1), we similarly found that
the calculation was unstable in the isotropic limit, and a small bond
anisotropy was required to stabilize it.
By calculating the zero energy density of states for Majorana fermions hopping on the anisotropic infinite tree, Kimchi et al.<cit.> were able to determine that in the K limit there is a gapped phase for a_K≳0.05.
[Gohlke et al. (2017)] M. Gohlke, G. Wachtel, Y. Yamaji, F. Pollmann, and Y. B. Kim, arXiv:1706.09908 (2017).
[Albuquerque et al. (2010)] A. F. Albuquerque, F. Alet, C. Sire, and S. Capponi, Phys. Rev. B 81, 064418 (2010).
[Reimann (2007)] P. Reimann, Phys. Rev. Lett. 99, 160404 (2007).
[Imada and Takahashi (1986)] M. Imada and M. Takahashi, J. Phys. Soc. Jpn. 55, 3354 (1986).
[Jaklic and Prelovsek (1994)] J. Jaklic and P. Prelovsek, Phys. Rev. B 49, 5065 (1994).
[Hams and De Raedt (2000)] A. Hams and H. De Raedt, Phys. Rev. E 62, 4365 (2000).
[Sugiura and Shimizu (2012)] S. Sugiura and A. Shimizu, Phys. Rev. Lett. 108, 240401 (2012).
[Sugiura and Shimizu (2013)] S. Sugiura and A. Shimizu, Phys. Rev. Lett. 111, 010401 (2013).
[Vidal (2007)] G. Vidal, Phys. Rev. Lett. 98, 070201 (2007).
[Sornborger and Stewart (1998)] A. T. Sornborger and E. D. Stewart, arXiv:quant-ph/9809009 (1998).
[Orús and Vidal (2008)] R. Orús and G. Vidal, Phys. Rev. B 78, 155117 (2008).
[Kitaev (2006)] A. Kitaev, Ann. Phys. 321, 2 (2006).
[Kimchi, Analytis, and Vishwanath (2014)] I. Kimchi, J. G. Analytis, and A. Vishwanath, Phys. Rev. B 90, 205126 (2014).
|
http://arxiv.org/abs/1701.07673v1 | 20170126123928 | Using a new analysis method to extract excited states in the scalar meson sector | [
"Jacob Finkenrath",
"Constantia Alexandrou",
"Joshua Berlin",
"Mattia Dalla Brida",
"Theodoros Leontiou",
"Marc Wagner"
] | hep-lat | [
"hep-lat"
] |
§ INTRODUCTION
In this work we explore AMIAS, a rather new analysis method to extract energy differences and
amplitudes from lattice QCD correlators and correlation matrices. The basic idea is to consider
a very large number of fits to these correlators using Monte Carlo methods, which will lead to
probability distributions for the fit parameters, that are the energy differences and amplitudes.
This avoids the necessity of identifying plateau regions for effective energies. In particular, for unstable
systems and resonances identifying plateaus can be very challenging, since the signal-to-noise ratio
is typically rather poor for large temporal separations. Moreover it overcomes the limitation of extracting at most
N energy eigenstates, when N interpolating fields are used (as it is e.g. the case, when doing a standard GEVP analysis)
We apply AMIAS to analyze correlators computed for studying the a_0(980) meson, which has quantum
numbers I(J^P) = 1(0^+) and mass m_a_0(980)≈ 980 MeV <cit.>. We
use interpolating fields with both two-quark and four-quark structures formed by an up quark u and
an anti-down quark d̅ and, except for one case, a strange and an anti-strange quark s and s̅. The four quarks
are arranged as a meson-meson interpolating field or in a diquark-anti-diquark combination,
which can probe different possible tetraquark structures of a_0(980). We consider six interpolating fields,
𝒪^1 𝒪^q q̅ = 1/√(V_s)∑_x(d̅( x) u( x))
𝒪^2 𝒪^K K̅, point = 1/√(V_s)∑_x(s̅( x) γ_5 u( x)) (d̅( x) γ_5 s( x))
𝒪^3 𝒪^η_sπ, point = 1/√(V_s)∑_x(s̅( x) γ_5 s( x)) (d̅( x) γ_5 u( x))
𝒪^4 𝒪^Q Q̅ = 1/√(V_s)∑_xϵ_a b c(s̅_b( x) (C γ_5) d̅_c^T( x)) ϵ_a d e(u_d^T( x) (C γ_5) s_e( x))
𝒪^5 𝒪^K K̅, 2part = 1/V_s∑_ x, y(s̅( x) γ_5 u( x)) (d̅( y) γ_5 s( y))
𝒪^6 𝒪^η_sπ, 2part = 1/V_s∑_ x, y(s̅( x) γ_5 s( x)) (d̅( y) γ_5 u( y)) ,
where V_s denotes the spatial lattice volume and C the charge conjugation matrix. All of them couple to a_0(980)
and other states with the same quantum numbers. For example, the interpolating fields 𝒪_5 and 𝒪_6
mainly generate the two-meson states K + K and π + η, respectively, which
are expected to have masses close to that of the a_0(980). Note that the interpolating fields 𝒪^2
and 𝒪^3 represent two mesons located at the same point in space with only total momentum zero, but
arbitrary relative momenta involved (a structure resembling a four-quark bound state). In contrast,
𝒪^5 and 𝒪^6 correspond to two mesons with both total and relative momentum zero.
We compute the full six-by-six correlation matrix including both connected and disconnected contributions,
which we neglected in our previous studies <cit.>.
Moreover, here we have increased the statistical accuracy of the correlators
C_j k(t) = ⟨𝒪^j(t) 𝒪^k †(0) ⟩,
included the propagation of strange quarks within a timeslice, and analyzed the correlators with AMIAS.
We use an ensemble of around 500 gauge link configurations with 2+1 flavors of dynamical Wilson clover
quarks and the Iwasaki gauge action, generated by the PACS-CS Collaboration <cit.>. The lattice size
is 64 × 32^3 with lattice spacing a ≈ 0.09 fm and pion mass m_π≈ 300 MeV.
§ CORRELATORS ON A PERIODIC LATTICE
A correlator on a periodic lattice with time extent T can be expanded according to
C_j k(t) = ⟨𝒪^j(t) 𝒪^k †(0) ⟩ = ∑_m,nexp{-ℰ_m (T-t)}⟨ m | 𝒪^j | n ⟩exp{-ℰ_n t}⟨ n | 𝒪^k † | m ⟩/∑_m exp{-ℰ_m T}
with energy eigenstates | m ⟩ and corresponding energy eigenvalues
ℰ_0 ≤ℰ_1 ≤ℰ_2 … (| 0 ⟩ = | Ω⟩ denotes the vacuum).
This correlator can also be expressed in a more convenient form,
C_j k(t) = ⟨𝒪^j(t) 𝒪^k †(0) ⟩ = ∑_m,n c^j_m,n (c^k_m,n)^∗exp{-(ℰ_m + ℰ_n) (T/2)}cosh{Δℰ_n,m (t - T/2)}/∑_m exp{-ℰ_m T}
with c_m,n^j = ⟨ m | 𝒪^j | n ⟩ and Δℰ_n,m = ℰ_n - ℰ_m. For
example, if j = k, if 𝒪^j probes the sector which contains the energy eigenstate | 1 ⟩, if the quantum
numbers of | 1 ⟩ are different from those of the vacuum | Ω⟩, and if the correlator is not contaminated
by multi-particle states (see the discussion below), Eq. (<ref>) reduces to
C_j j(t) = ⟨𝒪^j(t) 𝒪^j †(0) ⟩≈ 2 |c^j_0,1|^2 exp{-Δℰ_1,Ω (T/2)}cosh{Δℰ_1,Ω (t - T/2)}
for sufficiently large t and T. A standard technique to determine the energy difference Δℰ_1,Ω = ℰ_1 - ℰ_Ω,
the mass of state | 1 ⟩, from the asymptotic t behavior of C_j j(t) (Eq. (<ref>)) is to fit
C_j j(t) = A cosh{Δℰ_1,Ω (t - T/2)}
to the lattice QCD results for C_j j(t) with fitting parameters Δℰ_1,Ω and A. Alternatively, one can also solve the equation
C_j j(t)/C_j j(t-a) = cosh{E_eff(t) (t - T/2)}/cosh{E_eff(t) (t-a - T/2)}
with respect to E_eff(t), where E_eff(t) ≈Δℰ_1,Ω. In other words, a plateau-like
behavior of E_eff(t) indicates the mass Δℰ_1,Ω. In practice, however, the temporal
extent T of the lattice is limited and the effective energy E_eff(t) is often very noisy for large t,
rendering a reliable extraction of Δℰ_1,Ω difficult.
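For completeness, the ratio equation above can be solved numerically for E_eff(t), e.g. by bracketing and root finding (a sketch in lattice units a = 1; the bracket is an assumption and the call fails if it does not enclose a sign change):

import numpy as np
from scipy.optimize import brentq

def effective_energy(C, t, T):
    """Solve C[t]/C[t-1] = cosh(E (t - T/2)) / cosh(E (t - 1 - T/2)) for E,
    with C the correlator sampled at integer times t."""
    ratio = C[t] / C[t - 1]
    f = lambda E: (np.cosh(E * (t - T / 2))
                   / np.cosh(E * (t - 1 - T / 2)) - ratio)
    return brentq(f, 1e-6, 10.0)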
When multi-particle states are present in the investigated sector, the determination of low-lying masses is even more difficult.
We sketch this for a simple non-interacting two-meson system, which is generated by an interpolating field 𝒪 = 𝒪^(1)⊗𝒪^(2). Energy eigenstates of such a system can be written as | n ⟩ = | n_1 ⟩^(1)⊗ | n_2 ⟩^(2), where | n_1 ⟩^(1) and | n_2 ⟩^(2) (n_1 , n_2 = 0,1,2,…) are eigenstates of the two Hamiltonians describing the individual mesons. Energy eigenvalues of this two-meson system are ℰ_n_1,n_2^(1+2) = ℰ_n_1^(1) + ℰ_n_2^(2), in particular the lowest energy eigenvalue is ℰ_0,0^(1+2) = ℰ_0^(1) + ℰ_0^(2). Note that in the expansion (<ref>) there is a term which is proportional to cosh{(ℰ_0^(1) - ℰ_0^(2)) (t - T/2)}, i.e. only decaying with the mass difference of the two mesons. In the region of t ≈ T/2 this unwanted term can be more dominant than the signal term, which is proportional to cosh{(ℰ_0,0^(1+2) - ℰ_Ω) (t - T/2)}. Thus, one has to be rather careful when analyzing correlators of sectors containing multi-particle states. This is illustrated by Fig. <ref>,
where we show the correlator using the interpolating field 𝒪^q q̅ (Eq. (<ref>)). Clearly,
there are two effective mass plateaus, one at E_eff(t) ≈ 0.60 for t/a ≲ 7, corresponding to
the mass of a π + η or a K + K two-meson state, and one at E_eff(t) ≈ 0.25 for t/a ≳ 8,
which does not correspond to the mass of any state, but to the mass difference m_η - m_π of two single-meson states.
§ AMIAS
Lattice QCD results for the correlators C_j k(t) = ⟨𝒪^j(t) 𝒪^k †(0) ⟩
with 𝒪^j, j = 1,…,6 defined in Eqs. (<ref>) to (<ref>) can be parameterized and fitted with
the expression given in Eq. (<ref>). Since statistical accuracy is limited, it is sufficient to consider a rather small number
of energy eigenstates, i.e. ∑_m,n , ∑_m →∑_m,n = 0^N-1 , ∑_m = 0^N-1 with N ≲ 10.
With AMIAS <cit.> one can
determine probability distribution functions (PDFs) for the fit parameters Δℰ_n,m (energy differences)
and c^j_m,n (amplitudes). AMIAS is able to deal with a rather large number of parameters by using Monte Carlo
techniques. In contrast to e.g. the GEVP, it is not necessary to identify plateau regions or to specify temporal
fitting ranges.
The PDF for the complete set of fit parameters is given by
P(Δℰ_n,m , c^j_m,n) = 1/Z e^-χ^2/2
with appropriate normalization Z and
χ^2 = ∑_j,k∑_t/a=1^(T-1)/a(C_j k^lattice(t)- C_j k(t))^2/(w_j k(t))^2 ,
which is the well-known χ^2 used in χ^2 minimizing fits (C_j k^lattice(t) denote lattice
QCD results for correlators with corresponding statistical errors w_j k(t)). To obtain the PDF for a
specific fit parameter, one has to integrate in (<ref>) over all other parameters. In particular, the
probability for parameter 𝒜_j to be inside [a,b] (𝒜_j represents either one of the energy
eigenvalue differences Δℰ_n,m or amplitudes c^j_m,n) is
Π(𝒜_j ∈ [a,b]) = ∫_a^b d𝒜_j ∫_-∞^+∞∏_k j d𝒜_k
e^-χ^2/2/∫_-∞^+∞∏_k d𝒜_k
e^-χ^2/2 .
This multi-dimensional integral can be computed with standard Monte Carlo methods. We implemented
a parallel-tempering scheme combined with the Metropolis algorithm as described in detail in
Ref. <cit.>. The parallel-tempering scheme prevents that the algorithm is stuck in a
region around a local minimum of χ^2 and, thus, guarantees ergodicity of the algorithm.
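Stripped of the parallel tempering, the elementary Metropolis kernel for sampling e^{-χ^2/2} can be sketched as follows (illustrative only; the function and parameter names are ours, and the tempered multi-chain exchange of the full method is not reproduced):

import numpy as np

def amias_chain(chi2, p0, n_steps, step, seed=1):
    """Single-chain Metropolis sampling of P ~ exp(-chi^2/2) over the fit
    parameters (energy differences and amplitudes); chi2 is a callable
    implementing the chi^2 defined above from the lattice correlators."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p0, dtype=float)
    c = chi2(p)
    samples = []
    for _ in range(n_steps):
        q = p + step * rng.normal(size=p.size)      # local Gaussian proposal
        cq = chi2(q)
        if np.log(rng.uniform()) < 0.5 * (c - cq):  # accept with min(1, e^{-(cq - c)/2})
            p, c = q, cq
        samples.append(p.copy())
    return np.array(samples)  # histograms of the columns approximate the PDFs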
We have again analyzed the correlator of 𝒪^q q̅ shown in Fig. <ref> (left), this time using AMIAS with the fit function
C(t) = ∑_n=1^2 A_n cosh{Δℰ_n (t - T/2)} .
The resulting PDFs for the four parameters Δℰ_1, Δℰ_2, A_1 and A_2 are shown in Fig. <ref>.
As in Fig. <ref> (right) two energy differences can clearly be identified, a Δℰ_1 ≈ 0.25
(the mass difference m_η - m_π) and a Δℰ_2 ≈ 0.60 (the mass of a π + η or K + K two-meson state).
§ ANALYSIS OF THE 6 × 6 CORRELATION MATRIX
We now use AMIAS to analyze the 6 × 6 correlation matrix with interpolating fields (<ref>) to (<ref>).
The fit function is obtained by restricting (<ref>) to a finite number of terms,
C_j k(t) = ∑_n=1^N c^j_n (c^k_n)^∗cosh{Δℰ_n (t - T/2)} .
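For all matrix elements at once, this fit function can be evaluated in vectorized form (a sketch; the array shapes are our convention):

import numpy as np

def model_correlators(t, T, dE, c):
    """C_jk(t) = sum_n c[j, n] conj(c[k, n]) cosh(dE[n] (t - T/2)),
    with t an array of times, dE the (N,) energy differences and
    c the (J, N) amplitude matrix."""
    w = np.cosh(np.outer(dE, t - T / 2))             # shape (N, len(t))
    return np.einsum('jn,kn,nt->jkt', c, np.conj(c), w)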
In a first analysis we use lattice QCD data, where the propagation of strange quarks within the same
timeslice has been neglected. The interpolating field
𝒪^q q̅ then decouples and, hence, cannot be considered in a tetraquark study
of a_0(980) (cf. also Refs. <cit.>, where the same data has been used).
Moreover, we do not consider the diquark-antidiquark interpolating field 𝒪^Q Q̅. In
Fig. <ref> (top) we show the four lowest masses obtained with N = 8 terms in
(<ref>)[AMIAS is able to determine the optimal N automatically. For details we refer to <cit.>.],
corresponding to the expected two-particle states π + η and K + K with both mesons at rest
(Δℰ_2 and Δℰ_3) as well as with one quantum of relative momentum (Δℰ_4 and Δℰ_5).
Note that AMIAS also finds the energy eigenvalue difference m_η - m_π (Δℰ_1, not shown in Fig. <ref>),
which has already been discussed in previous sections.
Since (c^j_n)^∗ = ⟨ n | 𝒪^j † | Ω⟩, the amplitudes extracted with AMIAS are the coefficients of
the expansions of the trial states 𝒪^j † | Ω⟩ in terms of the extracted energy eigenstates | n ⟩, i.e.
𝒪^j † | Ω⟩≈∑_n=1^N (c^j_n)^∗ | n ⟩ .
More interesting, however, are the opposite expansions, i.e. the extracted energy eigenstates in terms of the trial states,
| n ⟩≈∑_j=2,3,5,6 v^j_n 𝒪^j † | Ω⟩ .
It is easy to show that the matrix formed by the coefficients v^j_n is the inverse of the matrix formed by the coefficients (c^j_n)^∗ (j = 2,3,5,6, n = 2,…,5) up to
exponentially small corrections, i.e. ∑_j v^j_m (c^j_n)^∗ = δ_m,n. Alternatively,
one can also use the AMIAS samples for Δℰ_n and c^j_n and Eq. (<ref>) to reconstruct a correlation matrix for each sample. Solving standard GEVPs for each reconstructed
correlation matrix yields eigenvectors with components identical to v^j_n. The coefficients |v^j_n| are shown in Fig. <ref> (bottom) (not the PDFs, but the most likely values).
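Numerically the coefficients v^j_n therefore follow from the extracted amplitudes by a single matrix inversion; a minimal sketch (with the 4 × 4 block j = 2,3,5,6, n = 2,…,5 stored 0-based):

import numpy as np

# Sketch: recover v[j,m] from sum_j v[j,m] * conj(c[j,n]) = delta_{mn},
# i.e. V^T C = 1 with C[j,n] = (c^j_n)^*, hence V = inv(C)^T.
def eigenvector_components(c):
    C = np.conj(c)               # rows: trial states j, columns: eigenstates n
    return np.linalg.inv(C).T    # |V[j,n]| are the overlaps plotted in the figure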
Clearly, Δℰ_2 and Δℰ_3 correspond to two-particle states π + η and K + K with
both mesons at rest, since the coefficients v^j_n show almost exclusively overlap to the trial states
𝒪^η_sπ, 2part† | Ω⟩ and 𝒪^K K̅, 2part† | Ω⟩. Δℰ_4
and Δℰ_5 are close to the expected two-particle states with one quantum of relative momentum and the overlaps
shown by v^j_n are consistent with this interpretation. There is no indication for any additional state in the region of 1000 MeV,
which could correspond to a_0(980). These findings are consistent with our previous study using ETMC gauge link configurations <cit.>.
When including the propagation of strange quarks within the same timeslice, the interpolating field 𝒪^qq̅ couples
to the four-quark interpolating fields given in Eqs. (<ref>) to (<ref>). This introduces, however, a lot of statistical noise and
it is rather difficult to resolve energy differences precisely. In Fig. <ref> we show preliminary AMIAS results for
the full 6 × 6 correlation matrix, where N = 12 terms in Eq. (<ref>) have been used. The two lowest energy eigenstates exhibit
rather clear signals and correspond to the π + η and K + K two-meson states. Even though higher excitations are less prominent,
we find strong indication for an additional state with mass Δℰ_3 ≈ 0.6 / a ≈ 1300 MeV, which
is below the expectation for the two-particle states with one quantum of relative momentum and, hence, might be a candidate for the a_0(980).
§ CONCLUSIONS
With AMIAS one can extract energy differences and amplitudes without the necessity to identify plateau regions or to specify temporal
fitting ranges. We have shown that AMIAS can successfully be used to analyze correlators with strong contributions from several energy
eigenstates as well as rather noisy correlation matrices. An example for the latter is e.g. the full 6 × 6 correlation matrix
for the a_0(980) channel. Another major advantage of AMIAS might be that it does not require all elements of a correlation matrix,
i.e. particularly noisy elements, or elements which are very time-consuming to compute, can be omitted. We plan to address this in a future publication.
Acknowledgements:
M.W. acknowledges support by the Emmy Noether Programme of the DFG (German Research Foundation),
grant WA 3000/1-1. This work was supported in part by the Helmholtz International Center for FAIR within the framework
of the LOEWE program launched by the State of Hesse.
This work was cofunded by the European Regional Development Fund and the Republic of Cyprus through the Research
Promotion Foundation (Project Cy-Tera NEA YΠOΔOMH/ΣTPATH/0308/31) by the grant cypro914.
99
Agashe:2014kda
K. A. Olive et al. [PDG Collaboration],
Chin. Phys. C 38, 090001 (2014).
Abdel-Rehim:2014zwa
J. Berlin et al.,
PoS LATTICE 2014, 104 (2014)
[arXiv:1410.8757 [hep-lat]].
Berlin:2015faa
J. Berlin et al.,
PoS LATTICE 2015, 096 (2015)
[arXiv:1508.04685 [hep-lat]].
Berlin:2016zci
J. Berlin et al.,
PoS LATTICE 2016, 128 (2016)
[arXiv:1611.07762 [hep-lat]].
Aoki:2008sm
S. Aoki et al. [PACS-CS Collaboration],
Phys. Rev. D 79, 034503 (2009)
[arXiv:0807.1661 [hep-lat]].
Alexandrou:2008bp
C. Alexandrou, C. N. Papanicolas and E. Stiliaris,
PoS LATTICE 2008, 099 (2008)
[arXiv:0810.3982 [hep-lat]].
Papanicolas:2012sb
C. N. Papanicolas and E. Stiliaris,
arXiv:1205.6505 [hep-ph].
Alexandrou:2014mka
C. Alexandrou, T. Leontiou, C. N. Papanicolas and E. Stiliaris,
Phys. Rev. D 91, 014506 (2015)
[arXiv:1411.6765 [hep-lat]].
Alexandrou:2012rm
C. Alexandrou et al.,
JHEP 1304, 137 (2013)
[arXiv:1212.1418 [hep-lat]].
|
http://arxiv.org/abs/1701.08146v1 | 20170126182502 | Surface tension of compressed, superheavy atoms | [
"Jorge A. Rueda",
"Yuan-Bin Wu",
"She-Sheng Xue"
] | nucl-th | [
"nucl-th",
"astro-ph.HE",
"astro-ph.SR"
] |
Surface tension of compressed, superheavy atoms
Dipartimento di Fisica and ICRA, Sapienza Università
di Roma, Piazzale Aldo Moro 5, I-00185 Rome, Italy
ICRANet, Piazza della Repubblica 10, I-65122 Pescara,
Italy
Based on the relativistic mean field theory and the Thomas-Fermi approximation,
we study the surface properties of compressed, superheavy atoms.
By compressed, superheavy atom we mean an atom composed of
a superheavy nuclear core (superheavy nucleus) with mass number
of the order of 10^4, and degenerate electrons that neutralize the system.
Some electrons penetrate into the superheavy nuclear core and the rest surround it
up to a distance that depends upon the compression level. Taking into account the
strong, weak, and electromagnetic interactions, we numerically study
the structure of compressed, superheavy atoms and calculate the
nuclear surface tension and Coulomb energy. We analyze the influence
of the electron component and the background matter on the nuclear
surface tension and Coulomb energy of compressed, superheavy atoms.
We also compare and contrast these results in the case of
compressed, superheavy atoms with phenomenological results in
nuclear physics and the results of the core-crust interface of
neutron stars with global charge neutrality. Based on the numerical
results we study the instability against Bohr-Wheeler surface
deformations in the case of compressed, superheavy atoms. The
results in this article show the possibility of the existence of such
compressed, superheavy atoms, and provide evidence of strong effects of
the electromagnetic interaction and electrons on the structure of
compressed, superheavy atoms.
21.10.-k, 05.30.Fk, 26.60.-c
§ INTRODUCTION
It has been shown recently that the Einstein-Maxwell-Thomas-Fermi
(EMTF) equations <cit.> supersede the traditional
Tolman-Oppenheimer-Volkoff (TOV) <cit.>
equations used for the construction of neutron star equilibrium
configurations, when taking into account the strong, weak,
electromagnetic, and gravitational interactions. In contrast to the
imposing of the condition of local charge neutrality in the
traditional TOV approach, the condition of global charge neutrality
is applied in the EMTF approach, owing to the fact that the
traditional treatment imposing the condition of local charge
neutrality is not consistent with the field equations and
microphysical conditions of equilibrium for the system of neutrons,
protons, and electrons in β equilibrium and obeying
relativistic quantum statistics <cit.>.
In order to describe the strong interactions between nucleons, the
σ-ω-ρ nuclear model of relativistic mean field
theory (RMFT) <cit.> is adopted in the EMTF
approach. This model contains Dirac nucleons together with a scalar
meson σ and a vector meson ω as well as an isovector
meson ρ. The RMFT has achieved great success in giving a
quantitative description of nuclear properties
<cit.> and understanding the inhomogeneous
structures of low-density nuclear matter which can be realized in
supernovae cores or in neutron star crusts (see
e.g. Refs. <cit.> about the nuclear pasta structures).
As shown in Ref. <cit.>, the self-consistent solution of
the EMTF equations leads to a new structure of neutron stars, which
is significantly different from the neutron star structure obtained
from the TOV equations imposing local charge neutrality
<cit.>. In this new structure of neutron stars, a
transition layer (interface) appears between the core and the crust
of the star, near the nuclear saturation density. There is a
discontinuity in the density at the core-crust transition in this
new structure of neutron stars. The core (bulk region) inside this
transition layer is a hadronic phase and the crust outside this
transition layer is composed of a nuclei lattice and relativistic
degenerate electrons and possibly neutrons at densities below the
nuclear saturation density and higher than the estimated
neutron-drip value ∼ 4.3× 10^11 g cm^-3 <cit.>. Inside the transition region, a very strong electric field
overwhelming the critical field E_c=m^2_e c^3/(e ħ) for vacuum
breakdown appears <cit.>, where m_e is the electron
mass.
The surface properties of nuclear matter such as the surface tension
and the curvature energy play an important role in many situations
and phenomena such as the stability of nuclei, fragment
distributions in heavy-ion collisions, and phase transitions between
different phases of nuclear matter. The surface properties of
nuclear matter have been analyzed a lot in the past few decays for
the matter at the nuclear saturation density
<cit.>,
as well as the matter at the supranuclear regime realized in the
interior of neutron stars <cit.> for the phase
transition region and the pasta structures of the low-density
nuclear matter <cit.>.
The surface properties of the core-crust interface of the new
neutron star structure obtained from the solution of the EMTF
equations has been studied in Ref. <cit.> (see also
Ref. <cit.> for a brief description). We calculated in
Ref. <cit.> the surface tension as well as the
electrostatic energy stored in this core-crust transition layer. We
analyzed the stability of these systems through the Bohr-Wheeler
fission mechanism <cit.>. It was shown in Ref. <cit.>
that the electromagnetic interaction and the presence of degenerate
electrons have evident effects on the surface properties of the
core-crust interface. In the analyses of
Refs. <cit.>, we employed the condition that
the electron density is approximately equal to the proton density in
the core bulk region. Here we consider a more general case in which
the electron density is smaller than the proton density in the core bulk
region. Actually, this is the case of
compressed, superheavy atoms in which some of the electrons have
penetrated into superheavy nuclear cores (superheavy nuclei)
<cit.> (we call them compressed, superheavy
atoms according to Ref. <cit.> in which a similar object was studied).
A compressed, superheavy atom is an atom composed of
a superheavy nuclear core (superheavy nucleus), and degenerate electrons that neutralize the system.
Some electrons penetrate into the superheavy nuclear core and the rest surround it
up to a distance that depends upon the compression level.
Such compressed, superheavy atoms are hypothetical objects
that could possibly appear in the high-density region
of the neutron star crust or in other systems, for example in the
r-processes in gamma-ray bursts; studies of such objects could
provide a better understanding of nuclear physics and nuclear astrophysics. In this
article, we study the surface properties of these compressed, superheavy atoms.
The article is organized as follows. In Sec. <ref>, we
formulate the relativistic equations of motion for the system of
neutrons, protons and electrons fulfilling the strong and
electromagnetic interactions and β equilibrium, and the
equations for governing the nuclear surface tension and Coulomb
energy of compressed, superheavy atoms. In Sec. <ref>, we
present our discussions on the basis of the numerical analysis of
the structure, the nuclear surface tension, and the Coulomb energy
of compressed, superheavy atoms. We also apply the Bohr-Wheeler
fission mechanism <cit.> to analyze the stability of
compressed, superheavy atoms, in Sec. <ref>. We finally
give a summary in Sec. <ref>. We use units with ħ = c =
1 throughout the article.
§ EQUATIONS OF MOTION AND SURFACE TENSION
The system of compressed, superheavy atoms under consideration is
composed of degenerate neutrons, protons, and electrons including
the strong, electromagnetic, and weak interactions and fulfilling
global charge neutrality. In this system, the electron density in
the inside bulk region (n_eb) is smaller than the proton one
(n_bp), i.e., n_eb<n_bp. We adopt the
σ-ω-ρ phenomenological nuclear model of Boguta
and Bodmer <cit.> to describe the strong interactions
between the nucleons. The Lagrangian density of the model we
considered here is given by
ℒ = ℒ_f +
ℒ_σ + ℒ_ω + ℒ_ρ +
ℒ_γ + ℒ_int,
including the free-field Lagrangian densities
ℒ_γ, ℒ_σ,
ℒ_ω, and ℒ_ρ, respectively for
the electromagnetic and the three mesonic fields, the three fermion
species (electrons, protons and neutrons) Lagrangian density
ℒ_f and the interacting part ℒ_int. A
detailed description of this model can be found in
Ref. <cit.>.
We model the compressed, superheavy atom as a spherical droplet,
so the system is spherically symmetric. Within the mean-field
and Thomas-Fermi approximations, the equations of
motion for this system are given by
d^2 V/dr^2 + 2/rdV/dr
= -4π e (n_p - n_e),
d^2 σ/dr^2 + 2/rd σ/dr =
[∂_σ U(σ) + g_s n_s],
d^2 ω/dr^2 + 2/rd ω/dr =
-(g_ω J_0^ω - m_ω^2 ω),
d^2 ρ/dr^2 + 2/rd ρ/dr =
-(g_ρ J_0^ρ - m_ρ^2 ρ),
E_e^F = μ_e - e V = constant,
E_p^F = μ_p + g_ωω + g_ρρ
+ e V = constant,
E_n^F = μ_n + g_ωω - g_ρρ =
constant.
This is a special case of the EMTF system of equations <cit.> without the presence of the gravitational interaction.
Here we have introduced the notation ω_0 ≡ω,
ρ_0 ≡ρ, and A_0 ≡ V for the time components of
the meson fields, where A is the electromagnetic field. μ_i =
√((P_i^F)^2 + m̃_i^2) and n_i = (P_i^F)^3/(3π^2) are
the free chemical potential and the number density of the
i-fermion species (i=n,p,e) with Fermi momentum P_i^F. The
particle effective masses are m̃_N = m_N + g_s σ and
m̃_e = m_e, where m_i is the rest mass of each
i-fermion species. g_s, g_ω, and g_ρ are the
coupling constants of the σ, ω and ρ fields, and
e is the fundamental electric charge. m_σ, m_ω,
and m_ρ are the masses of σ, ω, and ρ.
U(σ) is the scalar self-interaction potential which can be
found in e.g. Refs. <cit.>.
The generalized Fermi energies of electrons, protons, and neutrons,
E_e^F, E_p^F, and E_n^F, derived from the thermodynamic
equilibrium conditions given by the statistical physics of
multicomponent systems, are linked by the β-equilibrium
<cit.> of protons, neutrons, and electrons,
E_n^F = E_p^F + E_e^F.
The scalar density n_s is given by the expectation value
n_s = 2/(2π)^3∑_i=n,p∫_0^P_i^F d^3 k m̃_N/ϵ_i^k(k),
where ϵ_i^k(k) = √(k^2 + m̃^2_i) is the single
particle energy. In the static case, the nonvanishing components of
the currents are
J_0^ch = (n_p - n_e),
J_0^ω = n_b = (n_n + n_p),
J_0^ρ = (n_p -n_n),
here n_b = n_p + n_n is the baryon number density.
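As a concrete illustration of one building block of this coupled system, the Poisson equation (<ref>) for V can be inverted directly on a radial grid via its Green's function; the following minimal sketch assumes given density profiles on a grid truncated at a radius R beyond which the system is neutral, and is not the relaxation scheme used for the full self-consistent problem.

import numpy as np

# Sketch: solve V'' + (2/r) V' = -4*pi*e*(n_p - n_e) with V -> 0 outside via
#   V(r) = (4*pi*e/r) int_0^r dn r'^2 dr' + 4*pi*e int_r^R dn r' dr'.
def cumtrapz0(f, r):
    return np.concatenate(([0.0], np.cumsum(0.5 * np.diff(r) * (f[1:] + f[:-1]))))

def coulomb_potential(r, n_p, n_e, e=1.0):
    dn = n_p - n_e
    inner = cumtrapz0(dn * r**2, r)                 # int_0^r dn r'^2 dr'
    outer = cumtrapz0(dn * r, r)
    outer = outer[-1] - outer                       # int_r^R dn r' dr'
    V_in = np.divide(inner, r, out=np.zeros_like(r), where=r > 0)
    return 4.0 * np.pi * e * (V_in + outer)

The massive meson field equations can be handled analogously with Yukawa-type kernels, the full system then being iterated to self-consistency.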
We would like to mention here that the Thomas-Fermi
approximation and the Thomas-Fermi approximation combined with the
RMFT applied to nuclei are well-known and have achieved great
success in understanding nuclear structures (see, e.g.,
Refs. <cit.>). In the present
study, we apply this approach of the Thomas-Fermi approximation
combined with RMFT to compressed, superheavy atoms,
inspired by our new neutron star model studied in Refs. <cit.>.
One of our major purposes here is to analyze the possibility of the existence
of such “exotic" neutron rich nuclei whose mass numbers are much larger
than those of ordinary nuclei. Another major purpose here is to study the
effects of the electrons and electromagnetic interaction on the surface
properties of such a system. The study presented in this article would give us a further
understanding of the influence of the electromagnetic interaction
and electrons on the surface properties of the core-crust interface
of our new structure of neutron stars <cit.>,
hence give us a further understanding of global charge neutrality
and the structure of neutron stars.
The parameters of the nuclear model, namely the coupling constants
g_s , g_ω, and g_ρ, the meson masses
m_σ, m_ω, and m_ρ, and the third- and
fourth- order constants of the self-scalar interactions g_2 and
g_3 are fixed by fitting nuclear experimental data, such as
saturation density, binding energy per nucleon, symmetry energy,
surface energy, and nuclear incompressibility. We here use the
parameters of the NL3 parametrization <cit.> as the one
used in Refs <cit.>, shown in Table
<ref>.
Now we turn to the analysis of the surface tension of this system. We
construct the surface tension following a similar method in
Ref. <cit.>. Since we treat the compressed, superheavy
atom as a spherical droplet, we assume a spherical surface (the size
of the system under consideration is larger than the one of ordinary
nuclei, so the nuclear curvature energy here is small compared to
the nuclear surface energy) with a small thickness separating one
finite region (inside the nuclear core region) and one semi-infinite
region (outside background region, similar to the outside crust
region in the discussion of Ref. <cit.>). The number
density of the i-fermion species n_i(r) approaches the density
of the i-fermion species n_ib in the origin (the inside
region) as the position r→ 0, and approaches the density
in the outside region of the i-fermion species n_io as
r→ +∞. To construct the surface tension, as in the
case of the semi-infinite matter model, we imagine a reference
system with sharp surfaces at radii r_i (i =
n,p,e,σ,ω,ρ) at which fermion densities and meson
fields fall discontinuously from the bulk region to the outside
region. Following a similar method of Baym-Bethe-Pethick (BBP)
<cit.>, the location of the reference surface for the
i-fermion species is defined by the condition that the reference
system has the same number of i-fermion species as the original
system,
4π∫_0^r_i r^2 d r [n_i(r) - n_ib] + 4π∫_r_i^∞ r^2 d r [n_i(r) - n_io]
= 0, i = n,p,e.
Similar to the definition of reference surfaces for fermions, the
location of the reference surfaces for meson fields are defined by
4π∫_0^r_i r^2 d r [F_i(r) - F_ib] + 4π∫_r_i^∞ r^2 d r [F_i(r) - F_io]
= 0, i = σ,ω,ρ,
where F_i(r) is the time component of the i-meson field,
F_ib is the time component of the i-meson field in the inside
region, and F_io is the time component of the i-meson field in
the outside region.
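On a radial grid, conditions (<ref>) and (<ref>) determine each reference radius through the sign change of a cumulative integral; a minimal sketch (the profile f with bulk value f_b and outside value f_o is assumed given from the solved field equations, on a grid truncated where the profile has settled to f_o):

import numpy as np

# Sketch: find r_i such that
#   int_0^{r_i} r^2 (f - f_b) dr + int_{r_i}^R r^2 (f - f_o) dr = 0,
# where f is a fermion density or meson-field profile. Assumes the
# bracketing sign change exists on the grid.
def cumtrapz0(f, r):
    return np.concatenate(([0.0], np.cumsum(0.5 * np.diff(r) * (f[1:] + f[:-1]))))

def reference_radius(r, f, f_b, f_o):
    inner = cumtrapz0(r**2 * (f - f_b), r)
    outer = cumtrapz0(r**2 * (f - f_o), r)
    g = inner + (outer[-1] - outer)           # defect as a function of the cut radius
    k = np.nonzero(np.sign(g[:-1]) != np.sign(g[1:]))[0][0]
    return r[k] - g[k] * (r[k + 1] - r[k]) / (g[k + 1] - g[k])   # linear root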
Similar to the way of BBP <cit.>, the nuclear surface energy
can be computed as the total energy subtracting off the bulk energy,
E_sur = ∑_i=n,p,σ,ω,ρ 4 π{∫_0^r_i
r^2 [ϵ_i(r) - ϵ_ib] dr
+ ∫_r_i^∞ r^2 [ϵ_i(r) - ϵ_io] dr
},
and the Coulomb energy is
E_coul = 4π∫_0^∞ r^2 ϵ_E(r) dr,
where ϵ_i (r) is the energy density of the i species of
fermion or meson fields, ϵ_ib is the energy density of
the i species of fermion or meson fields in the center of the
system (the inside region), ϵ_io is the energy density of
the i species of fermion or meson field in the outside region, and
ϵ_E (r) is the energy density of the electric field.
Similar to the energy densities given in Ref. <cit.>, the
energy density of the i-fermion species ϵ_i (r) is
ϵ_i (r)
= 1/8π^2{ P_i^F √((P_i^F)^2 + m̃_i^2)[2(P_i^F)^2 +
m̃_i^2]
- m̃_i^4 ln[(P_i^F + √((P_i^F)^2 + m̃_i^2))/m̃_i]},
and the energy densities of the meson fields in this spherical
system are
ϵ_σ(r) = 1/2( dσ/dr)^2 + U(σ),
ϵ_ω(r) = 1/2( dω/dr)^2 + 1/2 m_ω^2 ω^2,
ϵ_ρ(r) = 1/2( dρ/dr)^2 + 1/2 m_ρ^2 ρ^2,
ϵ_E(r) = 1/8π( dV/dr)^2.
The nuclear surface tension is given as the nuclear surface energy
per unit area,
σ_Ns = E_sur/4π r_n^2,
and similarly we obtain the Coulomb energy per unit area (the
surface tension for the electric field)
σ_Cs = E_coul/4π r_n^2,
where r_n is the reference radius of neutrons defined by
Eq. (<ref>). Since the neutron number is much larger than the
proton number in the system, it is reasonable to set the radius
of neutrons to be the radius of the nucleus to estimate the surface
tensions; this is consistent with the existence of the neutron halo
or neutron skin effect <cit.>.
The relation between the nuclear surface energy and the Coulomb
energy is very important for a nucleus. As shown by Bohr and Wheeler
<cit.> when the condition
E_coul > 2E_sur
is satisfied, the nucleus becomes unstable against nuclear fission.
A careful analysis on the derivation of this condition shows that
the Bohr-Wheeler condition given by Eq. (<ref>) applies also to
our system <cit.>.
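Given the solved radial profiles, Eqs. (<ref>)-(<ref>) and the Bohr-Wheeler check reduce to one-dimensional quadratures; a minimal sketch (the dictionaries of profiles, bulk and outside values, and reference radii are assumed inputs obtained as above):

import numpy as np

# Sketch: surface tensions and the Bohr-Wheeler check from radial profiles.
# eps[i], eps_b[i], eps_o[i], r_ref[i]: energy-density profile, bulk value,
# outside value and reference radius for i = n, p, sigma, omega, rho;
# eps_E: electric-field energy density; r_n: neutron reference radius.
def surface_tensions(r, eps, eps_b, eps_o, r_ref, eps_E, r_n):
    E_sur = 0.0
    for i in eps:
        inside = r <= r_ref[i]
        E_sur += 4.0 * np.pi * (
            np.trapz(r[inside]**2 * (eps[i][inside] - eps_b[i]), r[inside])
            + np.trapz(r[~inside]**2 * (eps[i][~inside] - eps_o[i]), r[~inside]))
    E_coul = 4.0 * np.pi * np.trapz(r**2 * eps_E, r)
    area = 4.0 * np.pi * r_n**2
    return E_sur / area, E_coul / area, E_coul > 2.0 * E_sur   # last flag: unstable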
§ NUMERICAL ANALYSIS
Following a similar procedure in Refs. <cit.>,
we can solve the equations (<ref>)-(<ref>) together with
the β-equilibrium (<ref>) to obtain the fermion-density
and meson-field profiles. This system of equations can be
numerically solved with appropriate boundary conditions and
approximations, as shown in Refs. <cit.>.
In order to obtain a solution of these equations, we set a value for
the baryon number density n_bb = n_nb+n_pb in the
region near the center, and we set a small electron density n_eb
= y_e n_pb in the region near the center with electron fraction
y_e<1. As described in Refs. <cit.>, the
fermion densities n_io in the outside region depend on the
density at the base of the background under consideration (similar
to the crust in the discussion of Ref. <cit.>). The
background matter is composed of a nuclei lattice in a background of
degenerate electrons, whose density is denoted here as
n_e^bg. In addition, there are free neutrons in the
background when the density ρ_bg of the background is
higher than the neutron-drip density ρ_drip≈ 4.3
× 10^11 g cm^-3 <cit.>. So when the density
ρ_bg of the background is smaller than the neutron-drip
density ρ_drip, i.e., ρ_bg <
ρ_drip, we set the proton density and the neutron density
to zero in the outside region while the electron density matches the
value n_e^bg of the density of background electrons,
i.e., n_eo = n_e^bg. When ρ_bg
> ρ_drip both the neutron density and the electron density have to match
their corresponding background densities, i.e., n_eo =
n_e^bg and n_no = n_n^bg, where
n_n^bg is the neutron density in the background. As shown
in Ref. <cit.> there is no proton drip in the systems under
consideration, so we keep the outside proton density value as zero.
In order to set the matching density values for electrons and
neutrons we use the relation between the free neutron density and
the electron density in Section 6 of Ref. <cit.>.
As shown in Refs. <cit.>, the
transition interface that we are interested in appears near the
nuclear saturation density n_nucl = 0.16 fm^-3. In
order to study the compressed, superheavy atoms and
the influence of the electrons and electromagnetic
interaction on the surface properties of the system, we assume at
first the baryon number density in the region near the center to be
the nuclear saturation density (results presented in
Figs. <ref>-<ref>), i.e., n_bb = n_nucl
= 0.16 fm^-3. At the end of this section, we will also
study the influence of baryon number density (results presented in
Fig. <ref> and Table <ref>).
The results of the solutions of two examples are shown in
Fig. <ref> for the case P^F_eb = 0.95
P^F_pb and in Fig. <ref> for the case
P^F_eb = 0.8 P^F_pb, when the density in the
outside (background) region is the neutron-drip density
ρ_bg = ρ_drip≈ 4.3× 10^11 g
cm^-3. We have introduced the notations P^F_eb for
the Fermi momentum of electrons in the region near the center of the
system, and P^F_pb for the Fermi momentum of protons in
the region near the center of the system. It is also worth
mentioning here that the typical mass number of these compressed, superheavy
atoms is ∼ 10^4; e.g., A∼ 35000 and Z/A ∼
0.154 for the case shown in Fig. <ref>, and A∼ 12000 and
Z/A ∼ 0.189 for the case shown in Fig. <ref>, where A is
the total number of nucleons (mass number) and Z is the total number of protons.
The mass numbers of these compressed, superheavy atoms are
much larger than that of ordinary nuclei.
As shown in Fig. <ref>, when the difference between the
electron density and the proton density in the region near the
center of the system (n_pb-n_eb) is small, the fermion-density
and meson-field profiles are similar to their counterparts in the
case of semi-infinite matter (electron density nearly equal to the
proton density in the inside bulk region n_eb≃ n_pb).
Comparing to the results in the case of the electron density being
approximately equal to the proton density in the core bulk region
shown in Ref. <cit.>, the bump of the proton profile is
larger in this case, as expected from the fact that the internal
electric field is less screened than the case of n_eb≃
n_pb. We can also see from Figs. <ref>-<ref>, how the
fermion and meson-field profiles change for increasing charge
separations (n_pb-n_eb).
Using the definitions in Eqs. (<ref>) and (<ref>), we
obtain the surface tensions for compressed, superheavy atoms. The
dependence of the surface tension on the ratio of the electron Fermi
momentum and the proton Fermi momentum in the region near the center
of the system (P^F_eb/P^F_pb) is shown in
Fig. <ref> for the case of the fermion densities and meson
fields tending to be zero in the outside region, and
Fig. <ref> for the case where the density in the outside
(background) region is the neutron-drip density ρ_bg =
ρ_drip≈ 4.3× 10^11 g cm^-3. From the
results, the system is stable with respect to the Bohr-Wheeler
stability condition (<ref>), for all ratios
P^F_eb/P^F_pb we consider. This is the result of
the penetration of the relativistic electrons into the nucleus (see
also Refs. <cit.>). This in principle implies the
possibility of the existence of such kind of compressed, superheavy atoms. As shown in
Fig. <ref>, the nuclear surface tension σ_Ns first
increases and then decreases when the difference between the
electron density and the proton density increases, and the nuclear
surface tension tends to the phenomenological result (∼ 1 MeV
fm^-2) without the presence of electrons in the inside bulk
region studied in nuclear physics <cit.>. There are two
effects which influence the nuclear surface tension
σ_Ns: (I) for n_eb<n_pb the bump of the proton
profile around the nuclear surface changes as shown in
Figs. <ref>–<ref>, and (II) the higher the difference
(n_pb-n_eb) is, the lower the nuclear asymmetry. As a
consequence, the total energy of the system decreases. The
combination of these two effects leads to the results of the nuclear
surface tension σ_Ns shown in Fig. <ref>.
Comparing the results of Fig. <ref> and Fig. <ref>, we
can find that the electrons in the outside region have strong
effects on the surface structure of compressed, superheavy atoms
considered here. The increase of the electron density in the outside
region effectively reduces the Coulomb energy per unit area
σ_Cs, as well as the nuclear surface tension
σ_Ns. This effect is enhanced when increasing
difference between the electron density and the proton density in
the region near the center of the system (n_pb-n_eb), as shown
in Figs. <ref>-<ref>. This effect is mainly due to the
reason that the electrons have a strong influence on the bump on the
profiles, leading to a strong effect on the surface structure and
the surface tensions σ_Ns and σ_Cs.
These results provide evidence of strong effects of the
electromagnetic interaction and electrons on the structure of the
system. This result of the effect due to the electrons in the
outside region as shown by the comparison of Fig. <ref> and
Fig. <ref> is different from the case studied in
Ref. <cit.> where the electron density in the inside bulk
region (n_eb) is nearly equal to the proton one (n_bp). In
the case shown in Ref. <cit.>, the effect of the electrons
in the outside region is small when the density ρ_bg in
the outside region is smaller than the neutron-drip density,
ρ_bg < ρ_drip.
We now turn to study the effect of the free neutrons in the
background (the outside region) on the surface properties of
compressed, superheavy atoms. The dependence of the surface
tension on the density ρ_bg of the background for the
case of P^F_eb = 0.8 P^F_pb is shown in
Fig. <ref>. As shown in Fig. <ref>(c), the
Bohr-Wheeler condition (<ref>) for the instability is reached at
a background density ρ_bg^crit∼ 9.7 ×
10^13 g cm^-3, so the system becomes unstable against fission
when ρ_bg>ρ_bg^crit. This imposes a
physical upper limit to the density of the background for
compressed, superheavy atoms with P^F_eb = 0.8
P^F_pb. This critical background density
ρ_bg^crit is smaller than the one for the case of
the electron density in the inside bulk region being nearly equal to
the proton one (n_eb≃ n_bp) discussed in
Ref. <cit.>. This implies that the difference between the
electron density and the proton density in the region near the
center of the system (n_pb-n_eb) can decrease the stability of
compressed, superheavy atoms.
The results in Fig. <ref> clearly show the strong effect of
the fermions in the outside (background) region on the surface
structure of compressed, superheavy atoms, as we have discussed
above in the comparison of Fig. <ref> and Fig. <ref>.
The Coulomb energy per unit area σ_Cs and the nuclear
surface tension σ_Ns change significantly as changing
the density ρ_bg of the background (the outside region),
in both cases: (I) the density ρ_bg of the background is
higher than the neutron-drip density ρ_drip; (II) the
density ρ_bg of the background is smaller than the
neutron-drip density ρ_drip.
In the previous discussions, we have assumed the baryon
number density in the region near the center to be the nuclear
saturation density n_nucl in symmetric matter, to study the
influence of the electrons on the surface properties of the
transition interface <cit.>. However, the
saturation density in nuclei can be different while changing the
asymmetry parameter (see, e.g., Refs. <cit.>). Therefore, it would be necessary to analyze the
influence of the baryon number density on the surface tensions. The
dependence of the surface tension on the baryon number density in
the region near the center (n_bb) is shown in
Fig. <ref>, for the case of P^F_eb = 0.8
P^F_pb and ρ_bg = ρ_drip≈
4.3× 10^11 g cm^-3. Comparing with the results in
Ref. <cit.>, the dependence of the surface tension on the
baryon number density shown in Fig. <ref> for the case of
compressed, superheavy atoms has a similar behavior as in the case
discussed in Ref. <cit.> for the core-crust interface of
neutron stars (n_eb≈ n_pb). Therefore, we can conclude
that the effects of the baryon number density on the surface
tensions σ_Ns and σ_Cs for the case of
compressed, superheavy atoms are similar to the ones for the case
the core-crust interface of neutron stars (n_eb≈ n_pb)
<cit.>.
Furthermore, we show in Table <ref> the
surface tensions of compressed, superheavy atoms for selected
values of P^F_eb/P^F_pb when a smaller baryon
number density in the region near the center is adopted
(n_bb = 0.8 n_nucl). We can learn from Table
<ref> and Fig. <ref> that the dependence of the
surface tension on the ratio P^F_eb/P^F_pb for
the case of compressed, superheavy atoms with a smaller baryon
number density in the region near the center has a similar behavior
as in the case when the baryon number density in the region near the
center is n_nucl.
It is worth mentioning that the properties of
medium-mass and heavy nuclear clusters embedded in a gas of nucleons
were analyzed in Refs. <cit.>. Those
calculations varied the cluster size and isospin asymmetry over a
large domain of N (neutron number) and Z, covering the whole
periodic table and extending well beyond the neutron drip line.
The nuclear surface energy per surface nucleon
E_sur/A^2/3 obtained in Refs. <cit.> is of the order of 20 MeV (the value depends on the
parameters such as the asymmetry of the nucleus and the density of
the nucleon gas <cit.>). Comparing
the result shown in Table <ref> and the result obtained
in Refs. <cit.>, compressed, superheavy
atoms under consideration have larger nuclear surface
energies per surface nucleon E_sur/A^2/3. This is mainly
due to the fact that the electromagnetic interaction and the
presence of electrons change the proton and neutron density
profiles, as we have discussed in Ref. <cit.>. As we have
shown in Ref. <cit.>, the nuclear surface tension we
obtained for the case without the presence of electrons matches the
result in literature for ordinary nuclear matter. The trend from
compressed, superheavy atoms to ordinary nuclei is also shown in
Figs. <ref>-<ref> and Table <ref> when
reducing the electron density.
§ SUMMARY
Following our study <cit.> of the surface properties of the
core-crust interface of neutron stars with global charge neutrality,
we study the surface properties of compressed, superheavy atoms.
By compressed, superheavy atom we mean an atom composed of
a superheavy nuclear core (superheavy nucleus) with mass number
of the order of 10^4, and degenerate electrons that neutralize the system.
Some electrons penetrate into the superheavy nuclear core and the rest surround it
up to a distance that depends upon the compression level. We have adopted
both the Thomas-Fermi approximation and RMFT approach and taken into
account the strong, weak, and electromagnetic interactions. We
numerically studied the structure of compressed, superheavy atoms,
computed the nuclear surface tension and Coulomb energy of
compressed, superheavy atoms, and analyzed the influence of the
electron component and the background matter on the properties of
these compressed, superheavy atoms.
We assume at
first the baryon number density in the region near the center to be
the nuclear saturation density n_nucl as in
Ref. <cit.>. We show how the nuclear surface tension
σ_Ns and the Coulomb energy per unit area
σ_Cs are drastically affected by the decreasing of
electron to proton density ratio in the region near the center of
compressed, superheavy atoms (see Figs. <ref>, <ref>,
<ref>, and <ref>). This is due to the increasing of
proton repulsion and the decreasing of nuclear asymmetry when
decreasing electron to proton density ratio in the region near the
center of compressed, superheavy atoms. If the charge separation
is small (i.e., the electron density n_eb in the inside region
is slightly smaller than the proton one n_pb; i.e., most of the
electrons penetrate into nuclear cores), the surface properties are
close to the ones discussed in Ref. <cit.> for the
core-crust interface of neutron stars (n_eb≈ n_pb). If
the charge separation is large (i.e., the electron density in the
inside region is much smaller than the proton one n_pb; it means
only some of the electrons penetrate into nuclear cores), the surface
properties approach the results without the presence of electrons
inside nuclei, studied in nuclear physics.
It is also shown (see Figs. <ref>, <ref>, and
<ref>) that electrons in the outside (background) region
have strong effects on the surface properties of compressed,
superheavy atoms. The increase of the electron density in the
outside region effectively reduces the Coulomb energy per unit area
σ_Cs and the nuclear surface tension
σ_Ns even if the density ρ_bg of the
background (the outside region) is smaller than the neutron-drip
density ρ_drip. This effect is enhanced with increasing
difference between the electron density and the proton density in
the region near the center of the system (n_pb-n_eb) (the
inside region). These results show evidence of strong effects of
the electromagnetic interaction and electrons on the structure of
compressed, superheavy atoms.
Based on the above numerical results, we studied the instability of
compressed, superheavy atoms against Bohr-Wheeler surface
deformations. We find that the instability sets in at a critical
density of the background ρ_bg^crit∼ 9.7
× 10^13 g cm^-3 for compressed, superheavy atoms with
P^F_eb = 0.8 P^F_pb. This critical background
density ρ_bg^crit is smaller than the one
obtained in Ref. <cit.>, where the electron density in the
inside bulk region is nearly equal to the proton one (n_eb≃
n_bp). This implies that the stability of the system can be
decreased by increasing the difference between the electron density and
the proton density in the region near the center of compressed, superheavy atoms (n_pb-n_eb).
We also studied the influence of the baryon number
density on the nuclear surface tension and the Coulomb energy per
unit area of compressed, superheavy atoms. The results show that
the effects of the baryon number density on the surface tensions
σ_Ns and σ_Cs for the case of
compressed, superheavy atoms are similar to the ones for the case
the core-crust interface of neutron stars (n_eb≈ n_pb)
<cit.>.
We showed, through the Bohr-Wheeler condition,
the possibility of the existence of compressed, superheavy
atoms with A of the order of 10^4. The mass number of
such “exotic" neutron-rich nuclei is about one order of
magnitude larger than that of the usual neutron-rich nuclei, whose mass
numbers are typically up to the order of 10^3, studied in various
models such as pasta structures (see, e.g.,
Refs. <cit.>) and heavy
nuclear clusters (see, e.g., Refs. <cit.>).
Such compressed, superheavy atoms could possibly appear in
the high-density region of the neutron star crust or in the r-processes in gamma-ray bursts,
since their existence is allowed by the Bohr-Wheeler condition, as discussed in the present paper.
The results of this work show the effects of the
electrons and electromagnetic interaction on the surface properties
of the system composed of degenerate neutrons, protons, and
electrons fulfilling global charge neutrality. This would give us a
further understanding of the core-crust interface of our new
structure of neutron stars analyzed in Refs. <cit.>.
To end this article, we would like to mention that another kind of
instability in nuclear matter, corresponding to the transition
density from nonuniform to uniform nuclear matter, is widely
discussed in the literature (see, e.g., Refs. <cit.>). When the density reaches
this transition density, the pasta structures become unstable and
are dissolved into uniform matter. The transition density from
nonuniform to uniform nuclear matter is around ∼ 0.08
fm^-3, and strongly depends on the approach used to obtain it; it
can vary from ∼ 0.1 fm^-3 to ∼ 0.05
fm^-3 for different parameters of the nuclear model (see, e.g.,
Refs. <cit.>).
This transition density from nonuniform to uniform nuclear matter is
in the same order of the instability (critical) density obtained in
the present article (baryon number density ∼ 0.05
fm^-3 for the case of P^F_eb = 0.8
P^F_pb) and in Ref. <cit.> (baryon number density
∼ 0.07 fm^-3 for the case of n_eb≃ n_bp
presented in Ref. <cit.>). It would be interesting to
compare and contrast the instability mechanism analyzed in the
present article and in Ref. <cit.> with the one of the
transition density from nonuniform to uniform nuclear matter in the
literature, and analyze the difference and links between these two
instability mechanisms. However, these studies are out of the scope
of this work and we leave these studies for future work.
Yuan-Bin Wu acknowledges the support given by the Erasmus Mundus
Joint Doctorate Program under the Grant Number 2011-1640 from the
EACEA of the European Commission, during which part of this work was
developed.
99
Rueda1 J. A. Rueda, R. Ruffini, and S.-S. Xue, Nucl. Phys. A 872, 286 (2011).
Tolman1 R. C. Tolman, Phys. Rev. 55, 364 (1939).
Oppenheimer1 J. R. Oppenheimer and G. Volkoff, Phys. Rev. 55, 374 (1939).
RuedaPLB M. Rotondo, J. A. Rueda, R. Ruffini, and S.-S. Xue, Phys. Lett. B 701, 667 (2011).
Duerr1 H. P. Duerr, Phys. Rev. 103, 469 (1956).
Miller1 L. D. Miller and A. E. S. Green, Phys. Rev. C 5, 241 (1972).
Walecka1 J. D. Walecka, Ann. Phys. 83, 491 (1974).
Boguta1 J. Boguta and J. Rafelski, Phys. Lett. B 71, 22
(1977).
Boguta2 J. Boguta and A. R. Bodmer, Nucl. Phys. A 292, 413 (1977).
Boguta3 J. Boguta and H. Stocker, Phys. Lett. B 120, 289 (1983).
Boguta4 J. Boguta and S. A. Moszkowski, Nucl. Phys. A 403, 445 (1983).
Boguta5 J. Boguta, Nucl. Phys. A 501, 637 (1989).
Serot1 B. D. Serot, Rept. Prog. Phys. 55, 1855 (1992).
Ring1 P. Ring, Prog. Part. Nucl. Phys. 37, 193 (1996).
Bender1 M. Bender, P.-H. Heenen, and P.-G. Reinhard, Rev. Mod. Phys. 75, 121 (2003).
Maruyama1 T. Maruyama, T. Tatsumi, D. N. Voskresensky, T. Tanigawa, and S. Chiba,
Phys. Rev. C 72, 015802 (2005).
Oyamatsu2007 K. Oyamatsu and K. Iida, Phys. Rev. C 75, 015801 (2007).
Avancini1 S. S. Avancini, D. P. Menezes, M. D. Alloy,
J. R. Marinelli, M. M. W. Moraes, and C. Providência, Phys. Rev C
78, 015802 (2008).
Okamoto1 M. Okamoto, T. Maruyama, K. Yabana, and
T. Tatsumi, Phys. Lett. B 713, 284 (2012).
Grill1 F. Grill, C. Providência, and S. S. Avancini, Phys. Rev. C 85, 055808 (2012).
Bao2014 S. S. Bao and H. Shen, Phys. Rev. C 89, 045807
(2014).
Bao2015 S. S. Bao and H. Shen, Phys. Rev. C 91, 015807
(2015).
Newton2009 W. G. Newton and J. R. Stone, Phys. Rev. C 79, 055801 (2009).
Schuetrumpf2015 B. Schuetrumpf and W. Nazarewicz, Phys. Rev. C 92, 045806 (2015).
Sagert2016 I. Sagert, G. I. Fann, F. J. Fattoyev, S. Postnikov, and C. J. Horowitz, Phys. Rev. C 93, 055801 (2016).
Pais2016 H. Pais and C. Providência, Phys. Rev. C 94, 015808 (2016).
Belvedere1 R. Belvedere, D. Pugliese, J. A. Rueda, R. Ruffini, and S.-S. Xue, Nucl.
Phys. A 883, 1 (2012).
Haensel1 P. Haensel, A. Y. Potekhin, and D. G. Yakovlev, Neutron Stars 1:
Equation of State and Structure, Springer-Verlag, New York, 2007.
Baym1 G. Baym, H. A. Bethe, and C. J. Pethick, Nucl. Phys. A 175, 225 (1971).
Baym2 G. Baym, C. Pethick, and P. Sutherland, Astrophysical Journal 170, 299 (1971).
Brack1 M. Brack, C. Guet, and H.-B. Håkansson, Phys. Rep. 123, 275 (1985).
Sharma1 M. M. Sharma, S. A. Moszkowski, and P. Ring, Phys. Rev. C 44, 2493 (1991).
Eiff1 D. Von-Eiff, J. M. Pearson, W. Stocker, and M. K. Weigel, Phys. Lett. B
324, 279 (1994).
Eiff2 D. Von-Eiff, W. Stocker, and M. K. Weigel, Phys. Rev. C
50, 1436 (1994).
Eiff3 D. Von-Eiff, H. Freyer, W. Stocker, and M. K. Weigel, Phys. Lett. B
344, 11 (1995).
Centelles1 M. Centelles, M. Del Estal, and X. Viñas, Nucl. Phys. A 635, 193 (1998).
Estal2 M. Del Estal, M. Centelles, and X. Viñas, Nucl. Phys. A
650, 443 (1999).
Patra1 S. K. Patra, M. Centelles, X. Viñas, and M. Del Estal, Phys. Rev. C
65, 044304 (2002).
Danielewicz1 P. Danielewicz and J. Lee, Nucl. Phys. A 818, 36 (2009).
Christiansen1 M. B. Christiansen, N. K. Glendenning,
and J. Schaffner-Bielich, Phys. Rev. C 62, 025804 (2000).
Alford1 M. Alford, K. Rajagopal, S. Reddy, and F. Wilczek,
Phys. Rev. D 64, 074017 (2001).
RRWX2014 J. A. Rueda, R. Ruffini, Y.-B. Wu, and S.-S. Xue,
Phys. Rev. C 89, 035804 (2014).
Wu2014 Y.-B. Wu, J. Korean Phys. Soc. 65, 850 (2014).
Bohr1 N. Bohr and J. A. Wheeler, Phys. Rev. 56, 426 (1939).
Rotondo2011a M. Rotondo, R. Ruffini, S.-S. Xue, and V. Popov, Int. J. Mod. Phys.
D 20, 1995 (2011).
Rotondo2011b M. Rotondo, J. A. Rueda, R. Ruffini, and S.-S. Xue, Phys. Rev.
C 83, 045805 (2011).
Boguta6 J. Boguta, Phys. Lett. B 106, 255 (1981).
Centelles1990 M. Centelles, M. Pi, X. Viñas, F. Garcias, and
M. Barranco, Nucl. Phys. A 510, 397 (1990).
Shen1998 H. Shen, H. Tokia, K. Oyamatsuc, and
K. Sumiyoshi, Nucl. Phys. A 637, 435 (1998).
Avancini2009 S. S. Avancini, L. Brito, J. R. Marinelli, D. P. Menezes, M. M. W. de Moraes,
C. Providência, and A. M. Santos, Phys. Rev. C 79, 035804
(2009).
Lalazissis1 G. A. Lalazissis, J. König, and P. Ring, Phys. Rev. C 55, 540 (1997).
Tamii1 A. Tamii, et al., Phys. Rev. Lett. 107, 062502 (2011).
Papakonstantinou2013 P. Papakonstantinou, J. Margueron, F. Gulminelli, and
Ad. R. Raduta, Phys. Rev. C 88, 045805 (2013).
Aymard2014 F. Aymard, F. Gulminelli, and J. Margueron,
Phys. Rev. C 89, 065807 (2014).
Horowitz2001 C. J. Horowitz and J. Piekarewicz, Phys. Rev. Lett. 86, 5647 (2001).
Li2008 B.-A. Li, L.-W. Chen, and C. M. Ko, Phys. Rep. 464, 113 (2008).
|
http://arxiv.org/abs/1701.07492v1 | 20170125213044 | Path abstraction | [
"Steve Huntsman"
] | math.CO | [
"math.CO"
] |
steve.huntsman@baesystems.com
BAE Systems, 4301 North Fairfax Drive, Arlington, Virginia 22203
Given the set of paths through a digraph, the result of uniformly deleting some vertices and identifying others along each path is coherent in such a way as to yield the set of paths through another digraph, called a path abstraction of the original digraph. The construction of path abstractions is detailed and relevant basic results are established; generalizations are also discussed. Connections with random digraphs are also illustrated.
Path abstraction
Steve Huntsman
December 30, 2023
=====================
§ INTRODUCTION
Each path in a digraph D corresponds to a word over the alphabet V(D). Given a subset U ⊆ V(D), consider the map ∇_U that deletes elements of U from such words. The question naturally arises: does the image of ∇_U correspond to the set of paths in some other digraph?
In this paper, we address this and related questions. The practical motivation is that we have a complicated structure such as a digraph representing the possible flow of some quantity through a system, and we wish to abstract away irrelevant details while efficiently preserving paths in the structure. For example, we might consider the flow of data <cit.>, taint <cit.> or provenance <cit.> in computer programs or systems. In general, we cannot assume that the structure has a hierarchical or modular organization, and so clustering or decomposition techniques do not solve the task at hand. Instead, we introduce a natural construction (originally proposed by Mukesh Dalal) that can be used in many circumstances to delete irrelevant vertices and identify related vertices (and though of course clustering and decomposition techniques can have something to say in the determination of these vertices, this issue will not be considered here). Subsequently, we can reason about paths in this construction rather than in its larger antecedent structure.
The paper is organized as follows: in <ref>, we introduce basic notation and definitions; <ref> is a sort of appetizer from the point of view of vertices rather than paths; <ref> introduces the key constructions for digraphs, which are generalized in <ref> to weighted digraphs. Random digraphs are considered in <ref>. So-called temporal networks that are essentially time series of arcs are considered in <ref>. Finally, appendices give alternative proofs of key results for digraphs and briefly indicate the potential relevance of our constructions to renormalization and percolation on random digraphs.
§ BASIC NOTATION AND DEFINITIONS
In this paper we generally follow (or at least adapt in an obvious way) the definitions and notation of <cit.> without further comment. In particular, digraphs are loopless, so digraph morphisms are unambiguously defined in terms of their action on vertices. Also, A indicates a set of arcs; the adjacency matrix corresponding to a colored/labeled digraph, directed multigraph, or weighted digraph D is μ_D. We assume that μ_D takes values in an appropriate commutative semiring, e.g., the Boolean semiring for digraphs. We use + and · to denote both ordinary arithmetic and generic semiring operations, while denotes either logical disjunction or maximum depending on context; similarly, denotes either logical conjunction or minimum.
[
Examples of semirings are the Boolean semiring ({0,1}, , ) for digraphs and the real semiring (ℝ, +, ·) for weighted digraphs. Other examples of potential relevance are the tropical semiring (ℝ∪∞, ∧, +) <cit.>, the fuzzy semiring ([0,1], ∨, ∧), the bottleneck semiring (ℝ, ∨ ,∧), the log semiring (ℝ∪∞, +_log, +) with x +_log y := -log(exp(-x)+exp(-y)), the Viterbi semiring ([0,1], ∨, ·), etc. For general background on semirings, see <cit.>. There is no truly standard definition of a semiring, and the same appears to be true of related notions. Rather than invoking necessarily idiosyncratic terminology to remove any ambiguity here, we (perhaps atypically) choose to rely on common sense. For instance, we will require a zero element to indicate the absence of arcs, but this need not be a neutral element for the additive operation. Thus, e.g., we can consider the restriction of the usual tropical semiring to [0,∞).
]
For a digraph D = (V,A) and vertex coloring or labeling ℓ : V →Λ, write D(ℓ) := (ℓ,A) for the corresponding colored digraph, omitting the dependence on ℓ if the desire exists and context allows. Without loss of generality, we shall assume that V(D) = [n] ≡{1,…,n} for some n ∈ℕ.
As a shorthand, for U ⊆ V(D), we shall write D / U ≡ D / D ⟨ U ⟩ for the vertex contraction of U in D. That is, we set V(D / U) := (V(D) ∖ U) ∪{U} and for x,y ∈ V(D) ∖ U,
μ_D / U(x,{U}) := ∑_u ∈ Uμ_D(x,u),
μ_D / U({U},x) := ∑_u ∈ Uμ_D(u,x),
μ_D / U(x,y) := μ_D(x,y).
Similarly, for disjoint subsets {U_j}_j ∈ [m] of V(D), D / {U_1,…,U_m} := (… (D / U_1)…) / U_m is well-defined.
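Over the Boolean semiring the contraction is immediate in adjacency-matrix terms; a minimal sketch (0-based vertices, with the new vertex {U} appended last):

import numpy as np

# Sketch: contract the vertex subset U of a loopless digraph with Boolean
# adjacency matrix M; the semiring sums become logical ORs, and arcs
# internal to U are discarded since digraphs are loopless.
def contract(M, U):
    U = sorted(set(U))
    keep = [v for v in range(M.shape[0]) if v not in U]
    n = len(keep)
    out = np.zeros((n + 1, n + 1), dtype=bool)
    out[:n, :n] = M[np.ix_(keep, keep)]
    out[:n, n] = M[np.ix_(keep, U)].any(axis=1)   # arcs x -> {U}
    out[n, :n] = M[np.ix_(U, keep)].any(axis=0)   # arcs {U} -> x
    return out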
§ VERTEX ABSTRACTION
A subset L = {L_j}_j ∈ [m]⊆Λ of colors determines a partial partition π_(ℓ,L) of [n] (i.e., a partition of a subset of [n]) as follows: the blocks π_(ℓ,L)^(j) of π_(ℓ,L) are simply the preimages ℓ^-1(L_j) for j ∈ [m]. Conversely, any partial partition of [n] is nonuniquely determined by some π_(ℓ,L), but we can choose a canonical representative for ℓ that makes the correspondence between colorings and partial partitions bijective.
[
Given ℓ, consider [ℓ] := { f : dom f = [n] ∧π_(f,f[n]) = π_(ℓ,Λ)}. By construction, [ℓ] is the equivalence class of functions yielding the same partition of [n]. We can construct a canonical representative ℓ' of [ℓ] as follows: let ℓ' : [n] → [|ℓ([n])|] be such that minℓ'^-1(j) < minℓ'^-1(j+1) for j ∈ [|ℓ([n])|-1]. For example, if (using a standard notation for the Cartesian product of functions for brevity) ℓ^× 5(1,2,3,4,5) = (4,2,0,2,0), the corresponding canonical representative ℓ' is defined by ℓ'^× 5(1,2,3,4,5) = (1,2,3,2,3). This canonical representative is essentially the so-called restricted growth string corresponding to π_(ℓ,Λ) <cit.>.
]
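Computing the canonical representative takes a single pass over the coloring; a minimal sketch:

# Sketch: canonical representative (restricted growth string) of a coloring,
# relabeling colors by first appearance, e.g. (4, 2, 0, 2, 0) -> (1, 2, 3, 2, 3).
def canonical(ell):
    seen = {}
    return tuple(seen.setdefault(c, len(seen) + 1) for c in ell)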
Henceforth we shall assume without loss of generality that ℓ is canonical (and so also surjective) unless otherwise specified.
As usual, let Π_n denote the lattice of partitions of [n]. Following <cit.>, we consider the lattice Π_≤ n of partial partitions ordered by refinement, i.e., for π, π' ∈Π_≤ n, we have π≤π' iff each block of π is contained in a block of π'.
[
NB. The lattice Π_≤ n has a different partial order than the Dowling lattice Q_n(1) ≅Π_n+1 detailed in <cit.>: by way of comparison, in Q_n, π≤π' iff each block of π' is a union of blocks of π.
]
For economy of notation, we shall write |π| for the number of blocks of π∈Π_≤ n.
Define F_≤ n : Π_≤ n→Π_n+1 as follows: if σ = σ^(1) | … | σ^(|σ|)∈Π_≤ n and [n+1] ∖⋃_j=1^|σ|σ^(j) = {s_k}_k ∈ [r], then F_≤ n(σ) := σ^(1) | … | σ^(|σ|) | {s_1} | … | {s_r}. Similarly, define F_n+1 : Π_n+1→Π_≤ n as follows: if τ = τ^(1) | … | τ^(|τ|)∈Π_n+1 and n+1 ∈τ^(u_+), then F_n+1(τ) := τ^(1) | … | τ^(u_+)∖{n+1} | … | τ^(|τ|). An important aspect of the relationship between Π_≤ n and Π_n+1 is captured by the following
Proposition. The pair (F_≤ n, F_n+1) is a (monotone) Galois connection.
Define the support supp π of π∈Π_≤ n as the union of its blocks. Because supp π_(ℓ,L) = ℓ^-1(L), the induced colored digraph D ⟨supp π_(ℓ,L)⟩ is well-defined and π_(ℓ,L) = π_(ℓ,L)^(1) | … | π_(ℓ,L)^(|π_(ℓ,L)|) determines a colored digraph via
D_⟨ℓ,L ⟩ := D ⟨supp π_(ℓ,L)⟩ / {π_(ℓ,L)^(1), …, π_(ℓ,L)^(|π_(ℓ,L)|)}.
The intuition behind (<ref>) is simply that vertices in the jth block π_(ℓ,L)^(j) are identified (note that the order in which these identifications take place is immaterial, and that the resulting coloring will generally not be canonical).
Definition. Call D_⟨ℓ,L ⟩ the vertex abstraction of D with respect to L.
Example. Consider the following colored digraph D:
[scale=1.25,->,>=stealth',shorten >=1pt]
[draw,circle,minimum size=5mm] (v01) at (0,0) 1;
[draw,circle,minimum size=5mm] (v02) at (1,0) 2;
[draw,circle,minimum size=5mm] (v03) at (2,0) 1;
[draw,circle,minimum size=5mm] (v04) at (3,0) 3;
/→in
v01/v02, v02/v03, v03/v04
() to (→);
We have the following table:
1.5
L ∅ {1} {2} {3} {1,2} {1,3} {2,3} {1,2,3}
D_⟨ℓ,L ⟩
[scale=1.25,->,>=stealth',shorten >=1pt]
[draw,color=white,circle,minimum size=5mm] (v00) at (0,0.2) ;
[draw,circle,minimum size=5mm] (v01) at (0,0) 1;
[scale=1.25,->,>=stealth',shorten >=1pt]
[draw,circle,minimum size=5mm] (v02) at (1,0) 2;
[scale=1.25,->,>=stealth',shorten >=1pt]
[draw,circle,minimum size=5mm] (v03) at (2,0) 3;
[scale=1.25,->,>=stealth',shorten >=1pt]
[draw,circle,minimum size=5mm] (v01) at (0,0) 1;
[draw,circle,minimum size=5mm] (v02) at (1,0) 2;
/→in
v01/v02, v02/v01
() to (→);
[scale=1.25,->,>=stealth',shorten >=1pt]
[draw,circle,minimum size=5mm] (v01) at (0,0) 1;
[draw,circle,minimum size=5mm] (v04) at (1,0) 3;
/→in
v01/v04
() to (→);
[scale=1.25,->,>=stealth',shorten >=1pt]
[draw,circle,minimum size=5mm] (v02) at (0,0) 2;
[draw,circle,minimum size=5mm] (v04) at (1,0) 3;
[scale=1.25,->,>=stealth',shorten >=1pt]
[draw,circle,minimum size=5mm] (v02) at (0,0) 2;
[draw,circle,minimum size=5mm] (v01) at (1,0) 1;
[draw,circle,minimum size=5mm] (v03) at (2,0) 3;
/→in
v01/v02, v02/v01, v01/v03
() to (→);
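The table can be reproduced mechanically: over the Boolean semiring, (<ref>) says that the vertices of D_⟨ℓ,L ⟩ are the colors in L, and there is an arc between two distinct colors iff some arc of D joins correspondingly colored vertices. A minimal sketch checking the row L = {1,2}:

import numpy as np

# Sketch: vertex abstraction over the Boolean semiring.
def vertex_abstraction(M, ell, L):
    Ls = sorted(L)
    cls = {c: [v for v in range(len(ell)) if ell[v] == c] for c in Ls}
    out = np.zeros((len(Ls), len(Ls)), dtype=bool)
    for a, c in enumerate(Ls):
        for b, d in enumerate(Ls):
            if c != d:
                out[a, b] = M[np.ix_(cls[c], cls[d])].any()
    return out

# The example digraph: the colored path 1 -> 2 -> 1 -> 3 on vertices 0..3.
M = np.zeros((4, 4), dtype=bool)
M[0, 1] = M[1, 2] = M[2, 3] = True
print(vertex_abstraction(M, (1, 2, 1, 3), {1, 2}))   # arcs 1 -> 2 and 2 -> 1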
It is easy to see that the map π_(ℓ,·) : 2^Λ→Π_≤ n on the subset lattice 2^Λ is monotone, i.e., if L ⊆ L', then π_(ℓ,L)≤π_(ℓ,L'). Furthermore, there is a surjective morphism χ_ℓ,L',L : D_⟨ℓ,L ⟩→ D_⟨ℓ,L' ⟩ of colored digraphs defined by contracting the preimages of each element of L' ∖ L, and χ_ℓ,L”,L'∘χ_ℓ,L',L = χ_ℓ,L”,L. This and some definition-checking yields the following
Lemma. When endowed with the obvious refinement morphisms and the χ_ℓ,·,·, respectively, {π_(ℓ,L)}_L ⊆Λ and {D_⟨ℓ,L ⟩}_L ⊆Λ are both categories. Furthermore, π_(ℓ,·) : 2^Λ→{π_(ℓ,L)}_L ⊆Λ and D : {π_(ℓ,L)}_L ⊆Λ→{D_⟨ℓ,L ⟩}_L ⊆Λ are both functors that yield equivalences of categories.
In particular, the lattice structure of 2^Λ is duplicated in {π_(ℓ,L)}_L ⊆Λ and (since we are considering colored digraphs) {D_⟨ℓ,L ⟩}_L ⊆Λ. Another noteworthy consequence of this lemma is that the pullback and pushout squares for set inclusions have analogues for partial partitions and vertex abstractions. For example,
Proposition. Let L_1, L_2 ⊆ L. The pullback of χ_ℓ,L,L_1 and χ_ℓ,L,L_2 is given by χ_ℓ,L_1,L_1 ∩ L_2 and χ_ℓ,L_2,L_1 ∩ L_2.
§ DETOURS, BYPASSES, AND PATH ABSTRACTIONS
For v ∈ V(D), define
V(D)_v^- := { x ∈ V(D) ∖{v} : μ_D(x,v) ≠ 0 ∧μ_D(v,x) = 0 };
V(D)_v^± := { x ∈ V(D) ∖{v} : μ_D(x,v) ≠ 0 ∧μ_D(v,x) ≠ 0 };
V(D)_v^+ := { x ∈ V(D) ∖{v} : μ_D(x,v) = 0 ∧μ_D(v,x) ≠ 0 };
V(D)_v^0 := { x ∈ V(D) ∖{v} : μ_D(x,v) = 0 ∧μ_D(v,x) = 0 },
noting that {{v}, V(D)_v^-, V(D)_v^±, V(D)_v^+, V(D)_v^0 } forms a partition of V(D). In particular, P(D)_v := V(D)_v^- ∪ V(D)_v^± is the set of predecessors of v, S(D)_v := V(D)_v^±∪ V(D)_v^+ is the set of successors of v, and P(D)_v ∩ S(D)_v = V(D)_v^±. (See figure <ref>.)
Define the detour at v by D ↑ v by V(D ↑ v) := V(D) and
μ_D ↑ v(x,y) :=
0 if (x,y) ∈ [P(D)_v ×{v}] ∪ [{v}× S(D)_v];
1 if (x,y) ∈ [P(D)_v × S(D)_v] ∖Δ(V(D));
μ_D(x,y) otherwise.
(Here as usual Δ denotes the diagonal functor.) That is, the detour at v is formed by deleting all arcs involving v and inserting arcs from every predecessor of v to every distinct successor of v. Finally, define the bypass at v by D ⇈ v := (D ↑ v) - v. (See figure <ref>.) By construction we have the following
Proposition. If u, v, and w are distinct elements of V(D) and there is a path in D from u to w, then there are also paths in both D ↑ v and D ⇈ v from u to w.
Lemma. If D ⇈ v is strongly connected and P(D)_v, S(D)_v ∅, then D is strongly connected.
Proof. Let x,y ∈ [n] ∖{v} and let γ(x,y) be a path in D ⇈ v from x to y. If γ(x,y) does not contain an arc of the form (u,w) for some u ∈ P(D)_v and w ∈ S(D)_v, then it lifts to a path from x to y in D, so assume otherwise. Now γ(x,y) is a concatenation of paths of the form γ(x,u) γ(u,w) γ(w,y), which corresponds to a path concatenation of the form γ(x,u) γ(u,v) γ(v,w) γ(w,y) in D. Finally, since P(D)_v and S(D)_v are nonempty, any u ∈ P(D)_v and w ∈ S(D)_v furnish paths into and out of v itself, so D is strongly connected.
It is easy to see that D ↑ v and D ⇈ v both contain a complete digraph with vertex set V(D)_v^±: thus if this set has more than a single element, the detour and bypass are necessarily cyclic, and their transitive reductions may not be unique. Similarly, there are cases where D ↑ v has multiple Hamilton cycles (which are also transitive reductions). Therefore in general there is no unique minimal digraph with the path preservation property of the preceding proposition. On the other hand, we have the following
Proposition. If D is acyclic (so that in particular V(D)_v^±≡∅), then so are D ↑ v and D ⇈ v. Furthermore, in this event the transitive reductions of D ↑ v and D ⇈ v are the unique minimal digraphs on the respective vertex sets V(D) and V(D) ∖{v} such that if u, v, and w are distinct elements of V(D) and there is a path in D from u to w, then there are also paths in both D ↑ v and D ⇈ v from u to w.
There are cases where D ↑ v and D ⇈ v are their own transitive reductions: e.g., consider a digraph D with only the three arcs (u,v), (v,w), and (u,w): the only arc in D ↑ v is (u,w). With this in mind, there is a sense in which the detour and bypass can be considered optimal (though typically not minimal) constructions with respect to path preservation in generic digraphs.
Lemma. (D ↑ v) ↑ w = (D ↑ w) ↑ v.
Proof. See <ref> for a naive case analysis. We will prove a more general result in <ref> in a much more elegant and concise manner.
Theorem. For U = {u_j}_j ∈ [m]⊆ V(D) the obvious generalizations of detour D ↑ U := ( … (D ↑ u_1) …↑ u_m-1) ↑ u_m and bypass D ⇈ U := (D ↑ U) - U are well-defined.
Surprisingly, the only reference we could find that even suggests the detour/bypass constructions is <cit.>, which seems to take the preceding theorem for granted.
Note that the construction of D ↑ U is not so simple as removing all arcs involving a vertex in U, then inserting arcs from every external predecessor of a vertex in U to every distinct external successor of a vertex in U. For example, consider D given by the path of length 3, i.e. D = 1 → 2 → 3 → 4, and let U = {1,4} be the set whose members are the source and target vertices of D. Then D ⇈ U is the single arc 2 → 3, while the naive procedure mentioned just above yields the 2-cycle 2 → 3 → 2. Another example is shown in figure <ref>.
Corollary. If U ⊆ V(D) and D is acyclic, then so are D ↑ U and D ⇈ U. Furthermore, in this event the transitive reductions of D ↑ U and D ⇈ U are the unique minimal digraphs on the respective vertex sets V(D) and V(D) ∖ U such that if v,w ∈ V(D) ∖ U are distinct and there is a path in D from v to w, then there are also paths in both D ↑ U and D ⇈ U from v to w.
Recall that for a directed pseudograph or quiver Q, the free category F(Q) is the category with objects given by vertices of Q and morphisms given by paths in Q, with composition given by path concatenation.
Proposition. For U ⊆ V(D), there is a functor F(D) → F(D ↑ U) defined on objects as the identity map and on morphisms as the map which deletes elements of U from paths.
[
As suggested in <ref>, one might consider using this proposition as the foundation of the ⇈ operator: i.e., given a language ℒ over [n] whose elements correspond to paths in some digraph D, define ℒ⇈ U to be the language over [n] ∖ U obtained by deleting all symbols in U from elements of ℒ. However, it is not obvious from this essentially automata-theoretical perspective that the elements of ℒ⇈ U themselves correspond to paths in some (here) notional digraph D ⇈ U. It seems that a path coherence result of the type needed to establish such a fact would be tantamount to the preceding theorem.
]
Example. The digraphs D and D ⇈{5,7} depicted in figure <ref> are DAGs. A quick calculation shows that each has 7 paths from a source to a target: the correspondence between them is shown in the table below.
path in D                  corresponding path in D ⇈ {5,7}
3 → 5 → 2                  3 → 2
3 → 5 → 1 → 7 → 2          3 → 1 → 2
3 → 5 → 1 → 4 → 7 → 2      3 → 1 → 4 → 2
3 → 8                      3 → 8
3 → 5 → 6 → 8              3 → 6 → 8
3 → 5 → 1 → 7 → 8          3 → 1 → 8
3 → 5 → 1 → 4 → 7 → 8      3 → 1 → 4 → 8
Example. Consider the digraph D in figure <ref>. There are four interesting cycles: 1 → 3 → 1 ≡ 131 (omitting arrows for concision), 1231, 1341, and 12341. These are respectively mapped under ⇈ 3 to 1 (not a cycle!), 121, 141, and 1241. Subsequently contracting vertices 2 and 4 à la (<ref>) maps the remaining cycles in turn to the single cycle 121. This extends to a mapping on all paths, e.g. the path 1231341234 maps under ⇈ 3 to 1214124 and subsequently under the contraction of vertices 2 and 4 to 121212.
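The action on paths in this example is easy to mechanize: bypassing a vertex deletes its symbol, and contracting a block relabels its symbols and drops the loops that arise. A small Python sketch (paths encoded as strings of vertex labels; the names are ours):

def bypass_path(path, removed):
    # Bypassing vertices deletes their symbols from a path.
    return [x for x in path if x not in removed]

def contract_path(path, block, label):
    # Contracting a block relabels its symbols, then drops the loops
    # (consecutive repeats) that the relabeling creates.
    relabeled = [label if x in block else x for x in path]
    out = [relabeled[0]]
    for x in relabeled[1:]:
        if x != out[-1]:
            out.append(x)
    return out

p = bypass_path(list("1231341234"), {"3"})   # -> 1214124
p = contract_path(p, {"2", "4"}, "2")        # -> 121212
print("".join(p))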
From the proposition we see that for L ⊆ L' there is a surjective morphism ϕ_ℓ,L,L' : F(D ↑ℓ^-1(L)) → F(D ↑ℓ^-1(L')) defined by sending vertices of D to themselves and deleting elements of L' ∖ L from paths. Therefore we get the following
Lemma. When endowed with the morphisms ϕ_ℓ,·,·, {F(D ↑ℓ^-1(L))}_L ⊆Λ is a category and the map F_D,ℓ : L ↦ F(D ↑ℓ^-1(L)) is a functor yielding an equivalence of categories.
We close this section by showing that detour/bypass and contraction operations on disjoint vertex sets commute.
Lemma. If u, v, w ∈ V(D) are distinct, then (D ↑ u) / {v,w} = (D/{v,w}) ↑ u.
Proof. See <ref> for a naive case analysis. We will prove a more general result in <ref> in a much more elegant and concise manner.
Theorem. The detour and bypass operations commute with disjoint graph contractions: if U, V_1, …, V_m ⊆ V(D) are disjoint, then (D ↑ U) / {V_1,…,V_m} = (D / {V_1,…,V_m}) ↑ U, and similarly for bypasses.
Thus we can freely make the following
Definition. The path abstraction of D with respect to π∈Π_≤ |V(D)| is
D ⇈π := (D ⇈ (V(D) ∖supp π)) / {π^(1),…,π^(|π|)} = (D / {π^(1),…,π^(|π|)}) ⇈ (V(D) ∖supp π).
We shall write D ⇈ (ℓ,L) := D ⇈π_(ℓ,L).
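Concretely, the path abstraction can be computed by composing the two operations. Here is a naive Python sketch over Boolean adjacency matrices (the names and encoding are ours; the partial partition is given as a list of disjoint vertex blocks, and vertices outside its support are bypassed):

import numpy as np

def bypass_all(M, U):
    # Bypass the vertices of U one at a time; by the theorem above the
    # order is immaterial for (Boolean) digraphs.
    M = M.copy()
    for v in U:
        M |= np.outer(M[:, v], M[v, :])
        np.fill_diagonal(M, False)   # loops are always excluded
        M[v, :] = False
        M[:, v] = False
    keep = sorted(set(range(M.shape[0])) - set(U))
    return M[np.ix_(keep, keep)], keep

def contract(B, blocks, keep):
    # Contract each block to a single vertex: an arc between two blocks
    # exists iff some arc between their members does.
    idx = {v: i for i, v in enumerate(keep)}
    out = np.zeros((len(blocks), len(blocks)), dtype=bool)
    for a, ba in enumerate(blocks):
        for b, bb in enumerate(blocks):
            if a != b:
                out[a, b] = B[np.ix_([idx[v] for v in ba],
                                     [idx[v] for v in bb])].any()
    return out

def path_abstraction(M, blocks):
    # D ⇈ pi = (D ⇈ (V(D) - supp pi)) / {pi^(1), ..., pi^(|pi|)}.
    supp = set().union(*map(set, blocks))
    B, keep = bypass_all(M, set(range(M.shape[0])) - supp)
    return contract(B, blocks, keep)

# The cycle example above (arcs read off its four cycles), with vertices
# 1, 2, 3, 4 encoded as 0, 1, 2, 3:
M = np.zeros((4, 4), dtype=bool)
for x, y in [(0, 2), (2, 0), (0, 1), (1, 2), (2, 3), (3, 0)]:
    M[x, y] = True
# pi = 1 | 24: bypass vertex 3, contract {2,4}; all cycles collapse to 121.
print(path_abstraction(M, [[0], [1, 3]]).astype(int))   # [[0 1], [1 0]]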
Unfortunately, for L ⊆ L' there is not a simple relationship between D ⇈ (ℓ, L) and D ⇈ (ℓ, L'). To see this, let L ⊂Λ, let ℓ_0 ∈Λ∖ L, and let L' := L ∪{ℓ_0}. If π_(ℓ,L) = π_(ℓ,L)^(1) | … | π_(ℓ,L)^(|π_(ℓ,L)|) and π_(ℓ,L') = π_(ℓ,L')^(1) | … | π_(ℓ,L')^(|π_(ℓ,L')|) = π_(ℓ,L)^(1) | … | π_(ℓ,L)^(|π_(ℓ,L)|) | π_(ℓ,L')^(|π_(ℓ,L)|+1), defining π_(ℓ,L')^(0) := V(D) ∖⋃_j ∈ [|π_(ℓ,L)|+1]π_(ℓ,L')^(j) yields a partition {π_(ℓ,L')^(j)}_j = 0^|π_(ℓ,L)|+1 of V(D), and it is readily seen that D ⇈ (ℓ, L) = ((D ⇈π_(ℓ,L)^(0)) / {π_(ℓ,L)^(1),…,π_(ℓ,L)^(|π_(ℓ,L)|)}) ⇈π_(ℓ,L')^(|π_(ℓ,L)|+1) while D ⇈ (ℓ,L') = ((D ⇈π_(ℓ,L)^(0)) / {π_(ℓ,L)^(1),…,π_(ℓ,L)^(|π_(ℓ,L)|)}) / {π_(ℓ,L')^(|π_(ℓ,L)|+1)}. That is, the essential distinction between D ⇈ (ℓ,L) and D ⇈ (ℓ,L') is that the former has a bypass while the latter has a contraction. These two operations are typically not readily comparable, and so we defer the quest for a structure theory of path abstractions.
§ WEIGHTED DIGRAPHS
Generalizing the constructions of <ref> to weighted digraphs introduces some subtleties. However, it also leads to simpler proofs.
As a preliminary step, consider the case of directed multigraphs. For a directed multigraph D, there is a unique minimal subdivision D^⊘ of D that is a digraph. Thus for v ∈ V(D), D^⊘↑ v is well-defined. For x,y ∈ V(D), write ν_D(x,y) for the number of walks in D from x to y, so that ν_D(x,y) < ∞ iff no walk from x to y meets a cycle of D (of course, this is automatically the case if D is acyclic [which also implies that D^⊘ and D^⊘↑ v are acyclic]). For x,y ∈ V(D), it is clear that ν_D(x,y) = ν_D^⊘(x,y), and moreover that any reasonable definition of D ↑ v should satisfy ν_D^⊘↑ v(x,y) = ν_D ↑ v(x,y). Meanwhile, note that (<ref>) is equivalent to
μ_D ↑ v(x,y) =
0 if (x,y) ∈ [V(D) ×{v}] ∪ [{v}× V(D)] ∪Δ(V(D));
μ_D(x,y) ∨ (μ_D(x,v) ∧μ_D(v,y)) otherwise.
These considerations indicate that for directed multigraphs, we should simply replace ∨ with + and ∧ with · à la
μ_D ↑ v(x,y) :=
0 if (x,y) ∈ [V(D) ×{v}] ∪ [{v}× V(D)] ∪Δ(V(D));
μ_D(x,y) + μ_D(x,v) ·μ_D(v,y) otherwise.
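For walk counting in a directed multigraph, i.e. over the semiring (ℕ, +, ·), the update above is a rank-one correction of the integer adjacency matrix. A Python sketch under that assumption (names ours), checking on a small DAG that walk counts between the remaining vertices are preserved:

import numpy as np

def weighted_detour(M, v):
    # mu'(x, y) = mu(x, y) + mu(x, v) * mu(v, y), with the row, column,
    # and diagonal entries at v zeroed out.
    M = M + np.outer(M[:, v], M[v, :])
    M[v, :] = 0
    M[:, v] = 0
    np.fill_diagonal(M, 0)
    return M

def walk_counts(M):
    # For nilpotent (acyclic) M, (I - M)^{-1} - I = M + M^2 + M^3 + ...
    n = M.shape[0]
    return np.rint(np.linalg.inv(np.eye(n) - M) - np.eye(n)).astype(int)

M = np.zeros((4, 4), dtype=int)
for x, y in [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]:
    M[x, y] = 1
print(walk_counts(M)[0, 3])                       # 3 walks from 0 to 3
print(walk_counts(weighted_detour(M, 1))[0, 3])   # still 3 after the detour at 1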
To generalize further from directed multigraphs to weighted digraphs, the addition and multiplication operations above can be taken to be those of a commutative semiring that the weights are presumed to inhabit, and μ_D can be taken to indicate the weights (or for the further generalization of a weighted directed multigraph, the appropriate sum of weights). However, while the RHS of (<ref>) is always well-defined, in many circumstances it leads to behavior that is more troublesome than for the special case of unweighted digraphs.
For convenience, we shall write, e.g., μ_xy := μ_D(x,y) in the rest of this section.
Theorem. (D ↑ v) ↑ w ≠ (D ↑ w) ↑ v iff μ_vwμ_wv ≠ 0 and there exists (x,y) ∈ V(D)^2 ∖ ([V(D) ×{v,w}] ∪ [{v,w}× V(D)] ∪Δ(V(D))) such that μ_xvμ_vy ≠ μ_xwμ_wy.
Proof. We have that
μ_(D ↑ v) ↑ w(x,y) :=
0 if (x,y) ∈ [V(D) ×{v,w}] ∪ [{v,w}× V(D)] ∪Δ(V(D));
μ_D ↑ v(x,y) + μ_D ↑ v(x,w) ·μ_D ↑ v(w,y) otherwise.
Applying (<ref>) twice shows that the nonzero entries of μ_(D ↑ v) ↑ w(x,y) are of the form
μ_xy + μ_xvμ_vy + μ_xwμ_wy + μ_xvμ_vwμ_wy + μ_xwμ_wvμ_vy + μ_xvμ_vwμ_wvμ_vy.
The expression above is not symmetric under the exchange of v and w owing purely to the last term (note that in the special case of digraphs addressed in <ref>, the concomitant Boolean semiring operations recover this lost symmetry as required since the last term is nonzero only if the second term is also). The conditions under which there exist x,y such that x ≠ y and μ_xvμ_vwμ_wvμ_vy ≠ μ_xwμ_wvμ_vwμ_wy are stated in the theorem.
Example. Consider the multigraph D shown in figure <ref>. With v = 1 and w = 4, the terms μ_xvμ_vwμ_wvμ_vy and μ_xwμ_wvμ_vwμ_wy respectively correspond to the matrices
[ 0; 0; 2; 1 ] · [ 1 ] · [ 1 ] · [ 0 1 0 1 ] = [ 0 0 0 0; 0 0 0 0; 0 2 0 2; 0 1 0 1 ]

and

[ 1; 0; 0; 0 ] · [ 1 ] · [ 1 ] · [ 1 0 0 0 ] = [ 1 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 0 ].
These have entries in V(D)^2 ∖ ([V(D) ×{v,w}] ∪ [{v,w}× V(D)] ∪Δ(V(D))) = {(2,3),(3,2)} that differ.
Corollary. If D is acyclic and U ⊆ V(D), then D ↑ U is well-defined and acyclic.
There may be situations of practical interest in which D is not acyclic, which necessarily complicates any criterion for establishing the existence of a well-defined detour/bypass. We proceed below to establish the most obvious criterion in this vein.
Lemma. If D has no 2-cycles at all and no 3-cycles intersecting v, then D ↑ v has no 2-cycles.
Proof. Suppose that μ_D ↑ v(x,y) ·μ_D ↑ v(y,x) ≠ 0. By hypothesis, μ_xyμ_yx = 0, so without loss of generality assume that μ_xy = 0. Now
μ_D ↑ v(x,y) ·μ_D ↑ v(y,x) = μ_xvμ_vy (μ_yx + μ_yvμ_vx) ≠ 0.
In particular, μ_xv ≠ 0 ≠ μ_vy.
If μ_yx = 0, then we must have that μ_xvμ_vx ≠ 0 ≠ μ_vyμ_yv, contradicting the assumption that D has no 2-cycles. If on the other hand μ_yx ≠ 0, then either μ_yvμ_vx = 0 or μ_yvμ_vx ≠ 0. In the first case, μ_xvμ_vyμ_yx ≠ 0, contradicting the assumption that D has no 3-cycles intersecting v. In the second case, we again contradict the assumption that D has no 2-cycles. Thus it must be that μ_D ↑ v(x,y) ·μ_D ↑ v(y,x) = 0, i.e., D ↑ v has no 2-cycles.
While it is tempting to spend the effort to recast the preceding lemma as the base case of an induction, the complexity of finding short cycles in digraphs suggests that any theorem actually resulting from such an exercise would be less useful in practice than simply checking online whether or not successive detours commute. For this reason we elect to move on to the following more useful result:
Theorem. If u, v, w ∈ V(D) are distinct, then (D ↑ u) / {v,w} = (D/{v,w}) ↑ u.
Proof. Write X := V(D) ∖{u,v,w}, so that μ_D takes the form
μ_D    u       v       w       X
u      0       μ_uv    μ_uw    μ_uX
v      μ_vu    0       μ_vw    μ_vX
w      μ_wu    μ_wv    0       μ_wX
X      μ_Xu    μ_Xv    μ_Xw    μ_XX

For a generic square matrix M and n-tuple z, let d(M)_j := M_jj and d(z)_jk := z_j δ_jk, i.e., d is the obvious notion of a diagonal map in both cases. Define M^∘ := M - d(d(M)), i.e., M with its diagonal zeroed out. We then have that μ_D ↑ u and μ_D / {v,w} are respectively

μ_D ↑ u    u    v                  w                  X
u          0    0                  0                  0
v          0    0                  μ_vw + μ_vuμ_uw    μ_vX + μ_vuμ_uX
w          0    μ_wv + μ_wuμ_uv    0                  μ_wX + μ_wuμ_uX
X          0    μ_Xv + μ_Xuμ_uv    μ_Xw + μ_Xuμ_uw    μ_XX + (μ_Xuμ_uX)^∘

and

μ_D / {v,w}    u              vw             X
u              0              μ_uv + μ_uw    μ_uX
vw             μ_vu + μ_wu    0              μ_vX + μ_wX
X              μ_Xu           μ_Xv + μ_Xw    μ_XX.

Consequently, μ_(D ↑ u) / {v,w} and μ_(D / {v,w}) ↑ u are respectively

μ_(D ↑ u) / {v,w}    u    vw                                   X
u                    0    0                                    0
vw                   0    0                                    μ_vX + μ_vuμ_uX + μ_wX + μ_wuμ_uX
X                    0    μ_Xv + μ_Xuμ_uv + μ_Xw + μ_Xuμ_uw    μ_XX + (μ_Xuμ_uX)^∘

and

μ_(D / {v,w}) ↑ u    u    vw                                 X
u                    0    0                                  0
vw                   0    0                                  μ_vX + μ_wX + (μ_vu + μ_wu)μ_uX
X                    0    μ_Xv + μ_Xw + μ_Xu(μ_uv + μ_uw)    μ_XX + (μ_Xuμ_uX)^∘.
These are obviously equal.
Corollary. If D is acyclic, then its path abstraction is well-defined.
§ RANDOM DIGRAPHS
For 0 ≤ p ≤ 1, let D_n,p denote the random digraph <cit.> with V(D_n,p) = [n] and independent probabilities ℙ(μ_D_n,p(x,y) = 1 | x ≠ y) ≡ p.
Let u ∈ [n]. According to (<ref>), there are two ways for the event μ_D_n,p⇈ u(x,y) = 1 to occur for x ≠ y: either (x,y) ∈ P(D_n,p)_u × S(D_n,p)_u, or (x,y) ∉ P(D_n,p)_u × S(D_n,p)_u but μ_D_n,p(x,y) = 1. These two subevents are disjoint, with the probability of the former equal to p^2 and the probability of the latter equal to (1-p^2)p, so we have that ℙ ( μ_D_n,p⇈ u(x,y) = 1 | x ≠ y ) = p^2 + (1-p^2)p =: f(p). It follows that for U ⊆ [n]
ℙ ( μ_D_n,p⇈ U(x,y) = 1 | x ≠ y ) = f^∘ |U|(p),
where a |U|-fold composition is indicated on the RHS (see figure <ref>). That is,
D_n,p⇈ U = D_n-|U|,f^∘ |U|(p).
This suggests the possibility of a renormalization group strategy for studying D_n,p (and percolation thresholds in particular), but we limit our discussion of this to a terse remark in <ref>.
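A quick Monte Carlo check of (<ref>) (a Python sketch with illustrative parameters of our own choosing; by symmetry we may take U to be the first k vertices):

import numpy as np

rng = np.random.default_rng(0)
f = lambda p: p**2 + (1 - p**2) * p

def bypass_set(M, U):
    M = M.copy()
    for v in U:
        M |= np.outer(M[:, v], M[v, :])
        np.fill_diagonal(M, False)
        M[v, :] = False
        M[:, v] = False
    keep = [i for i in range(M.shape[0]) if i not in set(U)]
    return M[np.ix_(keep, keep)]

n, p, k = 400, 0.01, 30
q = p
for _ in range(k):
    q = f(q)                         # f composed k times, applied to p

densities = []
for _ in range(50):
    M = rng.random((n, n)) < p       # a realization of D_{n,p}
    np.fill_diagonal(M, False)
    B = bypass_set(M, range(k))
    m = n - k
    densities.append(B.sum() / (m * (m - 1)))
print(np.mean(densities), q)         # the two should agree closely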
A qualitative approximation for f^∘ N(p) is readily obtained by the following tactic described in <cit.>: temporarily writing p(N) := f^∘ N(p), we have that p(N+1) - p(N) = p^2(N) - p^3(N). Treating N as a continuous parameter yields the approximation dp/dN ≈ p^2 - p^3. Writing F(p) := log(p/(1-p)) - 1/p and noting that dF/dp = 1/(p^2-p^3) yields that
f^∘ N(p) ≈ F^-1(N + F(p)).
A sophisticated but interesting restatement of (<ref>) is that F is approximately equivariant with respect to the ℤ-actions on [0,1] and ℝ given respectively by iterating f and adding.
[
The approximate ℤ-equivariance expressed in (<ref>) obviously generalizes to other functions f and F such that F'(p) = (f(p)-p)^-1. However, we are not aware of results corresponding to this fact or its further generalization to higher-order differences.
]
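Numerically, the approximation is quite good already for modest N; a Python sketch (inverting F by bisection, since F is increasing on (0,1)):

import numpy as np

f = lambda p: p**2 + (1 - p**2) * p
F = lambda p: np.log(p / (1 - p)) - 1 / p

def F_inv(y, lo=1e-12, hi=1 - 1e-12):
    # F is increasing on (0, 1), so bisection suffices.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if F(mid) < y else (lo, mid)
    return 0.5 * (lo + hi)

p, N = 0.05, 15
q = p
for _ in range(N):
    q = f(q)
print(q, F_inv(N + F(p)))            # iterated f vs. the approximation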
Meanwhile, the event μ_D_n,p/U(x,{U}) = 1 occurs for x ≠ {U} iff there is some u ∈ U such that μ_D_n,p(x,u) = 1. Equivalently, the event μ_D_n,p/U(x,{U}) = 0 occurs iff μ_D_n,p(x,u) = 0 for all u ∈ U, and this event clearly has probability (1-p)^|U|. Thus
ℙ ( μ_D_n,p/U(x,{U}) = 1 | x ≠ {U} ) = 1 - (1-p)^|U|.
More generally, if π∈Π_n, then
ℙ ( μ_D_n,p/{π^(1),…,π^(|π|)}({π^(j)},{π^(k)}) = 1 | j ≠ k ) = 1 - (1-p)^|π^(j)| · |π^(k)|.
We can combine the preceding observations into the following
Theorem. If π∈Π_≤ n, then
ℙ ( μ_D_n,p⇈π({π^(j)},{π^(k)}) = 1 | j ≠ k ) =
1 - ( 1-f^∘ (n-|supp π|)(p) )^|π^(j)| · |π^(k)|.
In particular, the expected number of arcs in D_n,p⇈π is the sum over j ≠ k of the RHS of (<ref>) (note that the number of vertices is just |π|).
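The expected arc count is straightforward to evaluate; a Python sketch with illustrative parameters of our own choosing (blocks is the list of block sizes |π^(j)|):

def f(p):
    return p**2 + (1 - p**2) * p

def expected_arcs(n, p, blocks):
    # Sum (<ref>) over ordered pairs j != k of blocks of pi; the number
    # of bypassed vertices is n - |supp pi| = n - sum(blocks).
    q = p
    for _ in range(n - sum(blocks)):
        q = f(q)
    return sum(1 - (1 - q)**(bj * bk)
               for j, bj in enumerate(blocks)
               for k, bk in enumerate(blocks) if j != k)

print(expected_arcs(10, 0.1, [3, 2, 2]))   # pi with blocks of sizes 3, 2, 2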
Example. Let n = 28 and p = 0.05, and suppose that the graph D in figure <ref> is a realization of D_n,p. (Note that D has |A(D)| = ∑_x,yμ_D(x,y) = 40 arcs and hence |A(D)|/n(n-1)≈ 0.0529 ≈ p. Furthermore, 12 vertices have indegree 2 and the rest have indegree 1; similarly, 12 vertices have outdegree 2 and the rest have outdegree 1, so D_n,p is at least superficially appropriate here as a model for D.) The partial partition π corresponding to the path abstraction in figure <ref> has n - |supp π| = 4, f^4(p) ≈ 0.0578, and (|π^(1)|,…,|π^(|π|)|) = (4,3,2,2,1,2,1,2,2,3,2). The expected number of arcs in D_n,p⇈π turns out to be approximately 25.9635, whereas the actual number of arcs in D ⇈π is 27. (Taking instead the value p = 0.0529 yields an expected number of arcs approximately equal to 27.4466.)
Example. Let n = 1000, p = 0.01, and consider U a uniformly random subset of [n] subject to |U| = 50. Figure <ref> demonstrates (<ref>) using the estimate p̂ = |A(D_n,p)|/n(n-1) for 1000 realizations of D_n,p⇈ U.
Example. As an example for which D_n,p is a manifestly inappropriate model, consider the digraph D' with vertices corresponding to airports and arcs corresponding to regularly scheduled passenger flights. We constructed this digraph using data accessed from on 3 May 2016, yielding n' := |V(D')| = 8107 and |A(D')| = 37187 arcs, corresponding to an empirical arc probability of p' := |A(D')|/n'(n'-1) ≈ 5.6588 · 10^-4. We define ℓ' : V(D') →Λ' to be the map coloring airports by country, so that |Λ'| = 240. The in- and out-degrees are very far from uniform, as illustrated in figure <ref>.
Evidently D_n',p' is a very poor model for D'. If π' is the partial partition corresponding to country colors excluding the United States, then |A(D' ⇈π')| = 11036, whereas 𝔼(|A(D_n',p'⇈π')|) ≈ 5125.
In order to get better approximations in such situations it would be necessary to consider a more general random digraph, e.g. the one introduced in <cit.> (ignoring loops). However, the proof of the preceding theorem exploited the commutativity of contracting and bypassing vertices in an essential way that does not generalize to the random digraph of <cit.>. For this reason, obtaining a suitable generalization of the theorem appears to require substantially more effort. Furthermore, applying the resulting theorem would appear to require the same sort of set-theoretic operations as actually constructing the path abstraction outright, largely negating its utility as a predictive tool.
§ TEMPORAL NETWORKS
Digraphs admit a natural temporal generalization called directed temporal contact networks (DTCNs). <cit.> A DTCN with vertex set V ≡ [n] is a pair (𝒟,δ) where 𝒟 is a finite nonempty set and δ : 𝒟→ (V^2 ∖Δ(V)) ×ℝ is injective. The source, target, and time maps (respectively denoted s, t, and τ) are defined so that the following diagram commutes:
[Commutative diagram: δ embeds 𝒟 in (V^2 ∖ Δ(V)) × ℝ, which includes into V^2 × ℝ via i; the maps s × t : 𝒟 → V^2 and τ : 𝒟 → ℝ are the composites of δ with the projections onto V^2 and ℝ, respectively.]
That is, each contact c ∈𝒟 corresponds to a unique triple (s(c),t(c),τ(c)), and when convenient we identify contacts and their corresponding triples. We may economically indicate the DTCN (𝒟,δ) merely as 𝒟 or δ; context should suffice to remove any potential for ambiguity here. There is an obvious notion of a temporally coherent path which we do not bother to write out formally.
A naive attempt to generalize the constructions of <ref> to DTCNs might define 𝒟↑ v ≡𝒟⇈ v as
{c ∈𝒟 : s(c) ≠ v ≠ t(c)} ∪ { (j,k,τ_vk) : (j,v,τ_jv), (v,k,τ_vk) ∈𝒟 ∧ τ_jv ≤ τ_vk ∧ j ≠ k }.
However, such a definition yields undesirable behavior, as we show in the following
Example. Consider the DTCN 𝒟 := {(1,4,τ_1), (5,4,τ_2), (2,5,τ_3), (4,3,τ_4)} with τ_1 < τ_2 < τ_3 < τ_4. Using the naive definition (<ref>) for 𝒟↑ v leads to (𝒟↑ 4) ↑ 5 = {(1,3,τ_4),(2,3,τ_4)} ≠ (𝒟↑ 5) ↑ 4 = {(1,3,τ_4)}. As figure <ref> illustrates, only the latter corresponds to the desirable result for 𝒟↑{4,5}.
An approach that manifestly yields the desired construction deals with the temporal digraph of 𝒟 (see figure <ref> for an example), defined as the digraph T(𝒟) with vertex and arc sets
V(T(𝒟)) := {(v, -∞) : v ∈ V}∪{(v,τ(c)) : (v,c) ∈ V ×𝒟 ∧ (s(c) = v ∨ t(c) = v) }∪{(v, ∞) : v ∈ V}
A(T(𝒟)) := {((s(c),τ(c)),(t(c),τ(c))) : c ∈𝒟}∪{ ((v,τ_j-1^@v),(v,τ_j^@v)) : v ∈ V, j ∈ [|𝒟@v|-1] }
where the temporal fiber at v is
𝒟@v := {-∞}∪{τ(c) : c ∈𝒟 ∧ s(c) = v }∪{τ(c) : c ∈𝒟 ∧ t(c) = v }∪{∞}≡{τ_j^@v}_j = 0^|𝒟@v|-1.
Note that |V(T(𝒟))| = ∑_v |𝒟@v| ≤ 2|V| + 2|𝒟| and |A(T(𝒟))| = |V(T(𝒟))| - |V| + |𝒟| ≤ |V| + 3 |𝒟|, so that the temporal digraph of a DTCN can be formed with only linear overhead.
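A Python sketch of the temporal digraph construction (contacts encoded as (source, target, time) triples and ±∞ as floats; the encoding is ours):

from math import inf

def temporal_digraph(V, contacts):
    # Temporal fiber at v: -inf, the times of contacts at v, +inf.
    fiber = {v: {-inf, inf} for v in V}
    for s, t, tau in contacts:
        fiber[s].add(tau)
        fiber[t].add(tau)
    vertices = {(v, tau) for v in V for tau in fiber[v]}
    spatial = {((s, tau), (t, tau)) for s, t, tau in contacts}
    temporal = set()
    for v in V:
        ts = sorted(fiber[v])
        temporal |= {((v, a), (v, b)) for a, b in zip(ts, ts[1:])}
    return vertices, spatial | temporal

# The four-contact example above, with tau_1 < ... < tau_4 taken as 1, ..., 4:
V = {1, 2, 3, 4, 5}
contacts = [(1, 4, 1.0), (5, 4, 2.0), (2, 5, 3.0), (4, 3, 4.0)]
verts, arcs = temporal_digraph(V, contacts)
print(len(verts), len(arcs))   # 18, 17: attaining 2|V|+2|D| and |V|+3|D|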
Call the two sets in the union on the RHS of (<ref>) the spatial and temporal arcs of T(𝒟), respectively. Now for U ⊆ V, consider
T(𝒟) ⇈⋃_u ∈ U ( {u}×𝒟@u ).
Each of the non-temporal arcs in this digraph is of the form ((v,τ_j^@v),(v',τ_j'^@v')) for v ≠ v' and corresponds to a triple (v,v',τ_j^@v ∨ τ_j'^@v'), where ∨ here denotes the larger of the two times. The set of all such triples defines 𝒟↑ U ≡𝒟⇈ U. That is,
𝒟↑ U ≡𝒟⇈ U := { (v,v',τ_j^@v ∨ τ_j'^@v') : v ≠ v' ∧ ((v,τ_j^@v),(v',τ_j'^@v')) ∈ A ( T(𝒟) ⇈⋃_u ∈ U ( {u}×𝒟@u ) ) }.
Proposition. If τ is a constant map, then 𝒟↑ U can be identified with D ↑ U, where here D indicates the obvious digraph corresponding to 𝒟.
Example. Again, consider the DTCN 𝒟 := {(1,4,τ_1), (5,4,τ_2), (2,5,τ_3), (4,3,τ_4)} with τ_1 < τ_2 < τ_3 < τ_4. Using (<ref>) yields 𝒟↑{4,5} = {(1,3,τ_4)}, as desired. However, it is still the case that (𝒟↑{4}) ↑{5} = {(1,3,τ_4),(2,3,τ_4)} ≠ (𝒟↑{5}) ↑{4} = {(1,3,τ_4)}, just as before.
The preceding examples show that although (<ref>) is certainly a reasonable definition for 𝒟↑ U, any reasonable definition of detours/bypasses for DTCNs will lead to noncommutativity that is not present for digraphs. However, there is still a well-defined notion of path abstraction for DTCNs (which necessarily will not commute with successive detours/bypasses) due to the following
Lemma. Detours/bypasses commute with vertex contractions for DTCNs.
Sketch of proof. Let U ∩{v,w} = ∅ and U ∪{v,w}⊆ V. The vertex contraction 𝒟 / {v,w} is defined in the obvious way: triples of the form (v,x,τ_vx) and (x,v,τ_xv) for x ∉{v,w} are replaced with ({v,w},x,τ_vx) and (x,{v,w},τ_xv), respectively, and similarly for triples involving w. Thus (𝒟 / {v,w})@{v,w} = (𝒟@v) ∪ (𝒟@w), so additional vertices and temporal arcs are generated in the formation of T(𝒟 / {v,w}). However, in the formation of T(𝒟), replacing both of the temporal fibers 𝒟@v and 𝒟@w with (𝒟 / {v,w})@{v,w} has no material effect on the subsequent formation of 𝒟↑ U. The lemma now reduces to the already established version for digraphs by contracting vertices with the same time coordinate in each copy of (𝒟 / {v,w})@{v,w}.
The surprising noncommutativity of detours/bypasses for DTCNs is not the only difference from the situation for digraphs.
Example. There are at least two random DTCNs that are obvious analogues of D_n,p (cf. <ref>):
* 𝒟_n,p^(u), with sources and targets corresponding to D_n,p and times uniformly random in [0,1];
* 𝒟_n,p^(P), with contacts between x ≠ y Poisson distributed over [0,1] with rate p. That is, the probability of a contact between x ≠ y in an interval of infinitesimal duration dτ is given by p · dτ.
It is easy to see that both of these have an expected number of contacts equal to p · n (n-1). Furthermore, for the regime of interest p ≪ 1, these two random DTCNs can be expected to behave quite similarly, akin to the Erdős-Rényi and Gilbert random graphs.
Rather than attempting to develop analytical results, we proceed directly to numerics. The basic observation from figure <ref> is that the number of contacts in 𝒟_n,p^(·)↑ U is much less than the number of arcs in D_n,p↑ U, because there are fewer temporally coherent paths between two vertices in 𝒟_n,p^(·) than there are ordinary paths between the same two vertices in D_n,p. In particular, given distinct vertices {x_j}_j = 0^ℓ, the probability that the path x_0 →…→ x_ℓ exists in D_n,p is p^ℓ, whereas the probability that a temporally coherent version of the same path exists in 𝒟_n,p^(·) is p^ℓ/ℓ!.
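The 1/ℓ! factor is just the probability that ℓ i.i.d. uniform times happen to be increasing along the path, which is easy to confirm by simulation (a Python sketch for the uniform model 𝒟_n,p^(u)):

import numpy as np
from math import factorial

rng = np.random.default_rng(0)
l = 4
times = rng.random((200000, l))      # i.i.d. uniform times along a path
coherent = np.all(np.diff(times, axis=1) > 0, axis=1).mean()
print(coherent, 1 / factorial(l))    # both ~ 0.0417 for l = 4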
The author is grateful to Mukesh Dalal for proposing the idea of the paper, and to Yingbo Song for patiently allowing it to unfold. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) and the Air Force Research Laboratory (AFRL). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA or AFRL.
§ CASE ANALYSIS PROOF OF (D ↑ V) ↑ W = (D ↑ W) ↑ V FOR DIGRAPHS
There are five cases: 1) w = v; 2) w ∈ V(D)_v^-; 3) w ∈ V(D)_v^±; 4) w ∈ V(D)_v^+, and 5) w ∈ V(D)_v^0. Case 1) is trivial, but it is still worth observing that V(D ↑ v)_v^0 = V(D) ∖{v}, whereupon we formally obtain the intuitively obvious fact that (D ↑ v) ↑ v = D ↑ v. Note that case 2) is equivalent to v ∈ V(D)_w^+, so that cases 2) and 4) are equivalent by symmetry; we will address the latter.
Before proceeding with the remaining cases 3), 4), and 5), let us first define
Z(D)_v := [V(D) ×{v}] ∪ [{v}× V(D)]
and
U(D)_v := P(D)_v × S(D)_v.
By construction, we have that μ_D ↑ v(Z_v) ≡ 0 and μ_D ↑ v(U_v ∖Δ(V(D))) ≡ 1. Now
Z(D ↑ v)_w ∪ Z(D)_v = [V(D) ×{v,w}] ∪ [{v,w}× V(D)] = Z(D ↑ w)_v ∪ Z(D)_w
so it suffices to show that
U(D ↑ v)_w ∪ [U(D)_v ∖ Z(D ↑ v)_w] = U(D ↑ w)_v ∪ [U(D)_w ∖ Z(D ↑ w)_v]
or equivalently (writing △ as usual for symmetric difference)
[U(D ↑ v)_w ∪ U(D)_v] △ [U(D ↑ w)_v ∪ U(D)_w] ⊆ [V(D) ×{v,w}] ∪ [{v,w}× V(D)].
Case 3: w ∈ V(D)_v^±. In this case (see figure <ref> for a cartoon and note that) we have the following identities:
P(D ↑ v)_w = P(D)_v ∪ [P(D)_w ∖{v}];
S(D ↑ v)_w = S(D)_v ∪ [S(D)_w ∖{v}];
P(D ↑ w)_v = [P(D)_v ∖{w}] ∪ P(D)_w;
S(D ↑ w)_v = [S(D)_v ∖{w}] ∪ S(D)_w.
From these it follows that
U(D ↑ v)_w ∪ U(D)_v ≡ [P(D ↑ v)_w × S(D ↑ v)_w] ∪ [P(D)_v × S(D)_v]
= [P(D)_v × S(D)_v] ∪ [P(D)_v × (S(D)_w ∖{v})]
∪ [(P(D)_w ∖{v}) × S(D)_v] ∪ [(P(D)_w ∖{v}) × (S(D)_w ∖{v})]
and by symmetry
U(D ↑ w)_v ∪ U(D)_w = [P(D)_w × S(D)_w] ∪ [P(D)_w × (S(D)_v ∖{w})]
∪ [(P(D)_v ∖{w}) × S(D)_w] ∪ [(P(D)_v ∖{w}) × (S(D)_v ∖{w})]
so upon inspection (<ref>) is satisfied and case 3) is done.
Case 4: w ∈ V(D)_v^+. In this case (see figure <ref> for a cartoon and note that) we have the following identities:
P(D ↑ v)_w = P(D)_v ∪ [P(D)_w ∖{v}];
S(D ↑ v)_w = S(D)_w;
P(D ↑ w)_v = P(D)_v;
S(D ↑ w)_v = [S(D)_v ∖{w}] ∪ S(D)_w.
From these it follows that
U(D ↑ v)_w ∪ U(D)_v ≡ [P(D ↑ v)_w × S(D ↑ v)_w] ∪ [P(D)_v × S(D)_v]
= [P(D)_v × S(D)_w] ∪ [(P(D)_w ∖{v}) × S(D)_w] ∪ [P(D)_v × S(D)_v]
and
U(D ↑ w)_v ∪ U(D)_w ≡ [P(D ↑ w)_v × S(D ↑ w)_v] ∪ [P(D)_w × S(D)_w]
= [P(D)_v × (S(D)_v ∖{w})] ∪ [P(D)_v × S(D)_w] ∪ [P(D)_w × S(D)_w]
so upon inspection (<ref>) is satisfied and case 4) is done.
Case 5: w ∈ V(D)_v^0. In this case we have the following identities:
P(D ↑ v)_w = P(D)_w;
S(D ↑ v)_w = S(D)_w;
P(D ↑ w)_v = P(D)_v;
S(D ↑ w)_v = S(D)_v.
From these (<ref>) follows trivially, so case 5) is done.
§ CASE ANALYSIS PROOF OF (D ↑ U) / {V,W} = (D / {V,W}) ↑ U FOR DIGRAPHS
The result is trivial unless v and w belong to different sets of the form V(D)_u^∙. It also suffices to show the result for a modified contraction operation (denoted ⊛ below) that yields identical copies of contracted vertices (note that this is essentially the same technical simplification as dealing with detours instead of bypasses). By symmetry, we need only consider the six cases where (v,w) is an element of one of the following products: V(D)_u^- × V(D)_u^±, V(D)_u^- × V(D)_u^+, V(D)_u^- × V(D)_u^0, V(D)_u^±× V(D)_u^+, V(D)_u^±× V(D)_u^0, and V(D)_u^+ × V(D)_u^0.
Consider for instance the first of these cases, where v ∈ V(D)_u^- and w ∈ V(D)_u^±. Writing vw and vw' for the identical copies of contracted vertices, V_u;v,w^∙ as a temporary shorthand for V(D)_u^∙∖{v,w}, and D ⊛ {v,w} for the modified contraction, we have the following adjacency matrices (with irrelevant entries omitted):
μ_D u v w V_u;v,w^- V_u;v,w^± V_u;v,w^+ V_u;v,w^0
u 0 0 1 0 1 1 0
v 1 0 · μ_v;- · · μ_v;0
w 1 · 0 μ_w;- · · μ_w;0
V_u;v,w^- 1 · · · · · ·
V_u;v,w^± 1 · · · · · ·
V_u;v,w^+ 0 μ_+;v μ_+;w · · · ·
V_u;v,w^0 0 μ_0;v μ_0;w · · · ·
μ_D ↑ u u v w V_u;v,w^- V_u;v,w^± V_u;v,w^+ V_u;v,w^0
u 0 0 0 0 0 0 0
v 0 0 1 μ_v;- 1 1 μ_v;0
w 0 · 0 μ_w;- 1 1 μ_w;0
V_u;v,w^- 0 · 1 · 1 1 ·
V_u;v,w^± 0 · 1 · 1-I 1 ·
V_u;v,w^+ 0 μ_+;v μ_+;w · · · ·
V_u;v,w^0 0 μ_0;v μ_0;w · · · ·
μ_D ⊛ {v,w}    u    vw               vw'              V_u;v,w^-        V_u;v,w^±    V_u;v,w^+    V_u;v,w^0
u              0    1                1                0                1            1            0
vw             1    0                0                μ_v;- ∨ μ_w;-    ·            ·            μ_v;0 ∨ μ_w;0
vw'            1    0                0                μ_v;- ∨ μ_w;-    ·            ·            μ_v;0 ∨ μ_w;0
V_u;v,w^-      1    ·                ·                ·                ·            ·            ·
V_u;v,w^±      1    ·                ·                ·                ·            ·            ·
V_u;v,w^+      0    μ_+;v ∨ μ_+;w    μ_+;v ∨ μ_+;w    ·                ·            ·            ·
V_u;v,w^0      0    μ_0;v ∨ μ_0;w    μ_0;v ∨ μ_0;w    ·                ·            ·            ·

whereupon both (D ↑ u) ⊛ {v,w} and (D ⊛ {v,w}) ↑ u can be seen to have the adjacency matrix

               u    vw               vw'              V_u;v,w^-        V_u;v,w^±    V_u;v,w^+    V_u;v,w^0
u              0    0                0                0                0            0            0
vw             0    0                0                μ_v;- ∨ μ_w;-    1            1            μ_v;0 ∨ μ_w;0
vw'            0    0                0                μ_v;- ∨ μ_w;-    1            1            μ_v;0 ∨ μ_w;0
V_u;v,w^-      0    1                1                ·                1            1            ·
V_u;v,w^±      0    1                1                ·                1-I          1            ·
V_u;v,w^+      0    μ_+;v ∨ μ_+;w    μ_+;v ∨ μ_+;w    ·                ·            ·            ·
V_u;v,w^0      0    μ_0;v ∨ μ_0;w    μ_0;v ∨ μ_0;w    ·                ·            ·            ·
and this case is done. The other cases are entirely similar (in fact, the first, second, fourth and fifth cases are nearly identical).
§ REMARK ON RENORMALIZATION
We recall two theorems described in <cit.> regarding D_n,p:
Theorem. If c >1 is constant, then with high probability D_n,c/n contains a unique strong component of size ≈ (1-x/c)^2 n, where x < 1 solves xe^-x = ce^-c. Furthermore, all other strong components are of logarithmic size.
Theorem. lim_n ℙ(D_n,p is strongly connected) = exp(-2e^-lim_n (pn-log n)).
These theorems suggest studying the behavior of (n-N) f^∘ N(c/n) and (n-N) f^∘ N((c+log n)/n) for c constant and 0 ≤ N < n. Numerics indicate that for c > 1 the first of these is greater than unity except for N ≈ n, and the second is always greater than unity: see figures <ref> and <ref>.
Bang-Jensen, J., and Gutin, G. Digraphs: Theory, Algorithms and Applications. 2nd ed. Springer (2009).
Bloznelis, M., Götze, F., and Jaworski, J. "Birth of a strongly connected giant in an inhomogeneous random digraph." J. Appl. Prob. 49, 601 (2012).
Cheney, J., Acar, U. A., and Perera, R. "Toward a theory of self-explaining computation." In Tannen, V. et al., eds. In Search of Elegance in the Theory and Practice of Computation. Springer (2013).
Count Iblis (http://math.stackexchange.com/users/155436/count-iblis), "Good closed form approximation for iterates of x^2+(1-x^2)x", URL (version: 2016-07-26): http://math.stackexchange.com/q/1872168
Dowling, T. A. "A class of geometric lattices based on finite groups." J. Combinatorial Th. (B) 14, 61 (1973).
Frieze, A. and Karoński, M. Introduction to Random Graphs. Cambridge (2016).
Glazek, K. A Guide to the Literature on Semirings and their Applications in Mathematics and Information Sciences. Springer (2002).
Hanlon, P., Hersh, P., and Shareshian, J. "A GL_n(q) analogue of the partition lattice." Preprint (2009).
Knuth, D. E. The Art of Computer Programming, vol. 4, fascicle 3. Addison Wesley (2005).
Mac Lane, S. Categories for the Working Mathematician. 2nd ed. Springer (1978).
Masuda, N. and Lambiotte, R. A Guide to Temporal Networks. World Scientific (2016).
Naumovich, G., Clarke, L. A., and Cobleigh, J. M. "Using partial order techniques to improve performance of data flow analysis based verification." PASTE (1999).
Nielson, F., Nielson, H. R., and Hankin, C. Principles of Program Analysis. Springer (2010).
Schwartz, E. J., Avgerinos, T., and Brumley, D. "All you ever wanted to know about dynamic taint analysis and forward symbolic execution (but might have been afraid to ask)." IEEE Symposium on Security and Privacy (2010).
Speyer, D. and Sturmfels, B. "Tropical mathematics." Math. Mag. 82, 163 (2009).